Mirror of https://git.anonymousland.org/anonymousland/synapse.git
Synced 2025-08-14 22:55:28 -04:00

Merge remote-tracking branch 'upstream/release-v1.60'

This commit is contained in: commit 8975980844

183 changed files with 5167 additions and 1948 deletions
@ -6,3 +6,6 @@ aff1eb7c671b0a3813407321d2702ec46c71fa56
# Update black to 20.8b1 (#9381).
0a00b7ff14890987f09112a2ae696c61001e6cf1

# Convert tests/rest/admin/test_room.py to unix file endings (#7953).
c4268e3da64f1abb5b31deaeb5769adb6510c0a7
112 CHANGES.md
@ -1,3 +1,115 @@
Synapse 1.60.0rc1 (2022-05-24)
==============================

Features
--------

- Measure the time taken in spam-checking callbacks and expose those measurements as metrics. ([\#12513](https://github.com/matrix-org/synapse/issues/12513))
- Add a `default_power_level_content_override` config option to set default room power levels per room preset. ([\#12618](https://github.com/matrix-org/synapse/issues/12618))
- Add support for [MSC3787: Allowing knocks to restricted rooms](https://github.com/matrix-org/matrix-spec-proposals/pull/3787). ([\#12623](https://github.com/matrix-org/synapse/issues/12623))
- Send `USER_IP` commands on a different Redis channel, in order to reduce traffic to workers that do not process these commands. ([\#12672](https://github.com/matrix-org/synapse/issues/12672), [\#12809](https://github.com/matrix-org/synapse/issues/12809))
- Synapse will now reload [cache config](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#caching) when it receives a [SIGHUP](https://en.wikipedia.org/wiki/SIGHUP) signal. ([\#12673](https://github.com/matrix-org/synapse/issues/12673))
- Add a config option to allow for auto-tuning of caches. ([\#12701](https://github.com/matrix-org/synapse/issues/12701))
- Update [MSC2716](https://github.com/matrix-org/matrix-spec-proposals/pull/2716) implementation to process marker events from the current state to avoid markers being lost in timeline gaps for federated servers which would cause the imported history to be undiscovered. ([\#12718](https://github.com/matrix-org/synapse/issues/12718))
- Add a `drop_federated_event` callback to `SpamChecker` to disregard inbound federated events before they take up much processing power, in an emergency. ([\#12744](https://github.com/matrix-org/synapse/issues/12744))
- Implement [MSC3818: Copy room type on upgrade](https://github.com/matrix-org/matrix-spec-proposals/pull/3818). ([\#12786](https://github.com/matrix-org/synapse/issues/12786), [\#12792](https://github.com/matrix-org/synapse/issues/12792))
- Update to `check_event_for_spam`. Deprecate the current callback signature, replace it with a new signature that is both less ambiguous (replacing booleans with explicit allow/block) and more powerful (ability to return explicit error codes). ([\#12808](https://github.com/matrix-org/synapse/issues/12808))


Bugfixes
--------

- Fix a bug introduced in Synapse 1.7.0 that would prevent events from being sent to clients if there's a retention policy in the room when the support for retention policies is disabled. ([\#12611](https://github.com/matrix-org/synapse/issues/12611))
- Fix a bug introduced in Synapse 1.57.0 where `/messages` would throw a 500 error when querying for a non-existent room. ([\#12683](https://github.com/matrix-org/synapse/issues/12683))
- Add a unique index to `state_group_edges` to prevent duplicates being accidentally introduced and the consequential impact to performance. ([\#12687](https://github.com/matrix-org/synapse/issues/12687))
- Fix a long-standing bug where an empty room would be created when a user with an insufficient power level tried to upgrade a room. ([\#12696](https://github.com/matrix-org/synapse/issues/12696))
- Fix a bug introduced in Synapse 1.30.0 where empty rooms could be automatically created if a monthly active users limit is set. ([\#12713](https://github.com/matrix-org/synapse/issues/12713))
- Fix push to dismiss notifications when read on another client. Contributed by @SpiritCroc @ Beeper. ([\#12721](https://github.com/matrix-org/synapse/issues/12721))
- Fix poor database performance when reading the cache invalidation stream for large servers with lots of workers. ([\#12747](https://github.com/matrix-org/synapse/issues/12747))
- Delete events from the `federation_inbound_events_staging` table when a room is purged through the admin API. ([\#12770](https://github.com/matrix-org/synapse/issues/12770))
- Give a meaningful error message when a client tries to create a room with an invalid alias localpart. ([\#12779](https://github.com/matrix-org/synapse/issues/12779))
- Fix a bug introduced in 1.43.0 where a file (`providers.json`) was never closed. Contributed by @arkamar. ([\#12794](https://github.com/matrix-org/synapse/issues/12794))
- Fix a long-standing bug where finished log contexts would be re-started when failing to contact remote homeservers. ([\#12803](https://github.com/matrix-org/synapse/issues/12803))
- Fix a bug, introduced in Synapse 1.21.0, that led to media thumbnails being unusable before the index has been added in the background. ([\#12823](https://github.com/matrix-org/synapse/issues/12823))


Updates to the Docker image
---------------------------

- Fix the docker file after a dependency update. ([\#12853](https://github.com/matrix-org/synapse/issues/12853))


Improved Documentation
----------------------

- Fix a typo in the Media Admin API documentation. ([\#12715](https://github.com/matrix-org/synapse/issues/12715))
- Update the OpenID Connect example for Keycloak to be compatible with newer versions of Keycloak. Contributed by @nhh. ([\#12727](https://github.com/matrix-org/synapse/issues/12727))
- Fix typo in server listener documentation. ([\#12742](https://github.com/matrix-org/synapse/issues/12742))
- Link to the configuration manual from the welcome page of the documentation. ([\#12748](https://github.com/matrix-org/synapse/issues/12748))
- Fix typo in 'run_background_tasks_on' option name in configuration manual documentation. ([\#12749](https://github.com/matrix-org/synapse/issues/12749))
- Add information regarding the `rc_invites` ratelimiting option to the configuration docs. ([\#12759](https://github.com/matrix-org/synapse/issues/12759))
- Add documentation for cancellation of request processing. ([\#12761](https://github.com/matrix-org/synapse/issues/12761))
- Recommend using docker to run tests against postgres. ([\#12765](https://github.com/matrix-org/synapse/issues/12765))
- Add missing user directory endpoint to the generic worker documentation. Contributed by @olmari. ([\#12773](https://github.com/matrix-org/synapse/issues/12773))
- Add additional info to documentation of config option `cache_autotuning`. ([\#12776](https://github.com/matrix-org/synapse/issues/12776))
- Update configuration manual documentation to document size-related suffixes. ([\#12777](https://github.com/matrix-org/synapse/issues/12777))
- Fix invalid YAML syntax in the example documentation for the `url_preview_accept_language` config option. ([\#12785](https://github.com/matrix-org/synapse/issues/12785))


Deprecations and Removals
-------------------------

- Require a body in POST requests to `/rooms/{roomId}/receipt/{receiptType}/{eventId}`, as required by the [Matrix specification](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidreceiptreceipttypeeventid). This breaks compatibility with Element Android 1.2.0 and earlier: users of those clients will be unable to send read receipts. ([\#12709](https://github.com/matrix-org/synapse/issues/12709))


Internal Changes
----------------

- Improve event caching mechanism to avoid having multiple copies of an event in memory at a time. ([\#10533](https://github.com/matrix-org/synapse/issues/10533))
- Add some type hints to datastore. ([\#12477](https://github.com/matrix-org/synapse/issues/12477), [\#12717](https://github.com/matrix-org/synapse/issues/12717), [\#12753](https://github.com/matrix-org/synapse/issues/12753))
- Preparation for faster-room-join work: return subsets of room state which we already have, immediately. ([\#12498](https://github.com/matrix-org/synapse/issues/12498))
- Replace string literal instances of stream key types with typed constants. ([\#12567](https://github.com/matrix-org/synapse/issues/12567))
- Add `@cancellable` decorator, for use on endpoint methods that can be cancelled when clients disconnect. ([\#12586](https://github.com/matrix-org/synapse/issues/12586))
- Add ability to cancel disconnected requests to `SynapseRequest`. ([\#12588](https://github.com/matrix-org/synapse/issues/12588))
- Add a helper class for testing request cancellation. ([\#12630](https://github.com/matrix-org/synapse/issues/12630))
- Improve documentation of the `synapse.push` module. ([\#12676](https://github.com/matrix-org/synapse/issues/12676))
- Refactor functions on `PushRuleEvaluatorForEvent`. ([\#12677](https://github.com/matrix-org/synapse/issues/12677))
- Preparation for database schema simplifications: stop writing to `event_reference_hashes`. ([\#12679](https://github.com/matrix-org/synapse/issues/12679))
- Remove code which updates unused database column `application_services_state.last_txn`. ([\#12680](https://github.com/matrix-org/synapse/issues/12680))
- Refactor `EventContext` class. ([\#12689](https://github.com/matrix-org/synapse/issues/12689))
- Remove an unneeded class in the push code. ([\#12691](https://github.com/matrix-org/synapse/issues/12691))
- Consolidate parsing of relation information from events. ([\#12693](https://github.com/matrix-org/synapse/issues/12693))
- Capture the `Deferred` for request cancellation in `_AsyncResource`. ([\#12694](https://github.com/matrix-org/synapse/issues/12694))
- Fix an incorrect type hint for `Filter._check_event_relations`. ([\#12695](https://github.com/matrix-org/synapse/issues/12695))
- Respect the `@cancellable` flag for `DirectServe{Html,Json}Resource`s. ([\#12698](https://github.com/matrix-org/synapse/issues/12698))
- Respect the `@cancellable` flag for `RestServlet`s and `BaseFederationServlet`s. ([\#12699](https://github.com/matrix-org/synapse/issues/12699))
- Respect the `@cancellable` flag for `ReplicationEndpoint`s. ([\#12700](https://github.com/matrix-org/synapse/issues/12700))
- Convert namespace class `Codes` into a string enum. ([\#12703](https://github.com/matrix-org/synapse/issues/12703))
- Complain if a federation endpoint has the `@cancellable` flag, since some of the wrapper code may not handle cancellation correctly yet. ([\#12705](https://github.com/matrix-org/synapse/issues/12705))
- Enable cancellation of `GET /rooms/$room_id/members`, `GET /rooms/$room_id/state` and `GET /rooms/$room_id/state/$event_type/*` requests. ([\#12708](https://github.com/matrix-org/synapse/issues/12708))
- Optimize private read receipt filtering. ([\#12711](https://github.com/matrix-org/synapse/issues/12711))
- Add type annotations to increase the number of modules passing `disallow-untyped-defs`. ([\#12716](https://github.com/matrix-org/synapse/issues/12716), [\#12726](https://github.com/matrix-org/synapse/issues/12726))
- Drop the logging level of status messages for the URL preview cache expiry job from INFO to DEBUG. ([\#12720](https://github.com/matrix-org/synapse/issues/12720))
- Downgrade some OIDC errors to warnings in the logs, to reduce the noise of Sentry reports. ([\#12723](https://github.com/matrix-org/synapse/issues/12723))
- Update configs used by Complement to allow more invites/3PID validations during tests. ([\#12731](https://github.com/matrix-org/synapse/issues/12731))
- Tidy up and type-hint the database engine modules. ([\#12734](https://github.com/matrix-org/synapse/issues/12734))
- Fix a long-standing bug where the user directory background process would fail to make forward progress if a user included a null codepoint in their display name or avatar. ([\#12762](https://github.com/matrix-org/synapse/issues/12762))
- Tweak the mypy plugin so that `@cached` can accept `on_invalidate=None`. ([\#12769](https://github.com/matrix-org/synapse/issues/12769))
- Move methods that call `add_push_rule` to the `PushRuleStore` class. ([\#12772](https://github.com/matrix-org/synapse/issues/12772))
- Make handling of federation Authorization header (more) compliant with RFC7230. ([\#12774](https://github.com/matrix-org/synapse/issues/12774))
- Refactor `resolve_state_groups_for_events` to not pull out full state when no state resolution happens. ([\#12775](https://github.com/matrix-org/synapse/issues/12775))
- Do not keep going if there are 5 back-to-back background update failures. ([\#12781](https://github.com/matrix-org/synapse/issues/12781))
- Fix federation when using the demo scripts. ([\#12783](https://github.com/matrix-org/synapse/issues/12783))
- The `hash_password` script now fails when it is called without specifying a config file. ([\#12789](https://github.com/matrix-org/synapse/issues/12789))
- Simplify `disallow_untyped_defs` config in `mypy.ini`. ([\#12790](https://github.com/matrix-org/synapse/issues/12790))
- Update EventContext `get_current_event_ids` and `get_prev_event_ids` to accept state filters and update calls where possible. ([\#12791](https://github.com/matrix-org/synapse/issues/12791))
- Remove Caddy from the Synapse workers image used in Complement. ([\#12818](https://github.com/matrix-org/synapse/issues/12818))
- Add Complement's shared registration secret to the Complement worker image. This fixes tests that depend on it. ([\#12819](https://github.com/matrix-org/synapse/issues/12819))
- Support registering Application Services when running with workers under Complement. ([\#12826](https://github.com/matrix-org/synapse/issues/12826))
- Add some type hints to test files. ([\#12833](https://github.com/matrix-org/synapse/issues/12833))
- Disable 'faster room join' Complement tests when testing against Synapse with workers. ([\#12842](https://github.com/matrix-org/synapse/issues/12842))


Synapse 1.59.1 (2022-05-18)
===========================
6 debian/changelog (vendored)
@ -1,3 +1,9 @@
matrix-synapse-py3 (1.60.0~rc1) stable; urgency=medium

  * New Synapse release 1.60.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 24 May 2022 12:05:01 +0100

matrix-synapse-py3 (1.59.1) stable; urgency=medium

  * New Synapse release 1.59.1.
@ -12,6 +12,7 @@ export PYTHONPATH

echo "$PYTHONPATH"

# Create servers which listen on HTTP at 808x and HTTPS at 848x.
for port in 8080 8081 8082; do
    echo "Starting server on port $port... "

@ -19,10 +20,12 @@ for port in 8080 8081 8082; do
    mkdir -p demo/$port
    pushd demo/$port || exit

    # Generate the configuration for the homeserver at localhost:848x. Note that
    # the homeserver name needs to match the HTTPS listening port for federation
    # to work properly.
    python3 -m synapse.app.homeserver \
        --generate-config \
        --server-name "localhost:$https_port" \
        --config-path "$port.config" \
        --report-stats no
@ -55,7 +55,7 @@ RUN \
    # NB: In poetry 1.2 `poetry export` will be moved into a plugin; we'll need to also
    # pip install poetry-plugin-export (https://github.com/python-poetry/poetry-plugin-export).
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --user "poetry-core==1.1.0a7" "git+https://github.com/python-poetry/poetry.git@fb13b3a676f476177f7937ffa480ee5cff9a90a5"

WORKDIR /synapse
@ -6,12 +6,6 @@
# https://github.com/matrix-org/synapse/blob/develop/docker/README-testing.md#testing-with-postgresql-and-single-or-multi-process-synapse
FROM matrixdotorg/synapse-workers

# Install postgresql
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y postgresql-13

@ -31,16 +25,12 @@ COPY conf-workers/workers-shared.yaml /conf/workers/shared.yaml

WORKDIR /data

COPY conf-workers/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf

# Copy the entrypoint
COPY conf-workers/start-complement-synapse-workers.sh /

# Expose nginx's listener ports
EXPOSE 8008 8448

ENTRYPOINT ["/start-complement-synapse-workers.sh"]
@ -1,72 +0,0 @@
{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":8448"],
          "routes": [
            {
              "match": [{"host": ["{{ server_name }}"]}],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "handler": "reverse_proxy",
                          "upstreams": [{"dial": "localhost:8008"}]
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ]
        }
      }
    },
    "tls": {
      "automation": {
        "policies": [
          {
            "subjects": ["{{ server_name }}"],
            "issuers": [{"module": "internal"}],
            "on_demand": true
          }
        ]
      }
    },
    "pki": {
      "certificate_authorities": {
        "local": {
          "name": "Complement CA",
          "root": {
            "certificate": "/complement/ca/ca.crt",
            "private_key": "/complement/ca/ca.key"
          }
        }
      }
    }
  }
}
@ -1,7 +0,0 @@
[program:caddy]
command=/usr/local/bin/prefix-log /root/caddy run --config /root/caddy.json
autorestart=unexpected
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
@ -9,9 +9,6 @@ function log {
    echo "$d $@"
}

# Set the server name of the homeserver
export SYNAPSE_SERVER_NAME=${SERVER_NAME}

@ -39,6 +36,26 @@ export SYNAPSE_WORKER_TYPES="\
    appservice, \
    pusher"

# Add Complement's appservice registration directory, if there is one
# (It can be absent when there are no application services in this test!)
if [ -d /complement/appservice ]; then
    export SYNAPSE_AS_REGISTRATION_DIR=/complement/appservice
fi

# Generate a TLS key, then generate a certificate by having Complement's CA sign it
# Note that both the key and certificate are in PEM format (not DER).
openssl genrsa -out /conf/server.tls.key 2048

openssl req -new -key /conf/server.tls.key -out /conf/server.tls.csr \
  -subj "/CN=${SERVER_NAME}"

openssl x509 -req -in /conf/server.tls.csr \
  -CA /complement/ca/ca.crt -CAkey /complement/ca/ca.key -set_serial 1 \
  -out /conf/server.tls.crt

export SYNAPSE_TLS_CERT=/conf/server.tls.crt
export SYNAPSE_TLS_KEY=/conf/server.tls.key

# Run the script that writes the necessary config files and starts supervisord, which in turn
# starts everything else
exec /configure_workers_and_start.py
|
||||||
enable_registration_without_verification: true
|
enable_registration_without_verification: true
|
||||||
bcrypt_rounds: 4
|
bcrypt_rounds: 4
|
||||||
|
|
||||||
|
## Registration ##
|
||||||
|
|
||||||
|
# Needed by Complement to register admin users
|
||||||
|
# DO NOT USE in a production configuration! This should be a random secret.
|
||||||
|
registration_shared_secret: complement
|
||||||
|
|
||||||
## Federation ##
|
## Federation ##
|
||||||
|
|
||||||
# trust certs signed by Complement's CA
|
# trust certs signed by Complement's CA
|
||||||
|
@ -53,6 +59,18 @@ rc_joins:
|
||||||
per_second: 9999
|
per_second: 9999
|
||||||
burst_count: 9999
|
burst_count: 9999
|
||||||
|
|
||||||
|
rc_3pid_validation:
|
||||||
|
per_second: 1000
|
||||||
|
burst_count: 1000
|
||||||
|
|
||||||
|
rc_invites:
|
||||||
|
per_room:
|
||||||
|
per_second: 1000
|
||||||
|
burst_count: 1000
|
||||||
|
per_user:
|
||||||
|
per_second: 1000
|
||||||
|
burst_count: 1000
|
||||||
|
|
||||||
federation_rr_transactions_per_room_per_second: 9999
|
federation_rr_transactions_per_room_per_second: 9999
|
||||||
|
|
||||||
## Experimental Features ##
|
## Experimental Features ##
|
||||||
|
|
|
@ -87,6 +87,18 @@ rc_joins:
  per_second: 9999
  burst_count: 9999

rc_3pid_validation:
  per_second: 1000
  burst_count: 1000

rc_invites:
  per_room:
    per_second: 1000
    burst_count: 1000
  per_user:
    per_second: 1000
    burst_count: 1000

federation_rr_transactions_per_room_per_second: 9999

## API Configuration ##
@ -9,6 +9,22 @@ server {
    listen 8008;
    listen [::]:8008;

    {% if tls_cert_path is not none and tls_key_path is not none %}
        listen 8448 ssl;
        listen [::]:8448 ssl;

        ssl_certificate {{ tls_cert_path }};
        ssl_certificate_key {{ tls_key_path }};

        # Some directives from cipherlist.eu (fka cipherli.st):
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off; # Requires nginx >= 1.5.9
    {% endif %}

    server_name localhost;

    # Nginx by default only allows file uploads up to 1M in size
@ -6,4 +6,13 @@
redis:
  enabled: true

{% if appservice_registrations is not none %}
## Application Services ##
# A list of application service config files to use.
app_service_config_files:
{%- for path in appservice_registrations %}
  - "{{ path }}"
{%- endfor %}
{%- endif %}

{{ shared_worker_config }}
@ -21,6 +21,11 @@
# * SYNAPSE_REPORT_STATS: Whether to report stats.
# * SYNAPSE_WORKER_TYPES: A comma separated list of worker names as specified in WORKER_CONFIG
#   below. Leave empty for no workers, or set to '*' for all possible workers.
# * SYNAPSE_AS_REGISTRATION_DIR: If specified, a directory in which .yaml and .yml files
#   will be treated as Application Service registration files.
# * SYNAPSE_TLS_CERT: Path to a TLS certificate in PEM format.
# * SYNAPSE_TLS_KEY: Path to a TLS key. If this and SYNAPSE_TLS_CERT are specified,
#   Nginx will be configured to serve TLS on port 8448.
#
# NOTE: According to Complement's ENTRYPOINT expectations for a homeserver image (as defined
# in the project's README), this script may be run multiple times, and functionality should

@ -29,6 +34,7 @@
import os
import subprocess
import sys
from pathlib import Path
from typing import Any, Dict, List, Mapping, MutableMapping, NoReturn, Set

import jinja2

@ -488,11 +494,23 @@ def generate_worker_files(
    master_log_config = generate_worker_log_config(environ, "master", data_dir)
    shared_config["log_config"] = master_log_config

    # Find application service registrations
    appservice_registrations = None
    appservice_registration_dir = os.environ.get("SYNAPSE_AS_REGISTRATION_DIR")
    if appservice_registration_dir:
        # Scan for all YAML files that should be application service registrations.
        appservice_registrations = [
            str(reg_path.resolve())
            for reg_path in Path(appservice_registration_dir).iterdir()
            if reg_path.suffix.lower() in (".yaml", ".yml")
        ]

    # Shared homeserver config
    convert(
        "/conf/shared.yaml.j2",
        "/conf/workers/shared.yaml",
        shared_worker_config=yaml.dump(shared_config),
        appservice_registrations=appservice_registrations,
    )

    # Nginx config

@ -501,6 +519,8 @@ def generate_worker_files(
        "/etc/nginx/conf.d/matrix-synapse.conf",
        worker_locations=nginx_location_config,
        upstream_directives=nginx_upstream_config,
        tls_cert_path=os.environ.get("SYNAPSE_TLS_CERT"),
        tls_key_path=os.environ.get("SYNAPSE_TLS_KEY"),
    )

    # Supervisord config
@ -89,6 +89,7 @@
- [Database Schemas](development/database_schema.md)
- [Experimental features](development/experimental_features.md)
- [Synapse Architecture]()
  - [Cancellation](development/synapse_architecture/cancellation.md)
- [Log Contexts](log_contexts.md)
- [Replication](replication.md)
- [TCP Replication](tcp_replication.md)
@ -289,7 +289,7 @@ POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>

URL Parameters

* `before_ts`: string representing a positive integer - Unix timestamp in milliseconds.
  All cached media that was last accessed before this timestamp will be removed.
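
As a rough illustration of how `before_ts` is supplied, the sketch below purges cached remote media not accessed in the last 30 days; the homeserver URL and admin access token are placeholders, and the printed response shape is only indicative.

```python
import time

import requests

# Placeholder values: substitute your homeserver URL and an admin access token.
base_url = "https://matrix.example.com"
admin_token = "<admin access token>"

# Unix timestamp in milliseconds, 30 days in the past.
before_ts = int((time.time() - 30 * 24 * 3600) * 1000)

resp = requests.post(
    f"{base_url}/_synapse/admin/v1/purge_media_cache",
    params={"before_ts": before_ts},
    headers={"Authorization": f"Bearer {admin_token}"},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"deleted": 10}
```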

Response:
@ -206,7 +206,32 @@ This means that we need to run our unit tests against PostgreSQL too. Our CI does
this automatically for pull requests and release candidates, but it's sometimes
useful to reproduce this locally.

#### Using Docker

The easiest way to do so is to run Postgres via a docker container. In one
terminal:

```shell
docker run --rm -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=postgres -e POSTGRES_DB=postgress -p 5432:5432 postgres:14
```

If you see an error like

```
docker: Error response from daemon: driver failed programming external connectivity on endpoint nice_ride (b57bbe2e251b70015518d00c9981e8cb8346b5c785250341a6c53e3c899875f1): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use.
```

then something is already bound to port 5432. You're probably already running postgres locally.

Once you have a postgres server running, invoke `trial` in a second terminal:

```shell
SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_HOST=127.0.0.1 SYNAPSE_POSTGRES_USER=postgres SYNAPSE_POSTGRES_PASSWORD=mysecretpassword poetry run trial tests
```

#### Using an existing Postgres installation

If you have postgres already installed on your system, you can run `trial` with the
following environment variables matching your configuration:

- `SYNAPSE_POSTGRES` to anything nonempty

@ -229,8 +254,8 @@ You don't need to specify the host, user, port or password if your Postgres
server is set to authenticate you over the UNIX socket (i.e. if the `psql` command
works without further arguments).

Your Postgres account needs to be able to create databases; see the postgres
docs for [`ALTER ROLE`](https://www.postgresql.org/docs/current/sql-alterrole.html).

## Run the integration tests ([Sytest](https://github.com/matrix-org/sytest)).

@ -397,8 +422,8 @@ same lightweight approach that the Linux Kernel
[submitting patches process](
https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin),
[Docker](https://github.com/docker/docker/blob/master/CONTRIBUTING.md), and many other
projects use: the DCO ([Developer Certificate of Origin](http://developercertificate.org/)).
This is a simple declaration that you wrote
the contribution or otherwise have the right to contribute it to Matrix:

```
@ -5,7 +5,7 @@
Requires you to have a [Synapse development environment setup](https://matrix-org.github.io/synapse/develop/development/contributing_guide.html#4-install-the-dependencies).

The demo setup allows running three federation Synapse servers, with server
names `localhost:8480`, `localhost:8481`, and `localhost:8482`.

You can access them via any Matrix client over HTTP at `localhost:8080`,
`localhost:8081`, and `localhost:8082` or over HTTPS at `localhost:8480`,

@ -20,9 +20,10 @@ and the servers are configured in a highly insecure way, including:
The servers are configured to store their data under `demo/8080`, `demo/8081`, and
`demo/8082`. This includes configuration, logs, SQLite databases, and media.

Note that when joining a public room on a different homeserver via "#foo:bar.net",
then you are (in the current implementation) joining a room with room_id "foo".
This means that it won't work if your homeserver already has a room with that
name.

## Using the demo scripts

392 docs/development/synapse_architecture/cancellation.md (new file)
@ -0,0 +1,392 @@
# Cancellation

Sometimes, requests take a long time to service and clients disconnect
before Synapse produces a response. To avoid wasting resources, Synapse
can cancel request processing for select endpoints marked with the
`@cancellable` decorator.

Synapse makes use of Twisted's `Deferred.cancel()` feature to make
cancellation work. The `@cancellable` decorator does nothing by itself
and merely acts as a flag, signalling to developers and other code alike
that a method can be cancelled.

## Enabling cancellation for an endpoint

1. Check that the endpoint method, and any `async` functions in its call
   tree handle cancellation correctly. See
   [Handling cancellation correctly](#handling-cancellation-correctly)
   for a list of things to look out for.
2. Add the `@cancellable` decorator to the `on_GET/POST/PUT/DELETE`
   method. It's not recommended to make non-`GET` methods cancellable,
   since cancellation midway through some database updates is less
   likely to be handled correctly.

## Mechanics

There are two stages to cancellation: downward propagation of a
`cancel()` call, followed by upwards propagation of a `CancelledError`
out of a blocked `await`.
Both Twisted and asyncio have a cancellation mechanism.

|         | Method              | Exception                               | Exception inherits from |
|---------|---------------------|-----------------------------------------|-------------------------|
| Twisted | `Deferred.cancel()` | `twisted.internet.defer.CancelledError` | `Exception` (!)         |
| asyncio | `Task.cancel()`     | `asyncio.CancelledError`                | `BaseException`         |

### Deferred.cancel()

When Synapse starts handling a request, it runs the async method
responsible for handling it using `defer.ensureDeferred`, which returns
a `Deferred`. For example:

```python
def do_something() -> Deferred[None]:
    ...

@cancellable
async def on_GET() -> Tuple[int, JsonDict]:
    d = make_deferred_yieldable(do_something())
    await d
    return 200, {}

request = defer.ensureDeferred(on_GET())
```

When a client disconnects early, Synapse checks for the presence of the
`@cancellable` decorator on `on_GET`. Since `on_GET` is cancellable,
`Deferred.cancel()` is called on the `Deferred` from
`defer.ensureDeferred`, i.e. `request`. Twisted knows which `Deferred`
`request` is waiting on and passes the `cancel()` call on to `d`.

The `Deferred` being waited on, `d`, may have its own handling for
`cancel()` and pass the call on to other `Deferred`s.

Eventually, a `Deferred` handles the `cancel()` call by resolving itself
with a `CancelledError`.

### CancelledError

The `CancelledError` gets raised out of the `await` and bubbles up, as
per normal Python exception handling.

## Handling cancellation correctly

In general, when writing code that might be subject to cancellation, two
things must be considered:
* The effect of `CancelledError`s raised out of `await`s.
* The effect of `Deferred`s being `cancel()`ed.

Examples of code that handles cancellation incorrectly include:
* `try-except` blocks which swallow `CancelledError`s.
* Code that shares the same `Deferred`, which may be cancelled, between
  multiple requests.
* Code that starts some processing that's exempt from cancellation, but
  uses a logging context from cancellable code. The logging context
  will be finished upon cancellation, while the uncancelled processing
  is still using it.

Some common patterns are listed below in more detail.

### `async` function calls

Most functions in Synapse are relatively straightforward from a
cancellation standpoint: they don't do anything with `Deferred`s and
purely call and `await` other `async` functions.

An `async` function handles cancellation correctly if its own code
handles cancellation correctly and all the `async` functions it calls
handle cancellation correctly. For example:
```python
async def do_two_things() -> None:
    check_something()
    await do_something()
    await do_something_else()
```
`do_two_things` handles cancellation correctly if `do_something` and
`do_something_else` handle cancellation correctly.

That is, when checking whether a function handles cancellation
correctly, its implementation and all its `async` function calls need to
be checked, recursively.

As `check_something` is not `async`, it does not need to be checked.

### CancelledErrors

Because Twisted's `CancelledError`s are `Exception`s, it's easy to
accidentally catch and suppress them. Care must be taken to ensure that
`CancelledError`s are allowed to propagate upwards.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
try:
    await do_something()
except Exception:
    # `CancelledError` gets swallowed here.
    logger.info(...)
```
</td>
<td width="50%" valign="top">

**Good**:
```python
try:
    await do_something()
except CancelledError:
    raise
except Exception:
    logger.info(...)
```
</td>
</tr>
<tr>
<td width="50%" valign="top">

**OK**:
```python
try:
    check_something()
    # A `CancelledError` won't ever be raised here.
except Exception:
    logger.info(...)
```
</td>
<td width="50%" valign="top">

**Good**:
```python
try:
    await do_something()
except ValueError:
    logger.info(...)
```
</td>
</tr>
</table>

#### defer.gatherResults

`defer.gatherResults` produces a `Deferred` which:
* broadcasts `cancel()` calls to every `Deferred` being waited on.
* wraps the first exception it sees in a `FirstError`.

Together, this means that `CancelledError`s will be wrapped in
a `FirstError` unless unwrapped. Such `FirstError`s are liable to be
swallowed, so they must be unwrapped.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
async def do_something() -> None:
    await make_deferred_yieldable(
        defer.gatherResults([...], consumeErrors=True)
    )

try:
    await do_something()
except CancelledError:
    raise
except Exception:
    # `FirstError(CancelledError)` gets swallowed here.
    logger.info(...)
```

</td>
<td width="50%" valign="top">

**Good**:
```python
async def do_something() -> None:
    await make_deferred_yieldable(
        defer.gatherResults([...], consumeErrors=True)
    ).addErrback(unwrapFirstError)

try:
    await do_something()
except CancelledError:
    raise
except Exception:
    logger.info(...)
```
</td>
</tr>
</table>

### Creation of `Deferred`s

If a function creates a `Deferred`, the effect of cancelling it must be considered. `Deferred`s that get shared are likely to have unintended behaviour when cancelled.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
cache: Dict[str, Deferred[None]] = {}

def wait_for_room(room_id: str) -> Deferred[None]:
    deferred = cache.get(room_id)
    if deferred is None:
        deferred = Deferred()
        cache[room_id] = deferred
    # `deferred` can have multiple waiters.
    # All of them will observe a `CancelledError`
    # if any one of them is cancelled.
    return make_deferred_yieldable(deferred)

# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
<td width="50%" valign="top">

**Good**:
```python
cache: Dict[str, Deferred[None]] = {}

def wait_for_room(room_id: str) -> Deferred[None]:
    deferred = cache.get(room_id)
    if deferred is None:
        deferred = Deferred()
        cache[room_id] = deferred
    # `deferred` will never be cancelled now.
    # A `CancelledError` will still come out of
    # the `await`.
    # `delay_cancellation` may also be used.
    return make_deferred_yieldable(stop_cancellation(deferred))

# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
</tr>
<tr>
<td width="50%" valign="top">
</td>
<td width="50%" valign="top">

**Good**:
```python
cache: Dict[str, List[Deferred[None]]] = {}

def wait_for_room(room_id: str) -> Deferred[None]:
    if room_id not in cache:
        cache[room_id] = []
    # Each request gets its own `Deferred` to wait on.
    deferred = Deferred()
    cache[room_id].append(deferred)
    return make_deferred_yieldable(deferred)

# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
</table>

### Uncancelled processing

Some `async` functions may kick off some `async` processing which is
intentionally protected from cancellation, by `stop_cancellation` or
other means. If the `async` processing inherits the logcontext of the
request which initiated it, care must be taken to ensure that the
logcontext is not finished before the `async` processing completes.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
cache: Optional[ObservableDeferred[None]] = None

async def do_something_else(
    to_resolve: Deferred[None]
) -> None:
    await ...
    logger.info("done!")
    to_resolve.callback(None)

async def do_something() -> None:
    if not cache:
        to_resolve = Deferred()
        cache = ObservableDeferred(to_resolve)
        # `do_something_else` will never be cancelled and
        # can outlive the `request-1` logging context.
        run_in_background(do_something_else, to_resolve)

    await make_deferred_yieldable(cache.observe())

with LoggingContext("request-1"):
    await do_something()
```
</td>
<td width="50%" valign="top">

**Good**:
```python
cache: Optional[ObservableDeferred[None]] = None

async def do_something_else(
    to_resolve: Deferred[None]
) -> None:
    await ...
    logger.info("done!")
    to_resolve.callback(None)

async def do_something() -> None:
    if not cache:
        to_resolve = Deferred()
        cache = ObservableDeferred(to_resolve)
        run_in_background(do_something_else, to_resolve)
        # We'll wait until `do_something_else` is
        # done before raising a `CancelledError`.
        await make_deferred_yieldable(
            delay_cancellation(cache.observe())
        )
    else:
        await make_deferred_yieldable(cache.observe())

with LoggingContext("request-1"):
    await do_something()
```
</td>
</tr>
<tr>
<td width="50%">

**OK**:
```python
cache: Optional[ObservableDeferred[None]] = None

async def do_something_else(
    to_resolve: Deferred[None]
) -> None:
    await ...
    logger.info("done!")
    to_resolve.callback(None)

async def do_something() -> None:
    if not cache:
        to_resolve = Deferred()
        cache = ObservableDeferred(to_resolve)
        # `do_something_else` will get its own independent
        # logging context. `request-1` will not count any
        # metrics from `do_something_else`.
        run_as_background_process(
            "do_something_else",
            do_something_else,
            to_resolve,
        )

    await make_deferred_yieldable(cache.observe())

with LoggingContext("request-1"):
    await do_something()
```
</td>
<td width="50%">
</td>
</tr>
</table>
@ -11,22 +11,29 @@ The available spam checker callbacks are:
### `check_event_for_spam`

_First introduced in Synapse v1.37.0_

_Signature extended to support Allow and Code in Synapse v1.60.0_

_Boolean and string return value types deprecated in Synapse v1.60.0_

```python
async def check_event_for_spam(event: "synapse.module_api.EventBase") -> Union["synapse.module_api.ALLOW", "synapse.module_api.error.Codes", str, bool]
```

Called when receiving an event from a client or via federation. The callback must return either:
  - `synapse.module_api.ALLOW`, to allow the operation. Other callbacks
    may still decide to reject it.
  - `synapse.api.Codes` to reject the operation with an error code. In case
    of doubt, `synapse.api.error.Codes.FORBIDDEN` is a good error code.
  - (deprecated) a `str` to reject the operation and specify an error message. Note that clients
    typically will not localize the error message to the user's preferred locale.
  - (deprecated) `False`, which behaves as `ALLOW`. Deprecated as confusing, as some
    callbacks expect `True` to allow and others expect `True` to reject.
  - (deprecated) `True`, which behaves as `synapse.api.error.Codes.FORBIDDEN`. Deprecated as confusing, as
    some callbacks expect `True` to allow and others expect `True` to reject.

If multiple modules implement this callback, they will be considered in order. If a
callback returns `synapse.module_api.ALLOW`, Synapse falls through to the next one. The value of the
first callback that does not return `synapse.module_api.ALLOW` will be used. If this happens, Synapse
will not call any of the subsequent implementations of this callback.
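
To make the new return values concrete, here is a minimal, hypothetical module using the extended signature. The class name and the "buy now" heuristic are illustrative only, and the import path for `Codes` (`synapse.api.errors`) is an assumption; the callback is registered through the module API as usual.

```python
import synapse.module_api
from synapse.api.errors import Codes


class ExampleSpamChecker:
    def __init__(self, config: dict, api: synapse.module_api.ModuleApi):
        # Register only the callback discussed above.
        api.register_spam_checker_callbacks(
            check_event_for_spam=self.check_event_for_spam,
        )

    async def check_event_for_spam(self, event):
        # Illustrative heuristic: treat events whose body mentions "buy now" as spam.
        body = str(event.content.get("body", ""))
        if "buy now" in body.lower():
            # Reject with an explicit error code rather than the deprecated `True`.
            return Codes.FORBIDDEN
        # Explicitly allow; later callbacks may still reject the event.
        return synapse.module_api.ALLOW
```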

### `user_may_join_room`

@ -249,6 +256,24 @@ callback returns `False`, Synapse falls through to the next one. The value of the first
callback that does not return `False` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.

### `should_drop_federated_event`

_First introduced in Synapse v1.60.0_

```python
async def should_drop_federated_event(event: "synapse.events.EventBase") -> bool
```

Called when checking whether a remote server can federate an event with us. **Returning
`True` from this function will silently drop a federated event and split-brain our view
of a room's DAG, and thus you shouldn't use this callback unless you know what you are
doing.**

If multiple modules implement this callback, they will be considered in order. If a
callback returns `False`, Synapse falls through to the next one. The value of the first
callback that does not return `False` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
||||||
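For illustration only, and bearing the warning above in mind, a module might use this callback roughly as follows. The `blocked_servers` option is invented for this sketch, and it assumes the callback is registered through `register_spam_checker_callbacks` under the same name, like the other callbacks on this page.

```python
from synapse.module_api import ModuleApi


class DropFromBlockedServers:
    """Illustrative module that silently drops federated events from a blocklist of servers."""

    def __init__(self, config: dict, api: ModuleApi):
        # `blocked_servers` is a made-up option for this sketch, not a Synapse setting.
        self._blocked_servers = set(config.get("blocked_servers", []))
        api.register_spam_checker_callbacks(
            should_drop_federated_event=self.should_drop_federated_event,
        )

    async def should_drop_federated_event(self, event) -> bool:
        # event.sender looks like "@user:server.name"; drop events from blocked servers.
        origin = event.sender.split(":", 1)[1]
        return origin in self._blocked_servers
```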
## Example

The example below is a module that implements the spam checker callback
@@ -159,7 +159,7 @@ Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to
oidc_providers:
  - idp_id: keycloak
    idp_name: "My KeyCloak server"
    issuer: "https://127.0.0.1:8443/realms/{realm_name}"
    client_id: "synapse"
    client_secret: "copy secret generated from above"
    scopes: ["openid", "profile"]
@@ -293,7 +293,7 @@ can be used to retrieve information on the authenticated user. As the Synapse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.

1. Create a new OAuth application: [https://github.com/settings/applications/new](https://github.com/settings/applications/new).
2. Set the callback URL to `[synapse public baseurl]/_synapse/client/oidc/callback`.

Synapse config:
@@ -322,10 +322,10 @@ oidc_providers:

[Google][google-idp] is an OpenID certified authentication and authorisation provider.

1. Set up a project in the Google API Console (see
   [documentation](https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup)).
3. Add an "OAuth Client ID" for a Web Application under "Credentials".
4. Copy the Client ID and Client Secret, and add the following to your synapse config:
   ```yaml
   oidc_providers:
     - idp_id: google
@@ -501,8 +501,8 @@ As well as the private key file, you will need:
* Team ID: a 10-character ID associated with your developer account.
* Key ID: the 10-character identifier for the key.

[Apple's developer documentation](https://help.apple.com/developer-account/?lang=en#/dev77c875b7e)
has more information on setting up SiWA.

The synapse config will look like this:
@@ -535,8 +535,8 @@ needed to add OAuth2 capabilities to your Django projects. It supports

Configuration on Django's side:

1. Add an application: `https://example.com/admin/oauth2_provider/application/add/` and choose parameters like this:
   * `Redirect uris`: `https://synapse.example.com/_synapse/client/oidc/callback`
   * `Client type`: `Confidential`
   * `Authorization grant type`: `Authorization code`
   * `Algorithm`: `HMAC with SHA-2 256`
@ -289,7 +289,7 @@ presence:
|
||||||
# federation: the server-server API (/_matrix/federation). Also implies
|
# federation: the server-server API (/_matrix/federation). Also implies
|
||||||
# 'media', 'keys', 'openid'
|
# 'media', 'keys', 'openid'
|
||||||
#
|
#
|
||||||
# keys: the key discovery API (/_matrix/keys).
|
# keys: the key discovery API (/_matrix/key).
|
||||||
#
|
#
|
||||||
# media: the media API (/_matrix/media).
|
# media: the media API (/_matrix/media).
|
||||||
#
|
#
|
||||||
|
@ -730,6 +730,12 @@ retention:
|
||||||
# A cache 'factor' is a multiplier that can be applied to each of
|
# A cache 'factor' is a multiplier that can be applied to each of
|
||||||
# Synapse's caches in order to increase or decrease the maximum
|
# Synapse's caches in order to increase or decrease the maximum
|
||||||
# number of entries that can be stored.
|
# number of entries that can be stored.
|
||||||
|
#
|
||||||
|
# The configuration for cache factors (caches.global_factor and
|
||||||
|
# caches.per_cache_factors) can be reloaded while the application is running,
|
||||||
|
# by sending a SIGHUP signal to the Synapse process. Changes to other parts of
|
||||||
|
# the caching config will NOT be applied after a SIGHUP is received; a restart
|
||||||
|
# is necessary.
|
||||||
|
|
||||||
# The number of events to cache in memory. Not affected by
|
# The number of events to cache in memory. Not affected by
|
||||||
# caches.global_factor.
|
# caches.global_factor.
|
||||||
|
@@ -778,6 +784,24 @@ caches:
  #
  #cache_entry_ttl: 30m

  # This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
  # `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
  # a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
  # this option, and all three of the options must be specified for this feature to work.
  #cache_autotuning:
    # This flag sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
    # They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
    # the flag below, or until the `min_cache_ttl` is hit.
    #max_cache_memory_usage: 1024M

    # This flag sets a rough target for the desired memory usage of the caches.
    #target_cache_memory_usage: 758M

    # `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
    # caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
    # from being emptied while Synapse is evicting due to memory.
    #min_cache_ttl: 5m

  # Controls how long the results of a /sync request are cached for after
  # a successful response is returned. A higher duration can help clients with
  # intermittent connections, at the cost of higher memory usage.
@ -2462,6 +2486,40 @@ push:
|
||||||
#
|
#
|
||||||
#encryption_enabled_by_default_for_room_type: invite
|
#encryption_enabled_by_default_for_room_type: invite
|
||||||
|
|
||||||
|
# Override the default power levels for rooms created on this server, per
|
||||||
|
# room creation preset.
|
||||||
|
#
|
||||||
|
# The appropriate dictionary for the room preset will be applied on top
|
||||||
|
# of the existing power levels content.
|
||||||
|
#
|
||||||
|
# Useful if you know that your users need special permissions in rooms
|
||||||
|
# that they create (e.g. to send particular types of state events without
|
||||||
|
# needing an elevated power level). This takes the same shape as the
|
||||||
|
# `power_level_content_override` parameter in the /createRoom API, but
|
||||||
|
# is applied before that parameter.
|
||||||
|
#
|
||||||
|
# Valid keys are some or all of `private_chat`, `trusted_private_chat`
|
||||||
|
# and `public_chat`. Inside each of those should be any of the
|
||||||
|
# properties allowed in `power_level_content_override` in the
|
||||||
|
# /createRoom API. If any property is missing, its default value will
|
||||||
|
# continue to be used. If any property is present, it will overwrite
|
||||||
|
# the existing default completely (so if the `events` property exists,
|
||||||
|
# the default event power levels will be ignored).
|
||||||
|
#
|
||||||
|
#default_power_level_content_override:
|
||||||
|
# private_chat:
|
||||||
|
# "events":
|
||||||
|
# "com.example.myeventtype" : 0
|
||||||
|
# "m.room.avatar": 50
|
||||||
|
# "m.room.canonical_alias": 50
|
||||||
|
# "m.room.encryption": 100
|
||||||
|
# "m.room.history_visibility": 100
|
||||||
|
# "m.room.name": 50
|
||||||
|
# "m.room.power_levels": 100
|
||||||
|
# "m.room.server_acl": 100
|
||||||
|
# "m.room.tombstone": 100
|
||||||
|
# "events_default": 1
|
||||||
|
|
||||||
|
|
||||||
# Uncomment to allow non-server-admin users to create groups on this server
|
# Uncomment to allow non-server-admin users to create groups on this server
|
||||||
#
|
#
|
||||||
|
|
docs/upgrade.md

@@ -89,6 +89,125 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```

# Upgrading to v1.60.0

## Adding a new unique index to `state_group_edges` could fail if your database is corrupted

This release of Synapse will add a unique index to the `state_group_edges` table, in order
to prevent accidentally introducing duplicate information (for example, because a database
backup was restored multiple times).

Duplicate rows being present in this table could cause drastic performance problems; see
[issue 11779](https://github.com/matrix-org/synapse/issues/11779) for more details.

If your Synapse database already has had duplicate rows introduced into this table,
this could fail, with either of these errors:

**On Postgres:**
```
synapse.storage.background_updates - 623 - INFO - background_updates-0 - Adding index state_group_edges_unique_idx to state_group_edges
synapse.storage.background_updates - 282 - ERROR - background_updates-0 - Error doing update
...
psycopg2.errors.UniqueViolation: could not create unique index "state_group_edges_unique_idx"
DETAIL: Key (state_group, prev_state_group)=(2, 1) is duplicated.
```
(The numbers may be different.)

**On SQLite:**
```
synapse.storage.background_updates - 623 - INFO - background_updates-0 - Adding index state_group_edges_unique_idx to state_group_edges
synapse.storage.background_updates - 282 - ERROR - background_updates-0 - Error doing update
...
sqlite3.IntegrityError: UNIQUE constraint failed: state_group_edges.state_group, state_group_edges.prev_state_group
```

<details>
<summary><b>Expand this section for steps to resolve this problem</b></summary>

### On Postgres

Connect to your database with `psql`.

```sql
BEGIN;
DELETE FROM state_group_edges WHERE (ctid, state_group, prev_state_group) IN (
  SELECT row_id, state_group, prev_state_group
  FROM (
    SELECT
      ctid AS row_id,
      MIN(ctid) OVER (PARTITION BY state_group, prev_state_group) AS min_row_id,
      state_group,
      prev_state_group
    FROM state_group_edges
  ) AS t1
  WHERE row_id <> min_row_id
);
COMMIT;
```

### On SQLite

At the command-line, use `sqlite3 path/to/your-homeserver-database.db`:

```sql
BEGIN;
DELETE FROM state_group_edges WHERE (rowid, state_group, prev_state_group) IN (
  SELECT row_id, state_group, prev_state_group
  FROM (
    SELECT
      rowid AS row_id,
      MIN(rowid) OVER (PARTITION BY state_group, prev_state_group) AS min_row_id,
      state_group,
      prev_state_group
    FROM state_group_edges
  )
  WHERE row_id <> min_row_id
);
COMMIT;
```

### For more details

[This comment on issue 11779](https://github.com/matrix-org/synapse/issues/11779#issuecomment-1131545970)
has queries that can be used to check a database for this problem in advance.

</details>

## SpamChecker API's `check_event_for_spam` has a new signature.

The previous signature has been deprecated.

Whereas `check_event_for_spam` callbacks used to return `Union[str, bool]`, they should now return `Union["synapse.module_api.ALLOW", "synapse.module_api.errors.Codes"]`.

This is part of an ongoing refactoring of the SpamChecker API to make it less ambiguous and more powerful.

If your module implements `check_event_for_spam` as follows:

```python
async def check_event_for_spam(event):
    if ...:
        # Event is spam
        return True
    # Event is not spam
    return False
```

you should rewrite it as follows:

```python
async def check_event_for_spam(event):
    if ...:
        # Event is spam, mark it as forbidden (you may use some more precise error
        # code if it is useful).
        return synapse.module_api.errors.Codes.FORBIDDEN
    # Event is not spam, mark it as `ALLOW`.
    return synapse.module_api.ALLOW
```
# Upgrading to v1.59.0

## Device name lookup over federation has been disabled by default
@@ -23,6 +23,14 @@ followed by a letter. Letters have the following meanings:
For example, setting `redaction_retention_period: 5m` would remove redacted
messages from the database after 5 minutes, rather than 5 months.

In addition, configuration options referring to size use the following suffixes:

* `M` = MiB, or 1,048,576 bytes
* `K` = KiB, or 1024 bytes

For example, setting `max_avatar_size: 10M` means that Synapse will not accept files larger than 10,485,760 bytes
for a user avatar.
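As a quick sanity check of the arithmetic behind these suffixes, the following standalone Python sketch (not Synapse's own parser, just an illustration of the convention) reproduces the numbers above:

```python
SIZE_FACTORS = {"K": 1024, "M": 1024 * 1024}


def parse_size(value: str) -> int:
    """Turn a size such as '10M' or '64K' into a number of bytes."""
    suffix = value[-1]
    if suffix in SIZE_FACTORS:
        return int(value[:-1]) * SIZE_FACTORS[suffix]
    return int(value)


assert parse_size("10M") == 10_485_760  # matches the max_avatar_size example above
assert parse_size("64K") == 65_536
```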
### YAML
The configuration file is a [YAML](https://yaml.org/) file, which means that certain syntax rules
apply if you want your config file to be read properly. A few helpful things to know:
@@ -467,13 +475,13 @@ Sub-options for each listener include:

Valid resource names are:

* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies `media` and `static`.

* `consent`: user consent forms (/_matrix/consent). See [here](../../consent_tracking.md) for more.

* `federation`: the server-server API (/_matrix/federation). Also implies `media`, `keys`, `openid`

* `keys`: the key discovery API (/_matrix/key).

* `media`: the media API (/_matrix/media).
@@ -1119,7 +1127,22 @@ Caching can be configured through the following sub-options:
  with intermittent connections, at the cost of higher memory usage.
  By default, this is zero, which means that sync responses are not cached
  at all.
* `cache_autotuning` and its sub-options `max_cache_memory_usage`, `target_cache_memory_usage`, and
  `min_cache_ttl` work in conjunction with each other to maintain a balance between cache memory
  usage and cache entry availability. You must be using [jemalloc](https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu)
  to utilize this option, and all three of the options must be specified for this feature to work. This option
  defaults to off; enable it by providing values for the sub-options listed below. Please note that the feature will not work
  and may cause unstable behavior (such as excessive emptying of caches or exceptions) if all of the values are not provided.
  Please see the [Config Conventions](#config-conventions) for information on how to specify memory size and cache expiry
  durations. A rough sketch of how the three sub-options interact is shown after this list.
   * `max_cache_memory_usage` sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
     They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
     the setting below, or until the `min_cache_ttl` is hit. There is no default value for this option.
   * `target_cache_memory_usage` sets a rough target for the desired memory usage of the caches. There is no default value
     for this option.
   * `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
     caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
     from being emptied while Synapse is evicting due to memory. There is no default value for this option.
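The following standalone sketch is only an illustration of the rules described above, not Synapse's actual eviction code:

```python
def should_evict_entry(
    memory_usage: int,
    entry_age_s: float,
    currently_evicting: bool,
    max_usage: int,
    target_usage: int,
    min_ttl_s: float,
) -> bool:
    """Toy model of how the three cache_autotuning sub-options interact."""
    if entry_age_s < min_ttl_s:
        # Entries newer than min_cache_ttl are protected while evicting due to memory.
        return False
    if memory_usage > max_usage:
        # Over max_cache_memory_usage: start (or keep) evicting.
        return True
    # Once eviction has started, keep going until usage drops below the target.
    return currently_evicting and memory_usage > target_usage


# e.g. 1100 MiB in use against a 1024M ceiling and 758M target: old entries get evicted.
print(should_evict_entry(1100 * 2**20, entry_age_s=600, currently_evicting=False,
                         max_usage=1024 * 2**20, target_usage=758 * 2**20,
                         min_ttl_s=300))  # True
```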
Example configuration:
```yaml
caches:
  global_factor: 1.0
  per_cache_factors:
    get_users_who_share_room_with_user: 2.0
  sync_response_cache_duration: 2m
  cache_autotuning:
    max_cache_memory_usage: 1024M
    target_cache_memory_usage: 758M
    min_cache_ttl: 5m
```

### Reloading cache factors

The cache factors (i.e. `caches.global_factor` and `caches.per_cache_factors`) may be reloaded at any time by sending a
[`SIGHUP`](https://en.wikipedia.org/wiki/SIGHUP) signal to Synapse using e.g.

```commandline
kill -HUP [PID_OF_SYNAPSE_PROCESS]
```

If you are running multiple workers, you must individually update the worker
config file and send this signal to each worker process.

If you're using the [example systemd service](https://github.com/matrix-org/synapse/blob/develop/contrib/systemd/matrix-synapse.service)
file in Synapse's `contrib` directory, you can send a `SIGHUP` signal by using
`systemctl reload matrix-synapse`.

---
||||||
## Database ##
|
## Database ##
|
||||||
Config options related to database settings.
|
Config options related to database settings.
|
||||||
|
@ -1164,7 +1207,7 @@ For more information on using Synapse with Postgres,
|
||||||
see [here](../../postgres.md).
|
see [here](../../postgres.md).
|
||||||
|
|
||||||
Example SQLite configuration:
|
Example SQLite configuration:
|
||||||
```
|
```yaml
|
||||||
database:
|
database:
|
||||||
name: sqlite3
|
name: sqlite3
|
||||||
args:
|
args:
|
||||||
|
@ -1172,7 +1215,7 @@ database:
|
||||||
```
|
```
|
||||||
|
|
||||||
Example Postgres configuration:
|
Example Postgres configuration:
|
||||||
```
|
```yaml
|
||||||
database:
|
database:
|
||||||
name: psycopg2
|
name: psycopg2
|
||||||
txn_limit: 10000
|
txn_limit: 10000
|
||||||
|
@@ -1327,6 +1370,20 @@ This option sets ratelimiting how often invites can be sent in a room or to a
specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10` and
`per_user` defaults to `per_second: 0.003`, `burst_count: 5`.

Client requests that invite user(s) when [creating a
room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3createroom)
will count against the `rc_invites.per_room` limit, whereas
client requests to [invite a single user to a
room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidinvite)
will count against both the `rc_invites.per_user` and `rc_invites.per_room` limits.

Federation requests to invite a user will count against the `rc_invites.per_user`
limit only, as Synapse presumes ratelimiting by room will be done by the sending server.

The `rc_invites.per_user` limit applies to the *receiver* of the invite, rather than the
sender, meaning that a `rc_invites.per_user.burst_count` of 5 mandates that a single user
cannot *receive* more than a burst of 5 invites at a time.
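To get an intuition for how `per_second` and `burst_count` interact, here is a small standalone sketch of a token bucket with those two parameters. It only illustrates the semantics described above; Synapse's internal ratelimiter is implemented differently.

```python
import time


class TokenBucket:
    """Toy rate limiter: at most `burst_count` actions at once, refilled at `per_second`."""

    def __init__(self, per_second: float, burst_count: int):
        self.per_second = per_second
        self.burst_count = burst_count
        self.tokens = float(burst_count)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at the burst size.
        self.tokens = min(
            self.burst_count, self.tokens + (now - self.last) * self.per_second
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# With the rc_invites.per_user defaults (per_second=0.003, burst_count=5), a user can
# receive 5 invites back to back, then roughly one further invite every ~5.5 minutes.
limiter = TokenBucket(per_second=0.003, burst_count=5)
print([limiter.allow() for _ in range(6)])  # [True, True, True, True, True, False]
```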
Example configuration:
```yaml
rc_invites:
@@ -1635,10 +1692,10 @@ Defaults to "en".
Example configuration:
```yaml
url_preview_accept_language:
   - 'en-UK'
   - 'en-US;q=0.9'
   - 'fr;q=0.8'
   - '*;q=0.7'
```
----
Config option: `oembed`
@@ -3298,6 +3355,32 @@ room_list_publication_rules:
    room_id: "*"
    action: allow
```

---
Config option: `default_power_level_content_override`

The `default_power_level_content_override` option controls the default power
levels for rooms.

Useful if you know that your users need special permissions in rooms
that they create (e.g. to send particular types of state events without
needing an elevated power level). This takes the same shape as the
`power_level_content_override` parameter in the /createRoom API, but
is applied before that parameter.

Note that each key provided inside a preset (for example `events` in the example
below) will overwrite all existing defaults inside that key. So in the example
below, newly-created private_chat rooms will have no rules for any event types
except `com.example.foo`.
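The "overwrites the existing default completely" behaviour can be surprising, so here is a small standalone sketch (not Synapse's code) of what happens to a hypothetical subset of the `private_chat` defaults when the example override below is applied:

```python
# Hypothetical subset of the default power levels for a private_chat room.
default_power_levels = {
    "events": {
        "m.room.name": 50,
        "m.room.power_levels": 100,
    },
    "events_default": 0,
}

# The `private_chat` entry from the example configuration below.
override = {"events": {"com.example.foo": 0}}

# Each top-level key in the override replaces the default for that key entirely
# (a shallow merge), so the m.room.name / m.room.power_levels rules disappear.
merged = {**default_power_levels, **override}
print(merged["events"])          # {'com.example.foo': 0}
print(merged["events_default"])  # 0 - untouched, because the override omits it
```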
Example configuration:
```yaml
default_power_level_content_override:
   private_chat: { "events": { "com.example.foo" : 0 } }
   trusted_private_chat: null
   public_chat: null
```

---
## Opentracing ##
Configuration options related to Opentracing support.
@@ -3398,7 +3481,7 @@ stream_writers:
  typing: worker1
```
---
Config option: `run_background_tasks_on`

The worker that is used to run background tasks (e.g. cleaning up expired
data). If not provided this defaults to the main process.
@@ -7,10 +7,10 @@ team.
## Installing and using Synapse

This documentation covers topics for **installation**, **configuration** and
**maintenance** of your Synapse process:

* Learn how to [install](setup/installation.md) and
  [configure](usage/configuration/config_documentation.md) your own instance, perhaps with [Single
  Sign-On](usage/configuration/user_authentication/index.html).

* See how to [upgrade](upgrade.md) between Synapse versions.

@@ -65,7 +65,7 @@ following documentation:

Want to help keep Synapse going but don't know how to code? Synapse is a
[Matrix.org Foundation](https://matrix.org) project. Consider becoming a
supporter on [Liberapay](https://liberapay.com/matrixdotorg),
[Patreon](https://patreon.com/matrixdotorg) or through
[PayPal](https://paypal.me/matrixdotorg) via a one-time donation.
@@ -251,6 +251,8 @@ information.
    # Presence requests
    ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/

    # User directory search requests
    ^/_matrix/client/(r0|v3|unstable)/user_directory/search$

Additionally, the following REST endpoints can be handled for GET requests:

@@ -448,6 +450,14 @@ update_user_directory_from_worker: worker_name
This work cannot be load-balanced; please ensure the main process is restarted
after setting this option in the shared configuration!

User directory updates allow REST endpoints matching the following regular
expressions to work:

    ^/_matrix/client/(r0|v3|unstable)/user_directory/search$

The above endpoints can be routed to any worker, though you may choose to route
them to the chosen user directory worker.

This style of configuration supersedes the legacy `synapse.app.user_dir`
worker application type.
|
||||||
|
|
||||||
|
|
140
mypy.ini
140
mypy.ini
|
@ -10,6 +10,7 @@ warn_unreachable = True
|
||||||
warn_unused_ignores = True
|
warn_unused_ignores = True
|
||||||
local_partial_types = True
|
local_partial_types = True
|
||||||
no_implicit_optional = True
|
no_implicit_optional = True
|
||||||
|
disallow_untyped_defs = True
|
||||||
|
|
||||||
files =
|
files =
|
||||||
docker/,
|
docker/,
|
||||||
|
@ -27,9 +28,6 @@ exclude = (?x)
|
||||||
|synapse/storage/databases/__init__.py
|
|synapse/storage/databases/__init__.py
|
||||||
|synapse/storage/databases/main/cache.py
|
|synapse/storage/databases/main/cache.py
|
||||||
|synapse/storage/databases/main/devices.py
|
|synapse/storage/databases/main/devices.py
|
||||||
|synapse/storage/databases/main/event_federation.py
|
|
||||||
|synapse/storage/databases/main/push_rule.py
|
|
||||||
|synapse/storage/databases/main/roommember.py
|
|
||||||
|synapse/storage/schema/
|
|synapse/storage/schema/
|
||||||
|
|
||||||
|tests/api/test_auth.py
|
|tests/api/test_auth.py
|
||||||
|
@ -43,16 +41,11 @@ exclude = (?x)
|
||||||
|tests/events/test_utils.py
|
|tests/events/test_utils.py
|
||||||
|tests/federation/test_federation_catch_up.py
|
|tests/federation/test_federation_catch_up.py
|
||||||
|tests/federation/test_federation_sender.py
|
|tests/federation/test_federation_sender.py
|
||||||
|tests/federation/test_federation_server.py
|
|
||||||
|tests/federation/transport/test_knocking.py
|
|tests/federation/transport/test_knocking.py
|
||||||
|tests/federation/transport/test_server.py
|
|
||||||
|tests/handlers/test_typing.py
|
|tests/handlers/test_typing.py
|
||||||
|tests/http/federation/test_matrix_federation_agent.py
|
|tests/http/federation/test_matrix_federation_agent.py
|
||||||
|tests/http/federation/test_srv_resolver.py
|
|tests/http/federation/test_srv_resolver.py
|
||||||
|tests/http/test_fedclient.py
|
|
||||||
|tests/http/test_proxyagent.py
|
|tests/http/test_proxyagent.py
|
||||||
|tests/http/test_servlet.py
|
|
||||||
|tests/http/test_site.py
|
|
||||||
|tests/logging/__init__.py
|
|tests/logging/__init__.py
|
||||||
|tests/logging/test_terse_json.py
|
|tests/logging/test_terse_json.py
|
||||||
|tests/module_api/test_api.py
|
|tests/module_api/test_api.py
|
||||||
|
@ -61,12 +54,9 @@ exclude = (?x)
|
||||||
|tests/push/test_push_rule_evaluator.py
|
|tests/push/test_push_rule_evaluator.py
|
||||||
|tests/rest/client/test_transactions.py
|
|tests/rest/client/test_transactions.py
|
||||||
|tests/rest/media/v1/test_media_storage.py
|
|tests/rest/media/v1/test_media_storage.py
|
||||||
|tests/scripts/test_new_matrix_user.py
|
|
||||||
|tests/server.py
|
|tests/server.py
|
||||||
|tests/server_notices/test_resource_limits_server_notices.py
|
|tests/server_notices/test_resource_limits_server_notices.py
|
||||||
|tests/state/test_v2.py
|
|tests/state/test_v2.py
|
||||||
|tests/storage/test_base.py
|
|
||||||
|tests/storage/test_roommember.py
|
|
||||||
|tests/test_metrics.py
|
|tests/test_metrics.py
|
||||||
|tests/test_server.py
|
|tests/test_server.py
|
||||||
|tests/test_state.py
|
|tests/test_state.py
|
||||||
|
@ -89,131 +79,39 @@ exclude = (?x)
|
||||||
|tests/utils.py
|
|tests/utils.py
|
||||||
)$
|
)$
|
||||||
|
|
||||||
[mypy-synapse._scripts.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.api.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.app.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.appservice.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.config.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.crypto.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.event_auth]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.events.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.federation.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.federation.transport.client]
|
[mypy-synapse.federation.transport.client]
|
||||||
disallow_untyped_defs = False
|
disallow_untyped_defs = False
|
||||||
|
|
||||||
[mypy-synapse.handlers.*]
|
[mypy-synapse.http.client]
|
||||||
disallow_untyped_defs = True
|
disallow_untyped_defs = False
|
||||||
|
|
||||||
[mypy-synapse.http.server]
|
[mypy-synapse.http.matrixfederationclient]
|
||||||
disallow_untyped_defs = True
|
disallow_untyped_defs = False
|
||||||
|
|
||||||
[mypy-synapse.logging.context]
|
[mypy-synapse.logging.opentracing]
|
||||||
disallow_untyped_defs = True
|
disallow_untyped_defs = False
|
||||||
|
|
||||||
[mypy-synapse.metrics.*]
|
[mypy-synapse.logging.scopecontextmanager]
|
||||||
disallow_untyped_defs = True
|
disallow_untyped_defs = False
|
||||||
|
|
||||||
[mypy-synapse.metrics._reactor_metrics]
|
[mypy-synapse.metrics._reactor_metrics]
|
||||||
|
disallow_untyped_defs = False
|
||||||
# This module imports select.epoll. That exists on Linux, but doesn't on macOS.
|
# This module imports select.epoll. That exists on Linux, but doesn't on macOS.
|
||||||
# See https://github.com/matrix-org/synapse/pull/11771.
|
# See https://github.com/matrix-org/synapse/pull/11771.
|
||||||
warn_unused_ignores = False
|
warn_unused_ignores = False
|
||||||
|
|
||||||
[mypy-synapse.module_api.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.notifier]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.push.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.replication.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.rest.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.server_notices.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.state.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.account_data]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.client_ips]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.directory]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.e2e_room_keys]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.end_to_end_keys]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.event_push_actions]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.events_bg_updates]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.events_worker]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.room]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.room_batch]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.profile]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.stats]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.state_deltas]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.transactions]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.databases.main.user_erasure_store]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.storage.util.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.streams.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.util.*]
|
|
||||||
disallow_untyped_defs = True
|
|
||||||
|
|
||||||
[mypy-synapse.util.caches.treecache]
|
[mypy-synapse.util.caches.treecache]
|
||||||
disallow_untyped_defs = False
|
disallow_untyped_defs = False
|
||||||
|
|
||||||
|
[mypy-synapse.server]
|
||||||
|
disallow_untyped_defs = False
|
||||||
|
|
||||||
|
[mypy-synapse.storage.database]
|
||||||
|
disallow_untyped_defs = False
|
||||||
|
|
||||||
|
[mypy-tests.*]
|
||||||
|
disallow_untyped_defs = False
|
||||||
|
|
||||||
[mypy-tests.handlers.test_user_directory]
|
[mypy-tests.handlers.test_user_directory]
|
||||||
disallow_untyped_defs = True
|
disallow_untyped_defs = True
|
||||||
|
|
||||||
|
|
|
@ -54,7 +54,7 @@ skip_gitignore = true
|
||||||
|
|
||||||
[tool.poetry]
|
[tool.poetry]
|
||||||
name = "matrix-synapse"
|
name = "matrix-synapse"
|
||||||
version = "1.59.1"
|
version = "1.60.0rc1"
|
||||||
description = "Homeserver for the Matrix decentralised comms protocol"
|
description = "Homeserver for the Matrix decentralised comms protocol"
|
||||||
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
|
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
|
||||||
license = "Apache-2.0"
|
license = "Apache-2.0"
|
||||||
|
|
|
@ -45,6 +45,8 @@ docker build -t matrixdotorg/synapse -f "docker/Dockerfile" .
|
||||||
|
|
||||||
extra_test_args=()
|
extra_test_args=()
|
||||||
|
|
||||||
|
test_tags="synapse_blacklist,msc2716,msc3030"
|
||||||
|
|
||||||
# If we're using workers, modify the docker files slightly.
|
# If we're using workers, modify the docker files slightly.
|
||||||
if [[ -n "$WORKERS" ]]; then
|
if [[ -n "$WORKERS" ]]; then
|
||||||
# Build the workers docker image (from the base Synapse image).
|
# Build the workers docker image (from the base Synapse image).
|
||||||
|
@ -65,6 +67,10 @@ if [[ -n "$WORKERS" ]]; then
|
||||||
else
|
else
|
||||||
export COMPLEMENT_BASE_IMAGE=complement-synapse
|
export COMPLEMENT_BASE_IMAGE=complement-synapse
|
||||||
COMPLEMENT_DOCKERFILE=Dockerfile
|
COMPLEMENT_DOCKERFILE=Dockerfile
|
||||||
|
|
||||||
|
# We only test faster room joins on monoliths, because they are purposefully
|
||||||
|
# being developed without worker support to start with.
|
||||||
|
test_tags="$test_tags,faster_joins"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# Build the Complement image from the Synapse image we just built.
|
# Build the Complement image from the Synapse image we just built.
|
||||||
|
@ -73,4 +79,5 @@ docker build -t $COMPLEMENT_BASE_IMAGE -f "docker/complement/$COMPLEMENT_DOCKERF
|
||||||
# Run the tests!
|
# Run the tests!
|
||||||
echo "Images built; running complement"
|
echo "Images built; running complement"
|
||||||
cd "$COMPLEMENT_DIR"
|
cd "$COMPLEMENT_DIR"
|
||||||
go test -v -tags synapse_blacklist,msc2716,msc3030,faster_joins -count=1 "${extra_test_args[@]}" "$@" ./tests/...
|
|
||||||
|
go test -v -tags $test_tags -count=1 "${extra_test_args[@]}" "$@" ./tests/...
|
||||||
|
|
|
@ -21,7 +21,7 @@ from typing import Callable, Optional, Type
|
||||||
from mypy.nodes import ARG_NAMED_OPT
|
from mypy.nodes import ARG_NAMED_OPT
|
||||||
from mypy.plugin import MethodSigContext, Plugin
|
from mypy.plugin import MethodSigContext, Plugin
|
||||||
from mypy.typeops import bind_self
|
from mypy.typeops import bind_self
|
||||||
from mypy.types import CallableType, NoneType
|
from mypy.types import CallableType, NoneType, UnionType
|
||||||
|
|
||||||
|
|
||||||
class SynapsePlugin(Plugin):
|
class SynapsePlugin(Plugin):
|
||||||
|
@ -72,13 +72,20 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
|
||||||
|
|
||||||
# Third, we add an optional "on_invalidate" argument.
|
# Third, we add an optional "on_invalidate" argument.
|
||||||
#
|
#
|
||||||
# This is a callable which accepts no input and returns nothing.
|
# This is a either
|
||||||
calltyp = CallableType(
|
# - a callable which accepts no input and returns nothing, or
|
||||||
arg_types=[],
|
# - None.
|
||||||
arg_kinds=[],
|
calltyp = UnionType(
|
||||||
arg_names=[],
|
[
|
||||||
ret_type=NoneType(),
|
NoneType(),
|
||||||
fallback=ctx.api.named_generic_type("builtins.function", []),
|
CallableType(
|
||||||
|
arg_types=[],
|
||||||
|
arg_kinds=[],
|
||||||
|
arg_names=[],
|
||||||
|
ret_type=NoneType(),
|
||||||
|
fallback=ctx.api.named_generic_type("builtins.function", []),
|
||||||
|
),
|
||||||
|
]
|
||||||
)
|
)
|
||||||
|
|
||||||
arg_types.append(calltyp)
|
arg_types.append(calltyp)
|
||||||
|
@ -95,7 +102,7 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
|
||||||
|
|
||||||
|
|
||||||
def plugin(version: str) -> Type[SynapsePlugin]:
|
def plugin(version: str) -> Type[SynapsePlugin]:
|
||||||
# This is the entry point of the plugin, and let's us deal with the fact
|
# This is the entry point of the plugin, and lets us deal with the fact
|
||||||
# that the mypy plugin interface is *not* stable by looking at the version
|
# that the mypy plugin interface is *not* stable by looking at the version
|
||||||
# string.
|
# string.
|
||||||
#
|
#
|
||||||
|
|
|
@ -46,14 +46,14 @@ def main() -> None:
|
||||||
"Path to server config file. "
|
"Path to server config file. "
|
||||||
"Used to read in bcrypt_rounds and password_pepper."
|
"Used to read in bcrypt_rounds and password_pepper."
|
||||||
),
|
),
|
||||||
|
required=True,
|
||||||
)
|
)
|
||||||
|
|
||||||
args = parser.parse_args()
|
args = parser.parse_args()
|
||||||
if "config" in args and args.config:
|
config = yaml.safe_load(args.config)
|
||||||
config = yaml.safe_load(args.config)
|
bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
|
||||||
bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
|
password_config = config.get("password_config", None) or {}
|
||||||
password_config = config.get("password_config", None) or {}
|
password_pepper = password_config.get("pepper", password_pepper)
|
||||||
password_pepper = password_config.get("pepper", password_pepper)
|
|
||||||
password = args.password
|
password = args.password
|
||||||
|
|
||||||
if not password:
|
if not password:
|
||||||
|
|
|
@ -65,6 +65,8 @@ class JoinRules:
|
||||||
PRIVATE: Final = "private"
|
PRIVATE: Final = "private"
|
||||||
# As defined for MSC3083.
|
# As defined for MSC3083.
|
||||||
RESTRICTED: Final = "restricted"
|
RESTRICTED: Final = "restricted"
|
||||||
|
# As defined for MSC3787.
|
||||||
|
KNOCK_RESTRICTED: Final = "knock_restricted"
|
||||||
|
|
||||||
|
|
||||||
class RestrictedJoinRuleTypes:
|
class RestrictedJoinRuleTypes:
|
||||||
|
|
|
@ -17,6 +17,7 @@
|
||||||
|
|
||||||
import logging
|
import logging
|
||||||
import typing
|
import typing
|
||||||
|
from enum import Enum
|
||||||
from http import HTTPStatus
|
from http import HTTPStatus
|
||||||
from typing import Any, Dict, List, Optional, Union
|
from typing import Any, Dict, List, Optional, Union
|
||||||
|
|
||||||
|
@ -30,7 +31,11 @@ if typing.TYPE_CHECKING:
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
class Codes:
|
class Codes(str, Enum):
|
||||||
|
"""
|
||||||
|
All known error codes, as an enum of strings.
|
||||||
|
"""
|
||||||
|
|
||||||
UNRECOGNIZED = "M_UNRECOGNIZED"
|
UNRECOGNIZED = "M_UNRECOGNIZED"
|
||||||
UNAUTHORIZED = "M_UNAUTHORIZED"
|
UNAUTHORIZED = "M_UNAUTHORIZED"
|
||||||
FORBIDDEN = "M_FORBIDDEN"
|
FORBIDDEN = "M_FORBIDDEN"
|
||||||
|
|
|
@ -19,6 +19,7 @@ from typing import (
|
||||||
TYPE_CHECKING,
|
TYPE_CHECKING,
|
||||||
Awaitable,
|
Awaitable,
|
||||||
Callable,
|
Callable,
|
||||||
|
Collection,
|
||||||
Dict,
|
Dict,
|
||||||
Iterable,
|
Iterable,
|
||||||
List,
|
List,
|
||||||
|
@ -444,9 +445,9 @@ class Filter:
|
||||||
return room_ids
|
return room_ids
|
||||||
|
|
||||||
async def _check_event_relations(
|
async def _check_event_relations(
|
||||||
self, events: Iterable[FilterEvent]
|
self, events: Collection[FilterEvent]
|
||||||
) -> List[FilterEvent]:
|
) -> List[FilterEvent]:
|
||||||
# The event IDs to check, mypy doesn't understand the ifinstance check.
|
# The event IDs to check, mypy doesn't understand the isinstance check.
|
||||||
event_ids = [event.event_id for event in events if isinstance(event, EventBase)] # type: ignore[attr-defined]
|
event_ids = [event.event_id for event in events if isinstance(event, EventBase)] # type: ignore[attr-defined]
|
||||||
event_ids_to_keep = set(
|
event_ids_to_keep = set(
|
||||||
await self._store.events_have_relations(
|
await self._store.events_have_relations(
|
||||||
|
|
|
@ -81,6 +81,9 @@ class RoomVersion:
|
||||||
msc2716_historical: bool
|
msc2716_historical: bool
|
||||||
# MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
|
# MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
|
||||||
msc2716_redactions: bool
|
msc2716_redactions: bool
|
||||||
|
# MSC3787: Adds support for a `knock_restricted` join rule, mixing concepts of
|
||||||
|
# knocks and restricted join rules into the same join condition.
|
||||||
|
msc3787_knock_restricted_join_rule: bool
|
||||||
|
|
||||||
|
|
||||||
class RoomVersions:
|
class RoomVersions:
|
||||||
|
@ -99,6 +102,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=False,
|
msc2403_knocking=False,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
V2 = RoomVersion(
|
V2 = RoomVersion(
|
||||||
"2",
|
"2",
|
||||||
|
@ -115,6 +119,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=False,
|
msc2403_knocking=False,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
V3 = RoomVersion(
|
V3 = RoomVersion(
|
||||||
"3",
|
"3",
|
||||||
|
@ -131,6 +136,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=False,
|
msc2403_knocking=False,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
V4 = RoomVersion(
|
V4 = RoomVersion(
|
||||||
"4",
|
"4",
|
||||||
|
@ -147,6 +153,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=False,
|
msc2403_knocking=False,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
V5 = RoomVersion(
|
V5 = RoomVersion(
|
||||||
"5",
|
"5",
|
||||||
|
@ -163,6 +170,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=False,
|
msc2403_knocking=False,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
V6 = RoomVersion(
|
V6 = RoomVersion(
|
||||||
"6",
|
"6",
|
||||||
|
@ -179,6 +187,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=False,
|
msc2403_knocking=False,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
MSC2176 = RoomVersion(
|
MSC2176 = RoomVersion(
|
||||||
"org.matrix.msc2176",
|
"org.matrix.msc2176",
|
||||||
|
@ -195,6 +204,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=False,
|
msc2403_knocking=False,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
V7 = RoomVersion(
|
V7 = RoomVersion(
|
||||||
"7",
|
"7",
|
||||||
|
@ -211,6 +221,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=True,
|
msc2403_knocking=True,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
V8 = RoomVersion(
|
V8 = RoomVersion(
|
||||||
"8",
|
"8",
|
||||||
|
@ -227,6 +238,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=True,
|
msc2403_knocking=True,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
V9 = RoomVersion(
|
V9 = RoomVersion(
|
||||||
"9",
|
"9",
|
||||||
|
@ -243,6 +255,7 @@ class RoomVersions:
|
||||||
msc2403_knocking=True,
|
msc2403_knocking=True,
|
||||||
msc2716_historical=False,
|
msc2716_historical=False,
|
||||||
msc2716_redactions=False,
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
)
|
)
|
||||||
MSC2716v3 = RoomVersion(
|
MSC2716v3 = RoomVersion(
|
||||||
"org.matrix.msc2716v3",
|
"org.matrix.msc2716v3",
|
||||||
|
@ -259,6 +272,24 @@ class RoomVersions:
|
||||||
msc2403_knocking=True,
|
msc2403_knocking=True,
|
||||||
msc2716_historical=True,
|
msc2716_historical=True,
|
||||||
msc2716_redactions=True,
|
msc2716_redactions=True,
|
||||||
|
msc3787_knock_restricted_join_rule=False,
|
||||||
|
)
|
||||||
|
MSC3787 = RoomVersion(
|
||||||
|
"org.matrix.msc3787",
|
||||||
|
RoomDisposition.UNSTABLE,
|
||||||
|
EventFormatVersions.V3,
|
||||||
|
StateResolutionVersions.V2,
|
||||||
|
enforce_key_validity=True,
|
||||||
|
special_case_aliases_auth=False,
|
||||||
|
strict_canonicaljson=True,
|
||||||
|
limit_notifications_power_levels=True,
|
||||||
|
msc2176_redaction_rules=False,
|
||||||
|
msc3083_join_rules=True,
|
||||||
|
msc3375_redaction_rules=True,
|
||||||
|
msc2403_knocking=True,
|
||||||
|
msc2716_historical=False,
|
||||||
|
msc2716_redactions=False,
|
||||||
|
msc3787_knock_restricted_join_rule=True,
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@ -276,6 +307,7 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
|
||||||
RoomVersions.V8,
|
RoomVersions.V8,
|
||||||
RoomVersions.V9,
|
RoomVersions.V9,
|
||||||
RoomVersions.MSC2716v3,
|
RoomVersions.MSC2716v3,
|
||||||
|
RoomVersions.MSC3787,
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@ -49,9 +49,12 @@ from twisted.logger import LoggingFile, LogLevel
|
||||||
from twisted.protocols.tls import TLSMemoryBIOFactory
|
from twisted.protocols.tls import TLSMemoryBIOFactory
|
||||||
from twisted.python.threadpool import ThreadPool
|
from twisted.python.threadpool import ThreadPool
|
||||||
|
|
||||||
|
import synapse.util.caches
|
||||||
from synapse.api.constants import MAX_PDU_SIZE
|
from synapse.api.constants import MAX_PDU_SIZE
|
||||||
from synapse.app import check_bind_error
|
from synapse.app import check_bind_error
|
||||||
from synapse.app.phone_stats_home import start_phone_stats_home
|
from synapse.app.phone_stats_home import start_phone_stats_home
|
||||||
|
from synapse.config import ConfigError
|
||||||
|
from synapse.config._base import format_config_error
|
||||||
from synapse.config.homeserver import HomeServerConfig
|
from synapse.config.homeserver import HomeServerConfig
|
||||||
from synapse.config.server import ManholeConfig
|
from synapse.config.server import ManholeConfig
|
||||||
from synapse.crypto import context_factory
|
from synapse.crypto import context_factory
|
||||||
|
@ -432,6 +435,10 @@ async def start(hs: "HomeServer") -> None:
|
||||||
signal.signal(signal.SIGHUP, run_sighup)
|
signal.signal(signal.SIGHUP, run_sighup)
|
||||||
|
|
||||||
register_sighup(refresh_certificate, hs)
|
register_sighup(refresh_certificate, hs)
|
||||||
|
register_sighup(reload_cache_config, hs.config)
|
||||||
|
|
||||||
|
# Apply the cache config.
|
||||||
|
hs.config.caches.resize_all_caches()
|
||||||
|
|
||||||
# Load the certificate from disk.
|
# Load the certificate from disk.
|
||||||
refresh_certificate(hs)
|
refresh_certificate(hs)
|
||||||
|
@ -486,6 +493,43 @@ async def start(hs: "HomeServer") -> None:
|
||||||
atexit.register(gc.freeze)
|
atexit.register(gc.freeze)
|
||||||
|
|
||||||
|
|
||||||
|
def reload_cache_config(config: HomeServerConfig) -> None:
|
||||||
|
"""Reload cache config from disk and immediately apply it.resize caches accordingly.
|
||||||
|
|
||||||
|
If the config is invalid, a `ConfigError` is logged and no changes are made.
|
||||||
|
|
||||||
|
Otherwise, this:
|
||||||
|
- replaces the `caches` section on the given `config` object,
|
||||||
|
- resizes all caches according to the new cache factors, and
|
||||||
|
|
||||||
|
Note that the following cache config keys are read, but not applied:
|
||||||
|
- event_cache_size: used to set a max_size and _original_max_size on
|
||||||
|
EventsWorkerStore._get_event_cache when it is created. We'd have to update
|
||||||
|
the _original_max_size (and maybe
|
||||||
|
- sync_response_cache_duration: would have to update the timeout_sec attribute on
|
||||||
|
HomeServer -> SyncHandler -> ResponseCache.
|
||||||
|
- track_memory_usage. This affects synapse.util.caches.TRACK_MEMORY_USAGE which
|
||||||
|
influences Synapse's self-reported metrics.
|
||||||
|
|
||||||
|
Also, the HTTPConnectionPool in SimpleHTTPClient sets its maxPersistentPerHost
|
||||||
|
parameter based on the global_factor. This won't be applied on a config reload.
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
previous_cache_config = config.reload_config_section("caches")
|
||||||
|
except ConfigError as e:
|
||||||
|
logger.warning("Failed to reload cache config")
|
||||||
|
for f in format_config_error(e):
|
||||||
|
logger.warning(f)
|
||||||
|
else:
|
||||||
|
logger.debug(
|
||||||
|
"New cache config. Was:\n %s\nNow:\n",
|
||||||
|
previous_cache_config.__dict__,
|
||||||
|
config.caches.__dict__,
|
||||||
|
)
|
||||||
|
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
|
||||||
|
config.caches.resize_all_caches()
|
||||||
|
|
||||||
|
|
||||||
def setup_sentry(hs: "HomeServer") -> None:
|
def setup_sentry(hs: "HomeServer") -> None:
|
||||||
"""Enable sentry integration, if enabled in configuration"""
|
"""Enable sentry integration, if enabled in configuration"""
|
||||||
|
|
||||||
|
|
|
@@ -16,7 +16,7 @@
 import logging
 import os
 import sys
-from typing import Dict, Iterable, Iterator, List
+from typing import Dict, Iterable, List

 from matrix_common.versionstring import get_distribution_version_string

@@ -45,7 +45,7 @@ from synapse.app._base import (
     redirect_stdio_to_logs,
     register_start,
 )
-from synapse.config._base import ConfigError
+from synapse.config._base import ConfigError, format_config_error
 from synapse.config.emailconfig import ThreepidBehaviour
 from synapse.config.homeserver import HomeServerConfig
 from synapse.config.server import ListenerConfig

@@ -399,38 +399,6 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
     return hs


-def format_config_error(e: ConfigError) -> Iterator[str]:
-    """
-    Formats a config error neatly
-
-    The idea is to format the immediate error, plus the "causes" of those errors,
-    hopefully in a way that makes sense to the user. For example:
-
-        Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
-          Failed to parse config for module 'JinjaOidcMappingProvider':
-            invalid jinja template:
-              unexpected end of template, expected 'end of print statement'.
-
-    Args:
-        e: the error to be formatted
-
-    Returns: An iterator which yields string fragments to be formatted
-    """
-    yield "Error in configuration"
-
-    if e.path:
-        yield " at '%s'" % (".".join(e.path),)
-
-    yield ":\n  %s" % (e.msg,)
-
-    parent_e = e.__cause__
-    indent = 1
-    while parent_e:
-        indent += 1
-        yield ":\n%s%s" % ("  " * indent, str(parent_e))
-        parent_e = parent_e.__cause__
-
-
 def run(hs: HomeServer) -> None:
     _base.start_reactor(
         "synapse-homeserver",

@@ -16,14 +16,18 @@
 import argparse
 import errno
+import logging
 import os
 from collections import OrderedDict
 from hashlib import sha256
 from textwrap import dedent
 from typing import (
     Any,
+    ClassVar,
+    Collection,
     Dict,
     Iterable,
+    Iterator,
     List,
     MutableMapping,
     Optional,

@@ -40,6 +44,8 @@ import yaml
 from synapse.util.templates import _create_mxc_to_http_filter, _format_ts_filter

+logger = logging.getLogger(__name__)
+

 class ConfigError(Exception):
     """Represents a problem parsing the configuration

@@ -55,6 +61,38 @@ class ConfigError(Exception):
         self.path = path


+def format_config_error(e: ConfigError) -> Iterator[str]:
+    """
+    Formats a config error neatly
+
+    The idea is to format the immediate error, plus the "causes" of those errors,
+    hopefully in a way that makes sense to the user. For example:
+
+        Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
+          Failed to parse config for module 'JinjaOidcMappingProvider':
+            invalid jinja template:
+              unexpected end of template, expected 'end of print statement'.
+
+    Args:
+        e: the error to be formatted
+
+    Returns: An iterator which yields string fragments to be formatted
+    """
+    yield "Error in configuration"
+
+    if e.path:
+        yield " at '%s'" % (".".join(e.path),)
+
+    yield ":\n  %s" % (e.msg,)
+
+    parent_e = e.__cause__
+    indent = 1
+    while parent_e:
+        indent += 1
+        yield ":\n%s%s" % ("  " * indent, str(parent_e))
+        parent_e = parent_e.__cause__
+
+
 # We split these messages out to allow packages to override with package
 # specific instructions.
 MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS = """\

@@ -119,7 +157,7 @@ class Config:
         defined in subclasses.
     """

-    section: str
+    section: ClassVar[str]

     def __init__(self, root_config: "RootConfig" = None):
         self.root = root_config

@@ -309,9 +347,12 @@ class RootConfig:
     class, lower-cased and with "Config" removed.
     """

-    config_classes = []
+    config_classes: List[Type[Config]] = []
+
+    def __init__(self, config_files: Collection[str] = ()):
+        # Capture absolute paths here, so we can reload config after we daemonize.
+        self.config_files = [os.path.abspath(path) for path in config_files]

-    def __init__(self):
         for config_class in self.config_classes:
             if config_class.section is None:
                 raise ValueError("%r requires a section name" % (config_class,))

@@ -512,12 +553,10 @@ class RootConfig:
             object from parser.parse_args(..)`
         """

-        obj = cls()
-
         config_args = parser.parse_args(argv)

         config_files = find_config_files(search_paths=config_args.config_path)
+        obj = cls(config_files)
         if not config_files:
             parser.error("Must supply a config file.")

@@ -627,7 +666,7 @@ class RootConfig:

         generate_missing_configs = config_args.generate_missing_configs

-        obj = cls()
+        obj = cls(config_files)

         if config_args.generate_config:
             if config_args.report_stats is None:

@@ -727,6 +766,34 @@ class RootConfig:
     ) -> None:
         self.invoke_all("generate_files", config_dict, config_dir_path)

+    def reload_config_section(self, section_name: str) -> Config:
+        """Reconstruct the given config section, leaving all others unchanged.
+
+        This works in three steps:
+
+        1. Create a new instance of the relevant `Config` subclass.
+        2. Call `read_config` on that instance to parse the new config.
+        3. Replace the existing config instance with the new one.
+
+        :raises ValueError: if the given `section` does not exist.
+        :raises ConfigError: for any other problems reloading config.
+
+        :returns: the previous config object, which no longer has a reference to this
+            RootConfig.
+        """
+        existing_config: Optional[Config] = getattr(self, section_name, None)
+        if existing_config is None:
+            raise ValueError(f"Unknown config section '{section_name}'")
+        logger.info("Reloading config section '%s'", section_name)
+
+        new_config_data = read_config_files(self.config_files)
+        new_config = type(existing_config)(self)
+        new_config.read_config(new_config_data)
+        setattr(self, section_name, new_config)
+
+        existing_config.root = None
+        return existing_config
+

 def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
     """Read the config files into a dict

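A short usage sketch tying the two new helpers together (illustrative only; the `root_config` instance and surrounding setup are assumed, not shown in this diff):

    # Reload one section and report a nested ConfigError the way the CLI would.
    try:
        old_caches = root_config.reload_config_section("caches")
    except ConfigError as e:
        # format_config_error yields fragments; join them into one message.
        print("".join(format_config_error(e)))
    else:
        print("cache factors changed from", old_caches.cache_factors,
              "to", root_config.caches.cache_factors)
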
@@ -1,15 +1,19 @@
 import argparse
 from typing import (
     Any,
+    Collection,
     Dict,
     Iterable,
+    Iterator,
     List,
+    Literal,
     MutableMapping,
     Optional,
     Tuple,
     Type,
     TypeVar,
     Union,
+    overload,
 )

 import jinja2

@@ -65,6 +69,8 @@ class ConfigError(Exception):
         self.msg = msg
         self.path = path

+def format_config_error(e: ConfigError) -> Iterator[str]: ...
+
 MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
 MISSING_REPORT_STATS_SPIEL: str
 MISSING_SERVER_NAME: str

@@ -119,7 +125,8 @@ class RootConfig:
     background_updates: background_updates.BackgroundUpdateConfig

     config_classes: List[Type["Config"]] = ...
-    def __init__(self) -> None: ...
+    config_files: List[str]
+    def __init__(self, config_files: Collection[str] = ...) -> None: ...
     def invoke_all(
         self, func_name: str, *args: Any, **kwargs: Any
     ) -> MutableMapping[str, Any]: ...

@@ -159,6 +166,12 @@ class RootConfig:
     def generate_missing_files(
         self, config_dict: dict, config_dir_path: str
     ) -> None: ...
+    @overload
+    def reload_config_section(
+        self, section_name: Literal["caches"]
+    ) -> cache.CacheConfig: ...
+    @overload
+    def reload_config_section(self, section_name: str) -> Config: ...

 class Config:
     root: RootConfig

@@ -69,11 +69,11 @@ def _canonicalise_cache_name(cache_name: str) -> str:
 def add_resizable_cache(
     cache_name: str, cache_resize_callback: Callable[[float], None]
 ) -> None:
-    """Register a cache that's size can dynamically change
+    """Register a cache whose size can dynamically change

     Args:
         cache_name: A reference to the cache
-        cache_resize_callback: A callback function that will be ran whenever
+        cache_resize_callback: A callback function that will run whenever
             the cache needs to be resized
     """
     # Some caches have '*' in them which we strip out.

@@ -96,6 +96,13 @@ class CacheConfig(Config):
     section = "caches"
     _environ = os.environ

+    event_cache_size: int
+    cache_factors: Dict[str, float]
+    global_factor: float
+    track_memory_usage: bool
+    expiry_time_msec: Optional[int]
+    sync_response_cache_duration: int
+
     @staticmethod
     def reset() -> None:
         """Resets the caches to their defaults. Used for tests."""

@@ -115,6 +122,12 @@ class CacheConfig(Config):
         # A cache 'factor' is a multiplier that can be applied to each of
         # Synapse's caches in order to increase or decrease the maximum
         # number of entries that can be stored.
+        #
+        # The configuration for cache factors (caches.global_factor and
+        # caches.per_cache_factors) can be reloaded while the application is running,
+        # by sending a SIGHUP signal to the Synapse process. Changes to other parts of
+        # the caching config will NOT be applied after a SIGHUP is received; a restart
+        # is necessary.

         # The number of events to cache in memory. Not affected by
         # caches.global_factor.

@@ -163,6 +176,24 @@ class CacheConfig(Config):
         #
         #cache_entry_ttl: 30m

+        # This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
+        # `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
+        # a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
+        # this option, and all three of the options must be specified for this feature to work.
+        #cache_autotuning:
+          # This flag sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
+          # They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
+          # the flag below, or until the `min_cache_ttl` is hit.
+          #max_cache_memory_usage: 1024M
+
+          # This flag sets a rough target for the desired memory usage of the caches.
+          #target_cache_memory_usage: 758M
+
+          # `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
+          # caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
+          # from being emptied while Synapse is evicting due to memory.
+          #min_cache_ttl: 5m
+
         # Controls how long the results of a /sync request are cached for after
         # a successful response is returned. A higher duration can help clients with
         # intermittent connections, at the cost of higher memory usage.

@@ -174,21 +205,21 @@ class CacheConfig(Config):
         """

     def read_config(self, config: JsonDict, **kwargs: Any) -> None:
+        """Populate this config object with values from `config`.
+
+        This method does NOT resize existing or future caches: use `resize_all_caches`.
+        We use two separate methods so that we can reject bad config before applying it.
+        """
         self.event_cache_size = self.parse_size(
             config.get("event_cache_size", _DEFAULT_EVENT_CACHE_SIZE)
         )
-        self.cache_factors: Dict[str, float] = {}
+        self.cache_factors = {}

         cache_config = config.get("caches") or {}
-        self.global_factor = cache_config.get(
-            "global_factor", properties.default_factor_size
-        )
+        self.global_factor = cache_config.get("global_factor", _DEFAULT_FACTOR_SIZE)
         if not isinstance(self.global_factor, (int, float)):
             raise ConfigError("caches.global_factor must be a number.")

-        # Set the global one so that it's reflected in new caches
-        properties.default_factor_size = self.global_factor
-
         # Load cache factors from the config
         individual_factors = cache_config.get("per_cache_factors") or {}
         if not isinstance(individual_factors, dict):

@@ -230,7 +261,7 @@ class CacheConfig(Config):
         cache_entry_ttl = cache_config.get("cache_entry_ttl", "30m")

         if expire_caches:
-            self.expiry_time_msec: Optional[int] = self.parse_duration(cache_entry_ttl)
+            self.expiry_time_msec = self.parse_duration(cache_entry_ttl)
         else:
             self.expiry_time_msec = None

@@ -250,23 +281,38 @@ class CacheConfig(Config):
             )
             self.expiry_time_msec = self.parse_duration(expiry_time)

+        self.cache_autotuning = cache_config.get("cache_autotuning")
+        if self.cache_autotuning:
+            max_memory_usage = self.cache_autotuning.get("max_cache_memory_usage")
+            self.cache_autotuning["max_cache_memory_usage"] = self.parse_size(
+                max_memory_usage
+            )
+
+            target_mem_size = self.cache_autotuning.get("target_cache_memory_usage")
+            self.cache_autotuning["target_cache_memory_usage"] = self.parse_size(
+                target_mem_size
+            )
+
+            min_cache_ttl = self.cache_autotuning.get("min_cache_ttl")
+            self.cache_autotuning["min_cache_ttl"] = self.parse_duration(min_cache_ttl)
+
         self.sync_response_cache_duration = self.parse_duration(
             cache_config.get("sync_response_cache_duration", 0)
         )

-        # Resize all caches (if necessary) with the new factors we've loaded
-        self.resize_all_caches()
-
-        # Store this function so that it can be called from other classes without
-        # needing an instance of Config
-        properties.resize_all_caches_func = self.resize_all_caches
-
     def resize_all_caches(self) -> None:
-        """Ensure all cache sizes are up to date
+        """Ensure all cache sizes are up-to-date.

         For each cache, run the mapped callback function with either
         a specific cache factor or the default, global one.
         """
+        # Set the global factor size, so that new caches are appropriately sized.
+        properties.default_factor_size = self.global_factor
+
+        # Store this function so that it can be called from other classes without
+        # needing an instance of CacheConfig
+        properties.resize_all_caches_func = self.resize_all_caches
+
         # block other threads from modifying _CACHES while we iterate it.
         with _CACHES_LOCK:
             for cache_name, callback in _CACHES.items():

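The autotuning options above describe an eviction policy rather than exposing one directly. Purely as an illustration of that policy (this is not Synapse's implementation; the function and entry layout are invented for the example):

    import time

    def autotune_evict(entries, memory_used, max_usage, target_usage, min_ttl_ms):
        """Evict oldest entries once memory_used exceeds max_usage, stopping when
        usage drops below target_usage or only entries newer than min_ttl_ms remain."""
        if memory_used <= max_usage:
            return entries, memory_used
        now_ms = time.time() * 1000
        kept = []
        for inserted_ms, size, value in sorted(entries):  # oldest first
            if memory_used > target_usage and now_ms - inserted_ms > min_ttl_ms:
                memory_used -= size  # evict this entry
            else:
                kept.append((inserted_ms, size, value))
        return kept, memory_used
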
@@ -57,9 +57,9 @@ class OembedConfig(Config):
         """
         # Whether to use the packaged providers.json file.
         if not oembed_config.get("disable_default_providers") or False:
-            providers = json.load(
-                pkg_resources.resource_stream("synapse", "res/providers.json")
-            )
+            with pkg_resources.resource_stream("synapse", "res/providers.json") as s:
+                providers = json.load(s)
             yield from self._parse_and_validate_provider(
                 providers, config_path=("oembed",)
             )

@@ -63,6 +63,19 @@ class RoomConfig(Config):
                 "Invalid value for encryption_enabled_by_default_for_room_type"
             )

+        self.default_power_level_content_override = config.get(
+            "default_power_level_content_override",
+            None,
+        )
+        if self.default_power_level_content_override is not None:
+            for preset in self.default_power_level_content_override:
+                if preset not in vars(RoomCreationPreset).values():
+                    raise ConfigError(
+                        "Unrecognised room preset %s in default_power_level_content_override"
+                        % preset
+                    )
+            # We validate the actual overrides when we try to apply them.
+
     def generate_config_section(self, **kwargs: Any) -> str:
         return """\
         ## Rooms ##

@@ -83,4 +96,38 @@ class RoomConfig(Config):
         # will also not affect rooms created by other servers.
         #
         #encryption_enabled_by_default_for_room_type: invite
+
+        # Override the default power levels for rooms created on this server, per
+        # room creation preset.
+        #
+        # The appropriate dictionary for the room preset will be applied on top
+        # of the existing power levels content.
+        #
+        # Useful if you know that your users need special permissions in rooms
+        # that they create (e.g. to send particular types of state events without
+        # needing an elevated power level). This takes the same shape as the
+        # `power_level_content_override` parameter in the /createRoom API, but
+        # is applied before that parameter.
+        #
+        # Valid keys are some or all of `private_chat`, `trusted_private_chat`
+        # and `public_chat`. Inside each of those should be any of the
+        # properties allowed in `power_level_content_override` in the
+        # /createRoom API. If any property is missing, its default value will
+        # continue to be used. If any property is present, it will overwrite
+        # the existing default completely (so if the `events` property exists,
+        # the default event power levels will be ignored).
+        #
+        #default_power_level_content_override:
+        #    private_chat:
+        #        "events":
+        #            "com.example.myeventtype" : 0
+        #            "m.room.avatar": 50
+        #            "m.room.canonical_alias": 50
+        #            "m.room.encryption": 100
+        #            "m.room.history_visibility": 100
+        #            "m.room.name": 50
+        #            "m.room.power_levels": 100
+        #            "m.room.server_acl": 100
+        #            "m.room.tombstone": 100
+        #        "events_default": 1
         """

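To make the "overwrite the existing default completely" caveat above concrete, here is a small illustration; the default values shown are simplified stand-ins, not the server's actual defaults:

    default_levels = {
        "events_default": 0,
        "state_default": 50,
        "events": {"m.room.name": 50, "m.room.power_levels": 100},
    }
    override = {"events": {"com.example.myeventtype": 0}, "events_default": 1}

    merged = {**default_levels, **override}
    # "events" is replaced wholesale, so the default event power levels are gone:
    assert merged["events"] == {"com.example.myeventtype": 0}
    assert merged["state_default"] == 50  # untouched keys keep their defaults
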
@@ -996,7 +996,7 @@ class ServerConfig(Config):
         #   federation: the server-server API (/_matrix/federation). Also implies
         #       'media', 'keys', 'openid'
         #
-        #   keys: the key discovery API (/_matrix/keys).
+        #   keys: the key discovery API (/_matrix/key).
         #
         #   media: the media API (/_matrix/media).
         #

@@ -414,7 +414,12 @@ def _is_membership_change_allowed(
             raise AuthError(403, "You are banned from this room")
         elif join_rule == JoinRules.PUBLIC:
             pass
-        elif room_version.msc3083_join_rules and join_rule == JoinRules.RESTRICTED:
+        elif (
+            room_version.msc3083_join_rules and join_rule == JoinRules.RESTRICTED
+        ) or (
+            room_version.msc3787_knock_restricted_join_rule
+            and join_rule == JoinRules.KNOCK_RESTRICTED
+        ):
             # This is the same as public, but the event must contain a reference
             # to the server who authorised the join. If the event does not contain
             # the proper content it is rejected.

@@ -440,8 +445,13 @@ def _is_membership_change_allowed(
                 if authorising_user_level < invite_level:
                     raise AuthError(403, "Join event authorised by invalid server.")

-        elif join_rule == JoinRules.INVITE or (
-            room_version.msc2403_knocking and join_rule == JoinRules.KNOCK
+        elif (
+            join_rule == JoinRules.INVITE
+            or (room_version.msc2403_knocking and join_rule == JoinRules.KNOCK)
+            or (
+                room_version.msc3787_knock_restricted_join_rule
+                and join_rule == JoinRules.KNOCK_RESTRICTED
+            )
         ):
             if not caller_in_room and not caller_invited:
                 raise AuthError(403, "You are not invited to this room.")

@@ -462,7 +472,10 @@ def _is_membership_change_allowed(
         if user_level < ban_level or user_level <= target_level:
             raise AuthError(403, "You don't have permission to ban")
     elif room_version.msc2403_knocking and Membership.KNOCK == membership:
-        if join_rule != JoinRules.KNOCK:
+        if join_rule != JoinRules.KNOCK and (
+            not room_version.msc3787_knock_restricted_join_rule
+            or join_rule != JoinRules.KNOCK_RESTRICTED
+        ):
             raise AuthError(403, "You don't have permission to knock")
         elif target_user_id != event.user_id:
             raise AuthError(403, "You cannot knock for other users")

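A condensed restatement of the join-rule checks above, for readability only (string literals stand in for the JoinRules constants; this is not the code that ships):

    def restricted_like(room_version, join_rule: str) -> bool:
        # A join is handled like a restricted join under either MSC3083
        # restricted rooms or MSC3787 knock_restricted rooms.
        return (room_version.msc3083_join_rules and join_rule == "restricted") or (
            room_version.msc3787_knock_restricted_join_rule
            and join_rule == "knock_restricted"
        )
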
@@ -15,6 +15,7 @@
 # limitations under the License.

 import abc
+import collections.abc
 import os
 from typing import (
     TYPE_CHECKING,

@@ -32,9 +33,11 @@ from typing import (
     overload,
 )

+import attr
 from typing_extensions import Literal
 from unpaddedbase64 import encode_base64

+from synapse.api.constants import RelationTypes
 from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions
 from synapse.types import JsonDict, RoomStreamToken
 from synapse.util.caches import intern_dict

@@ -615,3 +618,45 @@ def make_event_from_dict(
     return event_type(
         event_dict, room_version, internal_metadata_dict or {}, rejected_reason
     )
+
+
+@attr.s(slots=True, frozen=True, auto_attribs=True)
+class _EventRelation:
+    # The target event of the relation.
+    parent_id: str
+    # The relation type.
+    rel_type: str
+    # The aggregation key. Will be None if the rel_type is not m.annotation or is
+    # not a string.
+    aggregation_key: Optional[str]
+
+
+def relation_from_event(event: EventBase) -> Optional[_EventRelation]:
+    """
+    Attempt to parse relation information from an event.
+
+    Returns:
+        The event relation information, if it is valid. None, otherwise.
+    """
+    relation = event.content.get("m.relates_to")
+    if not relation or not isinstance(relation, collections.abc.Mapping):
+        # No relation information.
+        return None
+
+    # Relations must have a type and parent event ID.
+    rel_type = relation.get("rel_type")
+    if not isinstance(rel_type, str):
+        return None
+
+    parent_id = relation.get("event_id")
+    if not isinstance(parent_id, str):
+        return None
+
+    # Annotations have a key field.
+    aggregation_key = None
+    if rel_type == RelationTypes.ANNOTATION:
+        aggregation_key = relation.get("key")
+        if not isinstance(aggregation_key, str):
+            aggregation_key = None
+
+    return _EventRelation(parent_id, rel_type, aggregation_key)

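For reference, the event content shape that relation_from_event accepts looks like this (the values are made up):

    content = {
        "m.relates_to": {
            "rel_type": "m.annotation",
            "event_id": "$parent_event_id",
            "key": "👍",
        }
    }
    # Given an event with this content, relation_from_event would return
    # _EventRelation(parent_id="$parent_event_id", rel_type="m.annotation",
    #                aggregation_key="👍").
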
@@ -15,17 +15,16 @@ from typing import TYPE_CHECKING, List, Optional, Tuple, Union

 import attr
 from frozendict import frozendict
-
-from twisted.internet.defer import Deferred
+from typing_extensions import Literal

 from synapse.appservice import ApplicationService
 from synapse.events import EventBase
-from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.types import JsonDict, StateMap

 if TYPE_CHECKING:
     from synapse.storage import Storage
     from synapse.storage.databases.main import DataStore
+    from synapse.storage.state import StateFilter


 @attr.s(slots=True, auto_attribs=True)

@@ -60,6 +59,9 @@ class EventContext:
             If ``state_group`` is None (ie, the event is an outlier),
             ``state_group_before_event`` will always also be ``None``.

+        state_delta_due_to_event: If `state_group` and `state_group_before_event` are not None
+            then this is the delta of the state between the two groups.
+
         prev_group: If it is known, ``state_group``'s prev_group. Note that this being
             None does not necessarily mean that ``state_group`` does not have
             a prev_group!

@@ -78,73 +80,47 @@ class EventContext:
         app_service: If this event is being sent by a (local) application service, that
             app service.

-        _current_state_ids: The room state map, including this event - ie, the state
-            in ``state_group``.
-
-            (type, state_key) -> event_id
-
-            For an outlier, this is {}
-
-            Note that this is a private attribute: it should be accessed via
-            ``get_current_state_ids``. _AsyncEventContext impl calculates this
-            on-demand: it will be None until that happens.
-
-        _prev_state_ids: The room state map, excluding this event - ie, the state
-            in ``state_group_before_event``. For a non-state
-            event, this will be the same as _current_state_events.
-
-            Note that it is a completely different thing to prev_group!
-
-            (type, state_key) -> event_id
-
-            For an outlier, this is {}
-
-            As with _current_state_ids, this is a private attribute. It should be
-            accessed via get_prev_state_ids.
-
         partial_state: if True, we may be storing this event with a temporary,
             incomplete state.
     """

-    rejected: Union[bool, str] = False
+    _storage: "Storage"
+    rejected: Union[Literal[False], str] = False
     _state_group: Optional[int] = None
     state_group_before_event: Optional[int] = None
+    _state_delta_due_to_event: Optional[StateMap[str]] = None
     prev_group: Optional[int] = None
     delta_ids: Optional[StateMap[str]] = None
     app_service: Optional[ApplicationService] = None

-    _current_state_ids: Optional[StateMap[str]] = None
-    _prev_state_ids: Optional[StateMap[str]] = None
-
     partial_state: bool = False

     @staticmethod
     def with_state(
+        storage: "Storage",
         state_group: Optional[int],
         state_group_before_event: Optional[int],
-        current_state_ids: Optional[StateMap[str]],
-        prev_state_ids: Optional[StateMap[str]],
+        state_delta_due_to_event: Optional[StateMap[str]],
         partial_state: bool,
         prev_group: Optional[int] = None,
         delta_ids: Optional[StateMap[str]] = None,
     ) -> "EventContext":
         return EventContext(
-            current_state_ids=current_state_ids,
-            prev_state_ids=prev_state_ids,
+            storage=storage,
             state_group=state_group,
             state_group_before_event=state_group_before_event,
+            state_delta_due_to_event=state_delta_due_to_event,
             prev_group=prev_group,
             delta_ids=delta_ids,
             partial_state=partial_state,
         )

     @staticmethod
-    def for_outlier() -> "EventContext":
+    def for_outlier(
+        storage: "Storage",
+    ) -> "EventContext":
         """Return an EventContext instance suitable for persisting an outlier event"""
-        return EventContext(
-            current_state_ids={},
-            prev_state_ids={},
-        )
+        return EventContext(storage=storage)

     async def serialize(self, event: EventBase, store: "DataStore") -> JsonDict:
         """Converts self to a type that can be serialized as JSON, and then

@@ -157,24 +133,14 @@ class EventContext:
             The serialized event.
         """

-        # We don't serialize the full state dicts, instead they get pulled out
-        # of the DB on the other side. However, the other side can't figure out
-        # the prev_state_ids, so if we're a state event we include the event
-        # id that we replaced in the state.
-        if event.is_state():
-            prev_state_ids = await self.get_prev_state_ids()
-            prev_state_id = prev_state_ids.get((event.type, event.state_key))
-        else:
-            prev_state_id = None
-
         return {
-            "prev_state_id": prev_state_id,
-            "event_type": event.type,
-            "event_state_key": event.get_state_key(),
             "state_group": self._state_group,
             "state_group_before_event": self.state_group_before_event,
             "rejected": self.rejected,
             "prev_group": self.prev_group,
+            "state_delta_due_to_event": _encode_state_dict(
+                self._state_delta_due_to_event
+            ),
             "delta_ids": _encode_state_dict(self.delta_ids),
             "app_service_id": self.app_service.id if self.app_service else None,
             "partial_state": self.partial_state,

@@ -192,16 +158,16 @@ class EventContext:
         Returns:
             The event context.
         """
-        context = _AsyncEventContextImpl(
+        context = EventContext(
             # We use the state_group and prev_state_id stuff to pull the
             # current_state_ids out of the DB and construct prev_state_ids.
             storage=storage,
-            prev_state_id=input["prev_state_id"],
-            event_type=input["event_type"],
-            event_state_key=input["event_state_key"],
             state_group=input["state_group"],
             state_group_before_event=input["state_group_before_event"],
             prev_group=input["prev_group"],
+            state_delta_due_to_event=_decode_state_dict(
+                input["state_delta_due_to_event"]
+            ),
             delta_ids=_decode_state_dict(input["delta_ids"]),
             rejected=input["rejected"],
             partial_state=input.get("partial_state", False),

@@ -231,7 +197,9 @@ class EventContext:

         return self._state_group

-    async def get_current_state_ids(self) -> Optional[StateMap[str]]:
+    async def get_current_state_ids(
+        self, state_filter: Optional["StateFilter"] = None
+    ) -> Optional[StateMap[str]]:
         """
         Gets the room state map, including this event - ie, the state in ``state_group``

@@ -239,6 +207,9 @@ class EventContext:
         not make it into the room state. This method will raise an exception if
         ``rejected`` is set.

+        Args:
+            state_filter: specifies the type of state event to fetch from DB, example: EventTypes.JoinRules
+
         Returns:
             Returns None if state_group is None, which happens when the associated
             event is an outlier.

@@ -249,15 +220,27 @@ class EventContext:
         if self.rejected:
             raise RuntimeError("Attempt to access state_ids of rejected event")

-        await self._ensure_fetched()
-        return self._current_state_ids
+        assert self._state_delta_due_to_event is not None

-    async def get_prev_state_ids(self) -> StateMap[str]:
+        prev_state_ids = await self.get_prev_state_ids(state_filter)
+
+        if self._state_delta_due_to_event:
+            prev_state_ids = dict(prev_state_ids)
+            prev_state_ids.update(self._state_delta_due_to_event)
+
+        return prev_state_ids
+
+    async def get_prev_state_ids(
+        self, state_filter: Optional["StateFilter"] = None
+    ) -> StateMap[str]:
         """
         Gets the room state map, excluding this event.

         For a non-state event, this will be the same as get_current_state_ids().

+        Args:
+            state_filter: specifies the type of state event to fetch from DB, example: EventTypes.JoinRules
+
         Returns:
             Returns {} if state_group is None, which happens when the associated
             event is an outlier.

@@ -265,94 +248,10 @@ class EventContext:
             Maps a (type, state_key) to the event ID of the state event matching
             this tuple.
         """
-        await self._ensure_fetched()
-        # There *should* be previous state IDs now.
-        assert self._prev_state_ids is not None
-        return self._prev_state_ids
-
-    def get_cached_current_state_ids(self) -> Optional[StateMap[str]]:
-        """Gets the current state IDs if we have them already cached.
-
-        It is an error to access this for a rejected event, since rejected state should
-        not make it into the room state. This method will raise an exception if
-        ``rejected`` is set.
-
-        Returns:
-            Returns None if we haven't cached the state or if state_group is None
-            (which happens when the associated event is an outlier).
-
-            Otherwise, returns the the current state IDs.
-        """
-        if self.rejected:
-            raise RuntimeError("Attempt to access state_ids of rejected event")
-
-        return self._current_state_ids
-
-    async def _ensure_fetched(self) -> None:
-        return None
-
-
-@attr.s(slots=True)
-class _AsyncEventContextImpl(EventContext):
-    """
-    An implementation of EventContext which fetches _current_state_ids and
-    _prev_state_ids from the database on demand.
-
-    Attributes:
-
-        _storage
-
-        _fetching_state_deferred: Resolves when *_state_ids have been calculated.
-            None if we haven't started calculating yet
-
-        _event_type: The type of the event the context is associated with.
-
-        _event_state_key: The state_key of the event the context is associated with.
-
-        _prev_state_id: If the event associated with the context is a state event,
-            then `_prev_state_id` is the event_id of the state that was replaced.
-    """
-
-    # This needs to have a default as we're inheriting
-    _storage: "Storage" = attr.ib(default=None)
-    _prev_state_id: Optional[str] = attr.ib(default=None)
-    _event_type: str = attr.ib(default=None)
-    _event_state_key: Optional[str] = attr.ib(default=None)
-    _fetching_state_deferred: Optional["Deferred[None]"] = attr.ib(default=None)
-
-    async def _ensure_fetched(self) -> None:
-        if not self._fetching_state_deferred:
-            self._fetching_state_deferred = run_in_background(self._fill_out_state)
-
-        await make_deferred_yieldable(self._fetching_state_deferred)
-
-    async def _fill_out_state(self) -> None:
-        """Called to populate the _current_state_ids and _prev_state_ids
-        attributes by loading from the database.
-        """
-        if self.state_group is None:
-            # No state group means the event is an outlier. Usually the state_ids dicts are also
-            # pre-set to empty dicts, but they get reset when the context is serialized, so set
-            # them to empty dicts again here.
-            self._current_state_ids = {}
-            self._prev_state_ids = {}
-            return
-
-        current_state_ids = await self._storage.state.get_state_ids_for_group(
-            self.state_group
-        )
-        # Set this separately so mypy knows current_state_ids is not None.
-        self._current_state_ids = current_state_ids
-        if self._event_state_key is not None:
-            self._prev_state_ids = dict(current_state_ids)
-
-            key = (self._event_type, self._event_state_key)
-            if self._prev_state_id:
-                self._prev_state_ids[key] = self._prev_state_id
-            else:
-                self._prev_state_ids.pop(key, None)
-        else:
-            self._prev_state_ids = current_state_ids
+        assert self.state_group_before_event is not None
+        return await self._storage.state.get_state_ids_for_group(
+            self.state_group_before_event, state_filter
+        )


 def _encode_state_dict(

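The reworked get_current_state_ids above derives the "state including the event" on demand instead of caching it. A toy illustration of that derivation (the keys and event IDs are invented):

    state_before_event = {
        ("m.room.member", "@alice:example.org"): "$alice_join",
        ("m.room.topic", ""): "$old_topic",
    }
    state_delta_due_to_event = {("m.room.topic", ""): "$new_topic"}

    # current state = state before the event, plus the delta the event caused
    current_state = dict(state_before_event)
    current_state.update(state_delta_due_to_event)
    assert current_state[("m.room.topic", "")] == "$new_topic"
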
@@ -27,11 +27,13 @@ from typing import (
     Union,
 )

+from synapse.api.errors import Codes
 from synapse.rest.media.v1._base import FileInfo
 from synapse.rest.media.v1.media_storage import ReadableFileWrapper
-from synapse.spam_checker_api import RegistrationBehaviour
+from synapse.spam_checker_api import Allow, Decision, RegistrationBehaviour
 from synapse.types import RoomAlias, UserProfile
 from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
+from synapse.util.metrics import Measure

 if TYPE_CHECKING:
     import synapse.events

@@ -39,7 +41,21 @@ if TYPE_CHECKING:

 logger = logging.getLogger(__name__)

 CHECK_EVENT_FOR_SPAM_CALLBACK = Callable[
+    ["synapse.events.EventBase"],
+    Awaitable[
+        Union[
+            Allow,
+            Codes,
+            # Deprecated
+            bool,
+            # Deprecated
+            str,
+        ]
+    ],
+]
+SHOULD_DROP_FEDERATED_EVENT_CALLBACK = Callable[
     ["synapse.events.EventBase"],
     Awaitable[Union[bool, str]],
 ]

@@ -162,8 +178,14 @@ def load_legacy_spam_checkers(hs: "synapse.server.HomeServer") -> None:


 class SpamChecker:
-    def __init__(self) -> None:
+    def __init__(self, hs: "synapse.server.HomeServer") -> None:
+        self.hs = hs
+        self.clock = hs.get_clock()
+
         self._check_event_for_spam_callbacks: List[CHECK_EVENT_FOR_SPAM_CALLBACK] = []
+        self._should_drop_federated_event_callbacks: List[
+            SHOULD_DROP_FEDERATED_EVENT_CALLBACK
+        ] = []
         self._user_may_join_room_callbacks: List[USER_MAY_JOIN_ROOM_CALLBACK] = []
         self._user_may_invite_callbacks: List[USER_MAY_INVITE_CALLBACK] = []
         self._user_may_send_3pid_invite_callbacks: List[

@@ -187,6 +209,9 @@ class SpamChecker:
     def register_callbacks(
         self,
         check_event_for_spam: Optional[CHECK_EVENT_FOR_SPAM_CALLBACK] = None,
+        should_drop_federated_event: Optional[
+            SHOULD_DROP_FEDERATED_EVENT_CALLBACK
+        ] = None,
         user_may_join_room: Optional[USER_MAY_JOIN_ROOM_CALLBACK] = None,
         user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
         user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,

@@ -205,6 +230,11 @@ class SpamChecker:
         if check_event_for_spam is not None:
             self._check_event_for_spam_callbacks.append(check_event_for_spam)

+        if should_drop_federated_event is not None:
+            self._should_drop_federated_event_callbacks.append(
+                should_drop_federated_event
+            )
+
         if user_may_join_room is not None:
             self._user_may_join_room_callbacks.append(user_may_join_room)

@@ -240,7 +270,7 @@ class SpamChecker:

     async def check_event_for_spam(
         self, event: "synapse.events.EventBase"
-    ) -> Union[bool, str]:
+    ) -> Union[Decision, str]:
         """Checks if a given event is considered "spammy" by this server.

         If the server considers an event spammy, then it will be rejected if

@@ -251,11 +281,57 @@ class SpamChecker:
             event: the event to be checked

         Returns:
-            True or a string if the event is spammy. If a string is returned it
-            will be used as the error message returned to the user.
+            - on `ALLOW`, the event is considered good (non-spammy) and should
+              be let through. Other spamcheck filters may still reject it.
+            - on `Codes`, the event is considered spammy and is rejected with a specific
+              error message/code.
+            - on `str`, the event is considered spammy and the string is used as error
+              message. This usage is generally discouraged as it doesn't support
+              internationalization.
         """
         for callback in self._check_event_for_spam_callbacks:
-            res: Union[bool, str] = await delay_cancellation(callback(event))
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                res: Union[Decision, str, bool] = await delay_cancellation(
+                    callback(event)
+                )
+            if res is False or res is Allow.ALLOW:
+                # This spam-checker accepts the event.
+                # Other spam-checkers may reject it, though.
+                continue
+            elif res is True:
+                # This spam-checker rejects the event with deprecated
+                # return value `True`
+                return Codes.FORBIDDEN
+            else:
+                # This spam-checker rejects the event either with a `str`
+                # or with a `Codes`. In either case, we stop here.
+                return res
+
+        # No spam-checker has rejected the event, let it pass.
+        return Allow.ALLOW
+
+    async def should_drop_federated_event(
+        self, event: "synapse.events.EventBase"
+    ) -> Union[bool, str]:
+        """Checks if a given federated event is considered "spammy" by this
+        server.
+
+        If the server considers an event spammy, it will be silently dropped,
+        and in doing so will split-brain our view of the room's DAG.
+
+        Args:
+            event: the event to be checked
+
+        Returns:
+            True if the event should be silently dropped
+        """
+        for callback in self._should_drop_federated_event_callbacks:
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                res: Union[bool, str] = await delay_cancellation(callback(event))
             if res:
                 return res

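A hedged sketch of a module using the new return values documented above (the module class and its config handling are invented for the example; only the registration hook and the Allow/Codes values come from this change):

    from synapse.api.errors import Codes
    from synapse.spam_checker_api import Allow


    class ExampleSpamChecker:
        def __init__(self, config, api):
            api.register_spam_checker_callbacks(
                check_event_for_spam=self.check_event_for_spam
            )

        async def check_event_for_spam(self, event):
            if "buy cheap" in event.content.get("body", ""):
                # Explicit, machine-readable rejection instead of the deprecated `True`.
                return Codes.FORBIDDEN
            # Defer to any other registered spam checkers.
            return Allow.ALLOW
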
@@ -276,9 +352,12 @@ class SpamChecker:
             Whether the user may join the room
         """
         for callback in self._user_may_join_room_callbacks:
-            may_join_room = await delay_cancellation(
-                callback(user_id, room_id, is_invited)
-            )
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                may_join_room = await delay_cancellation(
+                    callback(user_id, room_id, is_invited)
+                )
             if may_join_room is False:
                 return False

@@ -300,9 +379,12 @@ class SpamChecker:
             True if the user may send an invite, otherwise False
         """
         for callback in self._user_may_invite_callbacks:
-            may_invite = await delay_cancellation(
-                callback(inviter_userid, invitee_userid, room_id)
-            )
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                may_invite = await delay_cancellation(
+                    callback(inviter_userid, invitee_userid, room_id)
+                )
             if may_invite is False:
                 return False

@@ -328,9 +410,12 @@ class SpamChecker:
             True if the user may send the invite, otherwise False
         """
         for callback in self._user_may_send_3pid_invite_callbacks:
-            may_send_3pid_invite = await delay_cancellation(
-                callback(inviter_userid, medium, address, room_id)
-            )
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                may_send_3pid_invite = await delay_cancellation(
+                    callback(inviter_userid, medium, address, room_id)
+                )
             if may_send_3pid_invite is False:
                 return False

@@ -348,7 +433,10 @@ class SpamChecker:
             True if the user may create a room, otherwise False
         """
         for callback in self._user_may_create_room_callbacks:
-            may_create_room = await delay_cancellation(callback(userid))
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                may_create_room = await delay_cancellation(callback(userid))
             if may_create_room is False:
                 return False

@@ -369,9 +457,12 @@ class SpamChecker:
             True if the user may create a room alias, otherwise False
         """
         for callback in self._user_may_create_room_alias_callbacks:
-            may_create_room_alias = await delay_cancellation(
-                callback(userid, room_alias)
-            )
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                may_create_room_alias = await delay_cancellation(
+                    callback(userid, room_alias)
+                )
             if may_create_room_alias is False:
                 return False

@@ -390,7 +481,10 @@ class SpamChecker:
             True if the user may publish the room, otherwise False
         """
         for callback in self._user_may_publish_room_callbacks:
-            may_publish_room = await delay_cancellation(callback(userid, room_id))
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                may_publish_room = await delay_cancellation(callback(userid, room_id))
             if may_publish_room is False:
                 return False

@@ -412,9 +506,13 @@ class SpamChecker:
             True if the user is spammy.
         """
         for callback in self._check_username_for_spam_callbacks:
-            # Make a copy of the user profile object to ensure the spam checker cannot
-            # modify it.
-            if await delay_cancellation(callback(user_profile.copy())):
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                # Make a copy of the user profile object to ensure the spam checker cannot
+                # modify it.
+                res = await delay_cancellation(callback(user_profile.copy()))
+            if res:
                 return True

         return False

@@ -442,9 +540,12 @@ class SpamChecker:
         """

         for callback in self._check_registration_for_spam_callbacks:
-            behaviour = await delay_cancellation(
-                callback(email_threepid, username, request_info, auth_provider_id)
-            )
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
+                behaviour = await delay_cancellation(
+                    callback(email_threepid, username, request_info, auth_provider_id)
+                )
             assert isinstance(behaviour, RegistrationBehaviour)
             if behaviour != RegistrationBehaviour.ALLOW:
                 return behaviour

@@ -486,7 +587,10 @@ class SpamChecker:
         """

         for callback in self._check_media_file_for_spam_callbacks:
-            spam = await delay_cancellation(callback(file_wrapper, file_info))
+            with Measure(
+                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
+            ):
|
||||||
|
spam = await delay_cancellation(callback(file_wrapper, file_info))
|
||||||
if spam:
|
if spam:
|
||||||
return True
|
return True
|
||||||
|
|
||||||
|
|
|
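The hunks above wrap every spam-checker callback in a `Measure` block so the time spent in each module callback is exported as a metric. A minimal, self-contained sketch of the same pattern (the `measure` helper and `should_drop_federated_event` here are stand-ins for Synapse's utilities, not the real implementation):

import asyncio
import time
from contextlib import contextmanager
from typing import Awaitable, Callable, Dict, List, Union

# Stand-in for synapse.util.metrics.Measure: accumulates wall-clock time per label.
timings: Dict[str, float] = {}

@contextmanager
def measure(name: str):
    start = time.monotonic()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + (time.monotonic() - start)

Callback = Callable[[dict], Awaitable[Union[bool, str]]]

async def should_drop_federated_event(event: dict, callbacks: List[Callback]) -> Union[bool, str]:
    # Mirror of the loop above: time each callback, return the first truthy result.
    for callback in callbacks:
        with measure(f"{callback.__module__}.{callback.__qualname__}"):
            res = await callback(event)
        if res:
            return res
    return False

async def _demo() -> None:
    async def block_example_com(event: dict) -> bool:
        return event.get("origin") == "example.com"

    print(await should_drop_federated_event({"origin": "example.com"}, [block_example_com]))
    print(timings)

if __name__ == "__main__":
    asyncio.run(_demo())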
@@ -15,6 +15,7 @@
 import logging
 from typing import TYPE_CHECKING

+import synapse
 from synapse.api.constants import MAX_DEPTH, EventContentFields, EventTypes, Membership
 from synapse.api.errors import Codes, SynapseError
 from synapse.api.room_versions import EventFormatVersions, RoomVersion

@@ -98,9 +99,9 @@ class FederationBase:
             )
             return redacted_event

-        result = await self.spam_checker.check_event_for_spam(pdu)
+        spam_check = await self.spam_checker.check_event_for_spam(pdu)

-        if result:
+        if spam_check is not synapse.spam_checker_api.Allow.ALLOW:
             logger.warning("Event contains spam, soft-failing %s", pdu.event_id)
             # we redact (to save disk space) as well as soft-failing (to stop
             # using the event in prev_events).

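The new `check_event_for_spam` contract replaces an ambiguous boolean with an explicit allow sentinel or an error value. A rough sketch of how a call site reads under that contract (the `Allow` enum below is a stand-in for the sentinel referenced above, not the real synapse.spam_checker_api class):

from enum import Enum
from typing import Union

class Allow(Enum):
    # Hypothetical mirror of the explicit "not spam" sentinel.
    ALLOW = "allow"

def should_soft_fail(spam_check: Union[Allow, str]) -> bool:
    # Anything other than the explicit ALLOW sentinel (for example an error code
    # string) is treated as spam, matching the `is not Allow.ALLOW` test above.
    return spam_check is not Allow.ALLOW

print(should_soft_fail(Allow.ALLOW))   # False: event is kept
print(should_soft_fail("M_FORBIDDEN")) # True: event is soft-failed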
@@ -110,6 +110,7 @@ class FederationServer(FederationBase):

         self.handler = hs.get_federation_handler()
         self.storage = hs.get_storage()
+        self._spam_checker = hs.get_spam_checker()
         self._federation_event_handler = hs.get_federation_event_handler()
         self.state = hs.get_state_handler()
         self._event_auth_handler = hs.get_event_auth_handler()

@@ -1019,6 +1020,12 @@ class FederationServer(FederationBase):
         except SynapseError as e:
             raise FederationError("ERROR", e.code, e.msg, affected=pdu.event_id)

+        if await self._spam_checker.should_drop_federated_event(pdu):
+            logger.warning(
+                "Unstaged federated event contains spam, dropping %s", pdu.event_id
+            )
+            return
+
         # Add the event to our staging area
         await self.store.insert_received_event_to_staging(origin, pdu)

@@ -1032,6 +1039,41 @@ class FederationServer(FederationBase):
                 pdu.room_id, room_version, lock, origin, pdu
             )

+    async def _get_next_nonspam_staged_event_for_room(
+        self, room_id: str, room_version: RoomVersion
+    ) -> Optional[Tuple[str, EventBase]]:
+        """Fetch the first non-spam event from staging queue.
+
+        Args:
+            room_id: the room to fetch the first non-spam event in.
+            room_version: the version of the room.
+
+        Returns:
+            The first non-spam event in that room.
+        """
+
+        while True:
+            # We need to do this check outside the lock to avoid a race between
+            # a new event being inserted by another instance and it attempting
+            # to acquire the lock.
+            next = await self.store.get_next_staged_event_for_room(
+                room_id, room_version
+            )
+
+            if next is None:
+                return None
+
+            origin, event = next
+
+            if await self._spam_checker.should_drop_federated_event(event):
+                logger.warning(
+                    "Staged federated event contains spam, dropping %s",
+                    event.event_id,
+                )
+                continue
+
+            return next
+
     @wrap_as_background_process("_process_incoming_pdus_in_room_inner")
     async def _process_incoming_pdus_in_room_inner(
         self,

@@ -1109,12 +1151,10 @@ class FederationServer(FederationBase):
                     (self._clock.time_msec() - received_ts) / 1000
                 )

-            # We need to do this check outside the lock to avoid a race between
-            # a new event being inserted by another instance and it attempting
-            # to acquire the lock.
-            next = await self.store.get_next_staged_event_for_room(
+            next = await self._get_next_nonspam_staged_event_for_room(
                 room_id, room_version
             )

            if not next:
                break

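The new staging-queue helper is a "pop until acceptable" loop: it keeps discarding staged federation events that the spam checker rejects and hands back the first one that passes. As a generic sketch in plain Python (list-backed queue instead of Synapse's storage API, which is an assumption for illustration only):

import asyncio
from typing import Awaitable, Callable, List, Optional, Tuple

Event = dict
Staged = Tuple[str, Event]  # (origin, event), as in the staging area above

async def next_nonspam_staged_event(
    queue: List[Staged],
    should_drop: Callable[[Event], Awaitable[bool]],
) -> Optional[Staged]:
    # Mirrors the while-True/continue structure of the helper above.
    while queue:
        origin, event = queue.pop(0)
        if await should_drop(event):
            continue
        return origin, event
    return None

async def _demo() -> None:
    async def should_drop(event: Event) -> bool:
        return bool(event.get("spam"))

    staged = [("evil.example", {"spam": True}), ("ok.example", {"spam": False})]
    print(await next_nonspam_staged_event(staged, should_drop))

if __name__ == "__main__":
    asyncio.run(_demo())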
@@ -15,7 +15,17 @@
 import abc
 import logging
 from collections import OrderedDict
-from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Set, Tuple
+from typing import (
+    TYPE_CHECKING,
+    Collection,
+    Dict,
+    Hashable,
+    Iterable,
+    List,
+    Optional,
+    Set,
+    Tuple,
+)

 import attr
 from prometheus_client import Counter

@@ -409,7 +419,7 @@ class FederationSender(AbstractFederationSender):
                     )
                     return

-                destinations: Optional[Set[str]] = None
+                destinations: Optional[Collection[str]] = None
                 if not event.prev_event_ids():
                     # If there are no prev event IDs then the state is empty
                     # and so no remote servers in the room

@@ -444,7 +454,7 @@ class FederationSender(AbstractFederationSender):
                     )
                     return

-                destinations = {
+                sharded_destinations = {
                     d
                     for d in destinations
                     if self._federation_shard_config.should_handle(

@@ -456,12 +466,12 @@ class FederationSender(AbstractFederationSender):
                     # If we are sending the event on behalf of another server
                     # then it already has the event and there is no reason to
                     # send the event to it.
-                    destinations.discard(send_on_behalf_of)
+                    sharded_destinations.discard(send_on_behalf_of)

-                logger.debug("Sending %s to %r", event, destinations)
+                logger.debug("Sending %s to %r", event, sharded_destinations)

-                if destinations:
-                    await self._send_pdu(event, destinations)
+                if sharded_destinations:
+                    await self._send_pdu(event, sharded_destinations)

                     now = self.clock.time_msec()
                     ts = await self.store.get_received_ts(event.event_id)

@@ -21,7 +21,7 @@ from typing import TYPE_CHECKING, Any, Awaitable, Callable, Dict, Optional, Tuple

 from synapse.api.errors import Codes, FederationDeniedError, SynapseError
 from synapse.api.urls import FEDERATION_V1_PREFIX
-from synapse.http.server import HttpServer, ServletCallback
+from synapse.http.server import HttpServer, ServletCallback, is_method_cancellable
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.http.site import SynapseRequest
 from synapse.logging.context import run_in_background

@@ -169,14 +169,16 @@ def _parse_auth_header(header_bytes: bytes) -> Tuple[str, str, str, Optional[str]]:
     """
     try:
         header_str = header_bytes.decode("utf-8")
-        params = header_str.split(" ")[1].split(",")
+        params = re.split(" +", header_str)[1].split(",")
         param_dict: Dict[str, str] = {
-            k: v for k, v in [param.split("=", maxsplit=1) for param in params]
+            k.lower(): v for k, v in [param.split("=", maxsplit=1) for param in params]
         }

         def strip_quotes(value: str) -> str:
             if value.startswith('"'):
-                return value[1:-1]
+                return re.sub(
+                    "\\\\(.)", lambda matchobj: matchobj.group(1), value[1:-1]
+                )
             else:
                 return value

@@ -373,6 +375,17 @@ class BaseFederationServlet:
             if code is None:
                 continue

+            if is_method_cancellable(code):
+                # The wrapper added by `self._wrap` will inherit the cancellable flag,
+                # but the wrapper itself does not support cancellation yet.
+                # Once resolved, the cancellation tests in
+                # `tests/federation/transport/server/test__base.py` can be re-enabled.
+                raise Exception(
+                    f"{self.__class__.__name__}.on_{method} has been marked as "
+                    "cancellable, but federation servlets do not support cancellation "
+                    "yet."
+                )
+
             server.register_paths(
                 method,
                 (pattern,),

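The `_parse_auth_header` changes switch to `re.split(" +", ...)` so runs of spaces are tolerated, lower-case the parameter names so their case no longer matters, and unescape backslash-escaped characters inside quoted values. A standalone sketch of the same parsing steps (not the exact Synapse function; the comma split stays as simple as the one shown above):

import re
from typing import Dict

def parse_auth_header(header: str) -> Dict[str, str]:
    # Split on runs of spaces so "X-Matrix   Origin=..." still parses.
    params = re.split(" +", header)[1].split(",")
    # Lower-case keys so `Origin=` and `origin=` are treated the same.
    param_dict = {k.lower(): v for k, v in (p.split("=", maxsplit=1) for p in params)}

    def strip_quotes(value: str) -> str:
        if value.startswith('"'):
            # Drop the surrounding quotes and unescape \" and \\ inside the value.
            return re.sub(r"\\(.)", lambda m: m.group(1), value[1:-1])
        return value

    return {k: strip_quotes(v) for k, v in param_dict.items()}

print(parse_auth_header('X-Matrix  Origin="origin.example",key="ed25519:1",sig="ABC\\"DEF"'))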
@@ -934,7 +934,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
         # Before deleting the group lets kick everyone out of it
         users = await self.store.get_users_in_group(group_id, include_private=True)

-        async def _kick_user_from_group(user_id):
+        async def _kick_user_from_group(user_id: str) -> None:
             if self.hs.is_mine_id(user_id):
                 groups_local = self.hs.get_groups_local_handler()
                 assert isinstance(

@@ -23,7 +23,7 @@ from synapse.replication.http.account_data import (
     ReplicationUserAccountDataRestServlet,
 )
 from synapse.streams import EventSource
-from synapse.types import JsonDict, UserID
+from synapse.types import JsonDict, StreamKeyType, UserID

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -105,7 +105,7 @@ class AccountDataHandler:
             )

             self._notifier.on_new_event(
-                "account_data_key", max_stream_id, users=[user_id]
+                StreamKeyType.ACCOUNT_DATA, max_stream_id, users=[user_id]
             )

             await self._notify_modules(user_id, room_id, account_data_type, content)

@@ -141,7 +141,7 @@ class AccountDataHandler:
             )

             self._notifier.on_new_event(
-                "account_data_key", max_stream_id, users=[user_id]
+                StreamKeyType.ACCOUNT_DATA, max_stream_id, users=[user_id]
             )

             await self._notify_modules(user_id, None, account_data_type, content)

@@ -176,7 +176,7 @@ class AccountDataHandler:
             )

             self._notifier.on_new_event(
-                "account_data_key", max_stream_id, users=[user_id]
+                StreamKeyType.ACCOUNT_DATA, max_stream_id, users=[user_id]
             )
             return max_stream_id
         else:

@@ -201,7 +201,7 @@ class AccountDataHandler:
             )

             self._notifier.on_new_event(
-                "account_data_key", max_stream_id, users=[user_id]
+                StreamKeyType.ACCOUNT_DATA, max_stream_id, users=[user_id]
             )
             return max_stream_id
         else:

@@ -38,6 +38,7 @@ from synapse.types import (
     JsonDict,
     RoomAlias,
     RoomStreamToken,
+    StreamKeyType,
     UserID,
 )
 from synapse.util.async_helpers import Linearizer

@@ -213,8 +214,8 @@ class ApplicationServicesHandler:
         Args:
             stream_key: The stream the event came from.

-            `stream_key` can be "typing_key", "receipt_key", "presence_key",
-            "to_device_key" or "device_list_key". Any other value for `stream_key`
+            `stream_key` can be StreamKeyType.TYPING, StreamKeyType.RECEIPT, StreamKeyType.PRESENCE,
+            StreamKeyType.TO_DEVICE or StreamKeyType.DEVICE_LIST. Any other value for `stream_key`
             will cause this function to return early.

         Ephemeral events will only be pushed to appservices that have opted into

@@ -235,11 +236,11 @@ class ApplicationServicesHandler:
         # Only the following streams are currently supported.
         # FIXME: We should use constants for these values.
         if stream_key not in (
-            "typing_key",
-            "receipt_key",
-            "presence_key",
-            "to_device_key",
-            "device_list_key",
+            StreamKeyType.TYPING,
+            StreamKeyType.RECEIPT,
+            StreamKeyType.PRESENCE,
+            StreamKeyType.TO_DEVICE,
+            StreamKeyType.DEVICE_LIST,
         ):
             return

@@ -258,14 +259,14 @@ class ApplicationServicesHandler:

         # Ignore to-device messages if the feature flag is not enabled
         if (
-            stream_key == "to_device_key"
+            stream_key == StreamKeyType.TO_DEVICE
             and not self._msc2409_to_device_messages_enabled
         ):
             return

         # Ignore device lists if the feature flag is not enabled
         if (
-            stream_key == "device_list_key"
+            stream_key == StreamKeyType.DEVICE_LIST
             and not self._msc3202_transaction_extensions_enabled
         ):
             return

@@ -283,15 +284,15 @@ class ApplicationServicesHandler:
             if (
                 stream_key
                 in (
-                    "typing_key",
-                    "receipt_key",
-                    "presence_key",
-                    "to_device_key",
+                    StreamKeyType.TYPING,
+                    StreamKeyType.RECEIPT,
+                    StreamKeyType.PRESENCE,
+                    StreamKeyType.TO_DEVICE,
                 )
                 and service.supports_ephemeral
             )
             or (
-                stream_key == "device_list_key"
+                stream_key == StreamKeyType.DEVICE_LIST
                 and service.msc3202_transaction_extensions
             )
         ]

@@ -317,7 +318,7 @@ class ApplicationServicesHandler:
         logger.debug("Checking interested services for %s", stream_key)
         with Measure(self.clock, "notify_interested_services_ephemeral"):
             for service in services:
-                if stream_key == "typing_key":
+                if stream_key == StreamKeyType.TYPING:
                     # Note that we don't persist the token (via set_appservice_stream_type_pos)
                     # for typing_key due to performance reasons and due to their highly
                     # ephemeral nature.

@@ -333,7 +334,7 @@ class ApplicationServicesHandler:
                 async with self._ephemeral_events_linearizer.queue(
                     (service.id, stream_key)
                 ):
-                    if stream_key == "receipt_key":
+                    if stream_key == StreamKeyType.RECEIPT:
                         events = await self._handle_receipts(service, new_token)
                         self.scheduler.enqueue_for_appservice(service, ephemeral=events)

@@ -342,7 +343,7 @@ class ApplicationServicesHandler:
                             service, "read_receipt", new_token
                         )

-                    elif stream_key == "presence_key":
+                    elif stream_key == StreamKeyType.PRESENCE:
                         events = await self._handle_presence(service, users, new_token)
                         self.scheduler.enqueue_for_appservice(service, ephemeral=events)

@@ -351,7 +352,7 @@ class ApplicationServicesHandler:
                             service, "presence", new_token
                         )

-                    elif stream_key == "to_device_key":
+                    elif stream_key == StreamKeyType.TO_DEVICE:
                         # Retrieve a list of to-device message events, as well as the
                         # maximum stream token of the messages we were able to retrieve.
                         to_device_messages = await self._get_to_device_messages(

@@ -366,7 +367,7 @@ class ApplicationServicesHandler:
                             service, "to_device", new_token
                         )

-                    elif stream_key == "device_list_key":
+                    elif stream_key == StreamKeyType.DEVICE_LIST:
                         device_list_summary = await self._get_device_list_summary(
                             service, new_token
                         )

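These hunks replace bare stream-key strings such as "typing_key" with members of a `StreamKeyType` constant class. A sketch of why this helps, assuming a str-valued enum whose values keep the old string spellings seen above (this mirror is hypothetical, not the actual synapse.types definition):

from enum import Enum

class StreamKeyType(str, Enum):
    # Hypothetical mirror of the constants referenced above; values keep the old
    # string form so existing comparisons and serialisation keep working.
    ROOM = "room_key"
    TYPING = "typing_key"
    RECEIPT = "receipt_key"
    PRESENCE = "presence_key"
    TO_DEVICE = "to_device_key"
    DEVICE_LIST = "device_list_key"
    ACCOUNT_DATA = "account_data_key"

def is_ephemeral_stream(stream_key: StreamKeyType) -> bool:
    # A misspelled member fails at import time, whereas a misspelled string
    # literal would silently never match.
    return stream_key in (
        StreamKeyType.TYPING,
        StreamKeyType.RECEIPT,
        StreamKeyType.PRESENCE,
        StreamKeyType.TO_DEVICE,
    )

print(is_ephemeral_stream(StreamKeyType.TYPING))       # True
print(is_ephemeral_stream(StreamKeyType.DEVICE_LIST))  # False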
@@ -43,6 +43,7 @@ from synapse.metrics.background_process_metrics import (
 )
 from synapse.types import (
     JsonDict,
+    StreamKeyType,
     StreamToken,
     UserID,
     get_domain_from_id,

@@ -502,7 +503,7 @@ class DeviceHandler(DeviceWorkerHandler):
         # specify the user ID too since the user should always get their own device list
         # updates, even if they aren't in any rooms.
         self.notifier.on_new_event(
-            "device_list_key", position, users={user_id}, rooms=room_ids
+            StreamKeyType.DEVICE_LIST, position, users={user_id}, rooms=room_ids
         )

         # We may need to do some processing asynchronously for local user IDs.

@@ -523,7 +524,9 @@ class DeviceHandler(DeviceWorkerHandler):
             from_user_id, user_ids
         )

-        self.notifier.on_new_event("device_list_key", position, users=[from_user_id])
+        self.notifier.on_new_event(
+            StreamKeyType.DEVICE_LIST, position, users=[from_user_id]
+        )

     async def user_left_room(self, user: UserID, room_id: str) -> None:
         user_id = user.to_string()

@@ -26,7 +26,7 @@ from synapse.logging.opentracing import (
     set_tag,
 )
 from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
-from synapse.types import JsonDict, Requester, UserID, get_domain_from_id
+from synapse.types import JsonDict, Requester, StreamKeyType, UserID, get_domain_from_id
 from synapse.util import json_encoder
 from synapse.util.stringutils import random_string

@@ -151,7 +151,7 @@ class DeviceMessageHandler:
         # Notify listeners that there are new to-device messages to process,
         # handing them the latest stream id.
         self.notifier.on_new_event(
-            "to_device_key", last_stream_id, users=local_messages.keys()
+            StreamKeyType.TO_DEVICE, last_stream_id, users=local_messages.keys()
         )

     async def _check_for_unknown_devices(

@@ -285,7 +285,7 @@ class DeviceMessageHandler:
         # Notify listeners that there are new to-device messages to process,
         # handing them the latest stream id.
         self.notifier.on_new_event(
-            "to_device_key", last_stream_id, users=local_messages.keys()
+            StreamKeyType.TO_DEVICE, last_stream_id, users=local_messages.keys()
         )

         if self.federation_sender:

@@ -73,6 +73,9 @@ class DirectoryHandler:
             if wchar in room_alias.localpart:
                 raise SynapseError(400, "Invalid characters in room alias")

+        if ":" in room_alias.localpart:
+            raise SynapseError(400, "Invalid character in room alias localpart: ':'.")
+
         if not self.hs.is_mine(room_alias):
             raise SynapseError(400, "Room alias must be local")
             # TODO(erikj): Change this.

@@ -15,7 +15,7 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple
+from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple

 import attr
 from canonicaljson import encode_canonical_json

@@ -1105,22 +1105,19 @@ class E2eKeysHandler:
             # can request over federation
             raise NotFoundError("No %s key found for %s" % (key_type, user_id))

-        (
-            key,
-            key_id,
-            verify_key,
-        ) = await self._retrieve_cross_signing_keys_for_remote_user(user, key_type)
-
-        if key is None:
+        cross_signing_keys = await self._retrieve_cross_signing_keys_for_remote_user(
+            user, key_type
+        )
+        if cross_signing_keys is None:
             raise NotFoundError("No %s key found for %s" % (key_type, user_id))

-        return key, key_id, verify_key
+        return cross_signing_keys

     async def _retrieve_cross_signing_keys_for_remote_user(
         self,
         user: UserID,
         desired_key_type: str,
-    ) -> Tuple[Optional[dict], Optional[str], Optional[VerifyKey]]:
+    ) -> Optional[Tuple[Dict[str, Any], str, VerifyKey]]:
         """Queries cross-signing keys for a remote user and saves them to the database

         Only the key specified by `key_type` will be returned, while all retrieved keys

@@ -1146,12 +1143,10 @@ class E2eKeysHandler:
                 type(e),
                 e,
             )
-            return None, None, None
+            return None

         # Process each of the retrieved cross-signing keys
-        desired_key = None
-        desired_key_id = None
-        desired_verify_key = None
+        desired_key_data = None
         retrieved_device_ids = []
         for key_type in ["master", "self_signing"]:
             key_content = remote_result.get(key_type + "_key")

@@ -1196,9 +1191,7 @@ class E2eKeysHandler:

             # If this is the desired key type, save it and its ID/VerifyKey
             if key_type == desired_key_type:
-                desired_key = key_content
-                desired_verify_key = verify_key
-                desired_key_id = key_id
+                desired_key_data = key_content, key_id, verify_key

             # At the same time, store this key in the db for subsequent queries
             await self.store.set_e2e_cross_signing_key(

@@ -1212,7 +1205,7 @@ class E2eKeysHandler:
             user.to_string(), retrieved_device_ids
         )

-        return desired_key, desired_key_id, desired_verify_key
+        return desired_key_data


     def _check_cross_signing_key(

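The return-type change above swaps a 3-tuple of Optionals for an Optional 3-tuple: either the whole result is present or it is None, which a type checker can narrow with a single test. A short sketch of the difference (the `lookup` function and its values are illustrative stand-ins, not the real handler):

from typing import Any, Dict, Optional, Tuple

# Before: Tuple[Optional[dict], Optional[str], Optional[VerifyKey]] forced callers
# to check each element. After: Optional[Tuple[...]] needs one None check.
CrossSigningKeyResult = Optional[Tuple[Dict[str, Any], str, str]]

def lookup(found: bool) -> CrossSigningKeyResult:
    # Toy stand-in: the real function returns (key_content, key_id, verify_key) or None.
    if not found:
        return None
    return ({"keys": {"ed25519:abc": "base64+key"}}, "ed25519:abc", "verify-key")

result = lookup(found=True)
if result is None:
    raise LookupError("No key found")
key_content, key_id, verify_key = result  # all three are known to be present here
print(key_id)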
@@ -241,7 +241,15 @@ class EventAuthHandler:

         # If the join rule is not restricted, this doesn't apply.
         join_rules_event = await self._store.get_event(join_rules_event_id)
-        return join_rules_event.content.get("join_rule") == JoinRules.RESTRICTED
+        content_join_rule = join_rules_event.content.get("join_rule")
+        if content_join_rule == JoinRules.RESTRICTED:
+            return True
+
+        # also check for MSC3787 behaviour
+        if room_version.msc3787_knock_restricted_join_rule:
+            return content_join_rule == JoinRules.KNOCK_RESTRICTED
+
+        return False

     async def get_rooms_that_allow_join(
         self, state_ids: StateMap[str]

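The join-rule check now also accepts MSC3787's knock_restricted rule, but only on room versions that implement it. A compact sketch of the same decision table (the constants and `RoomVersion` dataclass below are stand-ins for Synapse's JoinRules and room-version objects):

from dataclasses import dataclass

RESTRICTED = "restricted"
KNOCK_RESTRICTED = "knock_restricted"  # MSC3787

@dataclass
class RoomVersion:
    msc3787_knock_restricted_join_rule: bool

def has_restricted_join_rules(join_rule: str, room_version: RoomVersion) -> bool:
    # Plain "restricted" always counts.
    if join_rule == RESTRICTED:
        return True
    # "knock_restricted" only counts on room versions implementing MSC3787.
    if room_version.msc3787_knock_restricted_join_rule:
        return join_rule == KNOCK_RESTRICTED
    return False

print(has_restricted_join_rules(KNOCK_RESTRICTED, RoomVersion(True)))   # True
print(has_restricted_join_rules(KNOCK_RESTRICTED, RoomVersion(False)))  # False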
@@ -54,6 +54,7 @@ from synapse.replication.http.federation import (
     ReplicationStoreRoomOnOutlierMembershipRestServlet,
 )
 from synapse.storage.databases.main.events_worker import EventRedactBehaviour
+from synapse.storage.state import StateFilter
 from synapse.types import JsonDict, StateMap, get_domain_from_id
 from synapse.util.async_helpers import Linearizer
 from synapse.util.retryutils import NotRetryingDestination

@@ -659,7 +660,7 @@ class FederationHandler:
         # in the invitee's sync stream. It is stripped out for all other local users.
         event.unsigned["knock_room_state"] = stripped_room_state["knock_state_events"]

-        context = EventContext.for_outlier()
+        context = EventContext.for_outlier(self.storage)
         stream_id = await self._federation_event_handler.persist_events_and_notify(
             event.room_id, [(event, context)]
         )

@@ -848,7 +849,7 @@ class FederationHandler:
             )
         )

-        context = EventContext.for_outlier()
+        context = EventContext.for_outlier(self.storage)
         await self._federation_event_handler.persist_events_and_notify(
             event.room_id, [(event, context)]
         )

@@ -877,7 +878,7 @@ class FederationHandler:

         await self.federation_client.send_leave(host_list, event)

-        context = EventContext.for_outlier()
+        context = EventContext.for_outlier(self.storage)
         stream_id = await self._federation_event_handler.persist_events_and_notify(
             event.room_id, [(event, context)]
         )

@@ -1259,7 +1260,9 @@ class FederationHandler:
             event.content["third_party_invite"]["signed"]["token"],
         )
         original_invite = None
-        prev_state_ids = await context.get_prev_state_ids()
+        prev_state_ids = await context.get_prev_state_ids(
+            StateFilter.from_types([(EventTypes.ThirdPartyInvite, None)])
+        )
         original_invite_id = prev_state_ids.get(key)
         if original_invite_id:
             original_invite = await self.store.get_event(

@@ -1308,7 +1311,9 @@ class FederationHandler:
         signed = event.content["third_party_invite"]["signed"]
         token = signed["token"]

-        prev_state_ids = await context.get_prev_state_ids()
+        prev_state_ids = await context.get_prev_state_ids(
+            StateFilter.from_types([(EventTypes.ThirdPartyInvite, None)])
+        )
         invite_event_id = prev_state_ids.get((EventTypes.ThirdPartyInvite, token))

         invite_event = None

@@ -30,6 +30,7 @@ from typing import (

 from prometheus_client import Counter

+from synapse import event_auth
 from synapse.api.constants import (
     EventContentFields,
     EventTypes,

@@ -63,6 +64,7 @@ from synapse.replication.http.federation import (
 )
 from synapse.state import StateResolutionStore
 from synapse.storage.databases.main.events_worker import EventRedactBehaviour
+from synapse.storage.state import StateFilter
 from synapse.types import (
     PersistedEventPosition,
     RoomStreamToken,

@@ -103,7 +105,7 @@ class FederationEventHandler:
         self._event_creation_handler = hs.get_event_creation_handler()
         self._event_auth_handler = hs.get_event_auth_handler()
         self._message_handler = hs.get_message_handler()
-        self._action_generator = hs.get_action_generator()
+        self._bulk_push_rule_evaluator = hs.get_bulk_push_rule_evaluator()
         self._state_resolution_handler = hs.get_state_resolution_handler()
         # avoid a circular dependency by deferring execution here
         self._get_room_member_handler = hs.get_room_member_handler

@@ -475,7 +477,23 @@ class FederationEventHandler:
         # and discover that we do not have it.
         event.internal_metadata.proactively_send = False

-        return await self.persist_events_and_notify(room_id, [(event, context)])
+        stream_id_after_persist = await self.persist_events_and_notify(
+            room_id, [(event, context)]
+        )
+
+        # If we're joining the room again, check if there is new marker
+        # state indicating that there is new history imported somewhere in
+        # the DAG. Multiple markers can exist in the current state with
+        # unique state_keys.
+        #
+        # Do this after the state from the remote join was persisted (via
+        # `persist_events_and_notify`). Otherwise we can run into a
+        # situation where the create event doesn't exist yet in the
+        # `current_state_events`
+        for e in state:
+            await self._handle_marker_event(origin, e)
+
+        return stream_id_after_persist

     async def update_state_for_partial_state_event(
         self, destination: str, event: EventBase

@@ -1228,6 +1246,14 @@ class FederationEventHandler:
             # Nothing to retrieve then (invalid marker)
             return

+        already_seen_insertion_event = await self._store.have_seen_event(
+            marker_event.room_id, insertion_event_id
+        )
+        if already_seen_insertion_event:
+            # No need to process a marker again if we have already seen the
+            # insertion event that it was pointing to
+            return
+
         logger.debug(
             "_handle_marker_event: backfilling insertion event %s", insertion_event_id
         )

@@ -1423,7 +1449,7 @@ class FederationEventHandler:
         # we're not bothering about room state, so flag the event as an outlier.
         event.internal_metadata.outlier = True

-        context = EventContext.for_outlier()
+        context = EventContext.for_outlier(self._storage)
         try:
             validate_event_for_room_version(room_version_obj, event)
             check_auth_rules_for_event(room_version_obj, event, auth)

@@ -1500,7 +1526,11 @@ class FederationEventHandler:
             return context

         # now check auth against what we think the auth events *should* be.
-        prev_state_ids = await context.get_prev_state_ids()
+        event_types = event_auth.auth_types_for_event(event.room_version, event)
+        prev_state_ids = await context.get_prev_state_ids(
+            StateFilter.from_types(event_types)
+        )

         auth_events_ids = self._event_auth_handler.compute_auth_events(
             event, prev_state_ids, for_verification=True
         )

@@ -1874,10 +1904,10 @@ class FederationEventHandler:
         )

         return EventContext.with_state(
+            storage=self._storage,
             state_group=state_group,
             state_group_before_event=context.state_group_before_event,
-            current_state_ids=current_state_ids,
-            prev_state_ids=prev_state_ids,
+            state_delta_due_to_event=state_updates,
             prev_group=prev_group,
             delta_ids=state_updates,
             partial_state=context.partial_state,

@@ -1913,7 +1943,7 @@ class FederationEventHandler:
                 min_depth,
             )
         else:
-            await self._action_generator.handle_push_actions_for_event(
+            await self._bulk_push_rule_evaluator.action_for_event_by_user(
                 event, context
             )

@@ -30,6 +30,7 @@ from synapse.types import (
     Requester,
     RoomStreamToken,
     StateMap,
+    StreamKeyType,
     StreamToken,
     UserID,
 )

@@ -143,7 +144,7 @@ class InitialSyncHandler:
                 to_key=int(now_token.receipt_key),
             )
             if self.hs.config.experimental.msc2285_enabled:
-                receipt = ReceiptEventSource.filter_out_private(receipt, user_id)
+                receipt = ReceiptEventSource.filter_out_private_receipts(receipt, user_id)

             tags_by_room = await self.store.get_tags_for_user(user_id)

@@ -220,8 +221,10 @@ class InitialSyncHandler:
                     self.storage, user_id, messages
                 )

-                start_token = now_token.copy_and_replace("room_key", token)
-                end_token = now_token.copy_and_replace("room_key", room_end_token)
+                start_token = now_token.copy_and_replace(StreamKeyType.ROOM, token)
+                end_token = now_token.copy_and_replace(
+                    StreamKeyType.ROOM, room_end_token
+                )
                 time_now = self.clock.time_msec()

                 d["messages"] = {

@@ -369,8 +372,8 @@ class InitialSyncHandler:
             self.storage, user_id, messages, is_peeking=is_peeking
         )

-        start_token = StreamToken.START.copy_and_replace("room_key", token)
-        end_token = StreamToken.START.copy_and_replace("room_key", stream_token)
+        start_token = StreamToken.START.copy_and_replace(StreamKeyType.ROOM, token)
+        end_token = StreamToken.START.copy_and_replace(StreamKeyType.ROOM, stream_token)

         time_now = self.clock.time_msec()

@@ -449,7 +452,9 @@ class InitialSyncHandler:
             if not receipts:
                 return []
             if self.hs.config.experimental.msc2285_enabled:
-                receipts = ReceiptEventSource.filter_out_private(receipts, user_id)
+                receipts = ReceiptEventSource.filter_out_private_receipts(
+                    receipts, user_id
+                )
             return receipts

         presence, receipts, (messages, token) = await make_deferred_yieldable(

@@ -472,7 +477,7 @@ class InitialSyncHandler:
             self.storage, user_id, messages, is_peeking=is_peeking
         )

-        start_token = now_token.copy_and_replace("room_key", token)
+        start_token = now_token.copy_and_replace(StreamKeyType.ROOM, token)
         end_token = now_token

         time_now = self.clock.time_msec()

@@ -23,6 +23,7 @@ from canonicaljson import encode_canonical_json

 from twisted.internet.interfaces import IDelayedCall

+import synapse
 from synapse import event_auth
 from synapse.api.constants import (
     EventContentFields,

@@ -44,7 +45,7 @@ from synapse.api.errors import (
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
 from synapse.api.urls import ConsentURIBuilder
 from synapse.event_auth import validate_event_for_room_version
-from synapse.events import EventBase
+from synapse.events import EventBase, relation_from_event
 from synapse.events.builder import EventBuilder
 from synapse.events.snapshot import EventContext
 from synapse.events.validator import EventValidator

@@ -428,7 +429,7 @@ class EventCreationHandler:
         # it just creates unnecessary state resolution without any performance improvements.
         self.limiter = Linearizer(max_count=1, name="room_event_creation_limit")

-        self.action_generator = hs.get_action_generator()
+        self._bulk_push_rule_evaluator = hs.get_bulk_push_rule_evaluator()

         self.spam_checker = hs.get_spam_checker()
         self.third_party_event_rules: "ThirdPartyEventRules" = (

@@ -636,7 +637,9 @@ class EventCreationHandler:
         # federation as well as those created locally. As of room v3, aliases events
         # can be created by users that are not in the room, therefore we have to
         # tolerate them in event_auth.check().
-        prev_state_ids = await context.get_prev_state_ids()
+        prev_state_ids = await context.get_prev_state_ids(
+            StateFilter.from_types([(EventTypes.Member, None)])
+        )
         prev_event_id = prev_state_ids.get((EventTypes.Member, event.sender))
         prev_event = (
             await self.store.get_event(prev_event_id, allow_none=True)

@@ -759,7 +762,13 @@ class EventCreationHandler:
         The previous version of the event is returned, if it is found in the
         event context. Otherwise, None is returned.
         """
-        prev_state_ids = await context.get_prev_state_ids()
+        if event.internal_metadata.is_outlier():
+            # This can happen due to out of band memberships
+            return None
+
+        prev_state_ids = await context.get_prev_state_ids(
+            StateFilter.from_types([(event.type, None)])
+        )
         prev_event_id = prev_state_ids.get((event.type, event.state_key))
         if not prev_event_id:
             return None

@@ -879,11 +888,11 @@ class EventCreationHandler:
                 event.sender,
             )

-        spam_error = await self.spam_checker.check_event_for_spam(event)
-        if spam_error:
-            if not isinstance(spam_error, str):
-                spam_error = "Spam is not permitted here"
-            raise SynapseError(403, spam_error, Codes.FORBIDDEN)
+        spam_check = await self.spam_checker.check_event_for_spam(event)
+        if spam_check is not synapse.spam_checker_api.Allow.ALLOW:
+            raise SynapseError(
+                403, "This message had been rejected as probable spam", spam_check
+            )

         ev = await self.handle_new_client_event(
             requester=requester,

@@ -1003,7 +1012,7 @@ class EventCreationHandler:
         # after it is created
         if builder.internal_metadata.outlier:
             event.internal_metadata.outlier = True
-            context = EventContext.for_outlier()
+            context = EventContext.for_outlier(self.storage)
         elif (
             event.type == EventTypes.MSC2716_INSERTION
             and state_event_ids

@@ -1058,20 +1067,11 @@ class EventCreationHandler:
             SynapseError if the event is invalid.
         """

-        relation = event.content.get("m.relates_to")
+        relation = relation_from_event(event)
         if not relation:
             return

-        relation_type = relation.get("rel_type")
-        if not relation_type:
-            return
-
-        # Ensure the parent is real.
-        relates_to = relation.get("event_id")
-        if not relates_to:
-            return
-
-        parent_event = await self.store.get_event(relates_to, allow_none=True)
+        parent_event = await self.store.get_event(relation.parent_id, allow_none=True)
         if parent_event:
             # And in the same room.
             if parent_event.room_id != event.room_id:

@@ -1080,28 +1080,31 @@ class EventCreationHandler:
         else:
             # There must be some reason that the client knows the event exists,
             # see if there are existing relations. If so, assume everything is fine.
-            if not await self.store.event_is_target_of_relation(relates_to):
+            if not await self.store.event_is_target_of_relation(relation.parent_id):
                 # Otherwise, the client can't know about the parent event!
                 raise SynapseError(400, "Can't send relation to unknown event")

         # If this event is an annotation then we check that that the sender
         # can't annotate the same way twice (e.g. stops users from liking an
         # event multiple times).
-        if relation_type == RelationTypes.ANNOTATION:
-            aggregation_key = relation["key"]
+        if relation.rel_type == RelationTypes.ANNOTATION:
+            aggregation_key = relation.aggregation_key
+
+            if aggregation_key is None:
+                raise SynapseError(400, "Missing aggregation key")

             if len(aggregation_key) > 500:
                 raise SynapseError(400, "Aggregation key is too long")

             already_exists = await self.store.has_user_annotated_event(
-                relates_to, event.type, aggregation_key, event.sender
+                relation.parent_id, event.type, aggregation_key, event.sender
             )
             if already_exists:
                 raise SynapseError(400, "Can't send same reaction twice")

         # Don't attempt to start a thread if the parent event is a relation.
-        elif relation_type == RelationTypes.THREAD:
-            if await self.store.event_includes_relation(relates_to):
+        elif relation.rel_type == RelationTypes.THREAD:
+            if await self.store.event_includes_relation(relation.parent_id):
                 raise SynapseError(
                     400, "Cannot start threads from an event with a relation"
                 )

@@ -1252,7 +1255,9 @@ class EventCreationHandler:
         # and `state_groups` because they have `prev_events` that aren't persisted yet
         # (historical messages persisted in reverse-chronological order).
         if not event.internal_metadata.is_historical() and not event.content.get(EventContentFields.MSC2716_HISTORICAL):
-            await self.action_generator.handle_push_actions_for_event(event, context)
+            await self._bulk_push_rule_evaluator.action_for_event_by_user(
+                event, context
+            )

         try:
             # If we're a worker we need to hit out to the master.

@@ -1559,7 +1564,11 @@ class EventCreationHandler:
                 "Redacting MSC2716 events is not supported in this room version",
             )

-        prev_state_ids = await context.get_prev_state_ids()
+        event_types = event_auth.auth_types_for_event(event.room_version, event)
+        prev_state_ids = await context.get_prev_state_ids(
+            StateFilter.from_types(event_types)
+        )

         auth_events_ids = self._event_auth_handler.compute_auth_events(
             event, prev_state_ids, for_verification=True
         )

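Several hunks above replace ad-hoc digging through `event.content["m.relates_to"]` with a single `relation_from_event` helper that returns a structured relation, or None when there is nothing well-formed to relate to. A rough sketch of such a helper, assuming a small dataclass rather than Synapse's actual class and operating on a plain content dict:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Relation:
    parent_id: str
    rel_type: str
    aggregation_key: Optional[str] = None

def relation_from_content(content: dict) -> Optional[Relation]:
    # Returns None unless the content carries a well-formed m.relates_to block,
    # so callers only need a single `if relation is None` check.
    relates_to = content.get("m.relates_to")
    if not isinstance(relates_to, dict):
        return None
    rel_type = relates_to.get("rel_type")
    parent_id = relates_to.get("event_id")
    if not rel_type or not parent_id:
        return None
    return Relation(parent_id=parent_id, rel_type=rel_type, aggregation_key=relates_to.get("key"))

print(relation_from_content({"m.relates_to": {"rel_type": "m.annotation", "event_id": "$e", "key": "+1"}}))
print(relation_from_content({"body": "no relation"}))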
@ -224,7 +224,7 @@ class OidcHandler:
|
||||||
self._sso_handler.render_error(request, "invalid_session", str(e))
|
self._sso_handler.render_error(request, "invalid_session", str(e))
|
||||||
return
|
return
|
||||||
except MacaroonInvalidSignatureException as e:
|
except MacaroonInvalidSignatureException as e:
|
||||||
logger.exception("Could not verify session for OIDC callback")
|
logger.warning("Could not verify session for OIDC callback: %s", e)
|
||||||
self._sso_handler.render_error(request, "mismatching_session", str(e))
|
self._sso_handler.render_error(request, "mismatching_session", str(e))
|
||||||
return
|
return
|
||||||
|
|
||||||
|
@ -827,7 +827,7 @@ class OidcProvider:
|
||||||
logger.debug("Exchanging OAuth2 code for a token")
|
logger.debug("Exchanging OAuth2 code for a token")
|
||||||
token = await self._exchange_code(code)
|
token = await self._exchange_code(code)
|
||||||
except OidcError as e:
|
except OidcError as e:
|
||||||
logger.exception("Could not exchange OAuth2 code")
|
logger.warning("Could not exchange OAuth2 code: %s", e)
|
||||||
self._sso_handler.render_error(request, e.error, e.error_description)
|
self._sso_handler.render_error(request, e.error, e.error_description)
|
||||||
return
|
return
|
||||||
|
|
||||||
|
|
@@ -27,7 +27,7 @@ from synapse.handlers.room import ShutdownRoomResponse
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.state import StateFilter
 from synapse.streams.config import PaginationConfig
-from synapse.types import JsonDict, Requester
+from synapse.types import JsonDict, Requester, StreamKeyType
 from synapse.util.async_helpers import ReadWriteLock
 from synapse.util.stringutils import random_string
 from synapse.visibility import filter_events_for_client

@@ -239,7 +239,7 @@ class PaginationHandler:
 # defined in the server's configuration, we can safely assume that's the
 # case and use it for this room.
 max_lifetime = (
-retention_policy["max_lifetime"] or self._retention_default_max_lifetime
+retention_policy.max_lifetime or self._retention_default_max_lifetime
 )

 # Cap the effective max_lifetime to be within the range allowed in the

@@ -448,7 +448,7 @@ class PaginationHandler:
 )
 # We expect `/messages` to use historic pagination tokens by default but
 # `/messages` should still works with live tokens when manually provided.
-assert from_token.room_key.topological
+assert from_token.room_key.topological is not None

 if pagin_config.limit is None:
 # This shouldn't happen as we've set a default limit before this

@@ -491,7 +491,7 @@ class PaginationHandler:

 if leave_token.topological < curr_topo:
 from_token = from_token.copy_and_replace(
-"room_key", leave_token
+StreamKeyType.ROOM, leave_token
 )

 await self.hs.get_federation_handler().maybe_backfill(

@@ -513,7 +513,7 @@ class PaginationHandler:
 event_filter=event_filter,
 )

-next_token = from_token.copy_and_replace("room_key", next_key)
+next_token = from_token.copy_and_replace(StreamKeyType.ROOM, next_key)

 if events:
 if event_filter:
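A pattern that recurs throughout this release: bare stream-key strings such as "room_key", "presence_key" and "receipt_key" are replaced by members of StreamKeyType. A minimal sketch of a string-valued enum with that property, assuming member names and values only as they appear in the hunks (the real StreamKeyType in synapse.types may be defined differently, e.g. as plain string constants):

from enum import Enum


class StreamKeyType(str, Enum):
    ROOM = "room_key"
    PRESENCE = "presence_key"
    TYPING = "typing_key"
    RECEIPT = "receipt_key"
    TO_DEVICE = "to_device_key"


# Because each member is also a str carrying the old literal value, existing
# comparisons and serialized tokens keep working, while call sites gain
# typo-safety and discoverability.
assert StreamKeyType.ROOM == "room_key"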
@@ -66,7 +66,7 @@ from synapse.replication.tcp.commands import ClearUserSyncsCommand
 from synapse.replication.tcp.streams import PresenceFederationStream, PresenceStream
 from synapse.storage.databases.main import DataStore
 from synapse.streams import EventSource
-from synapse.types import JsonDict, UserID, get_domain_from_id
+from synapse.types import JsonDict, StreamKeyType, UserID, get_domain_from_id
 from synapse.util.async_helpers import Linearizer
 from synapse.util.caches.descriptors import _CacheContext, cached
 from synapse.util.metrics import Measure

@@ -522,7 +522,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
 room_ids_to_states, users_to_states = parties

 self.notifier.on_new_event(
-"presence_key",
+StreamKeyType.PRESENCE,
 stream_id,
 rooms=room_ids_to_states.keys(),
 users=users_to_states.keys(),

@@ -1145,7 +1145,7 @@ class PresenceHandler(BasePresenceHandler):
 room_ids_to_states, users_to_states = parties

 self.notifier.on_new_event(
-"presence_key",
+StreamKeyType.PRESENCE,
 stream_id,
 rooms=room_ids_to_states.keys(),
 users=[UserID.from_string(u) for u in users_to_states],
@@ -17,7 +17,13 @@ from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple
 from synapse.api.constants import ReceiptTypes
 from synapse.appservice import ApplicationService
 from synapse.streams import EventSource
-from synapse.types import JsonDict, ReadReceipt, UserID, get_domain_from_id
+from synapse.types import (
+JsonDict,
+ReadReceipt,
+StreamKeyType,
+UserID,
+get_domain_from_id,
+)

 if TYPE_CHECKING:
 from synapse.server import HomeServer

@@ -129,7 +135,9 @@ class ReceiptsHandler:

 affected_room_ids = list({r.room_id for r in receipts})

-self.notifier.on_new_event("receipt_key", max_batch_id, rooms=affected_room_ids)
+self.notifier.on_new_event(
+StreamKeyType.RECEIPT, max_batch_id, rooms=affected_room_ids
+)
 # Note that the min here shouldn't be relied upon to be accurate.
 await self.hs.get_pusherpool().on_new_receipts(
 min_batch_id, max_batch_id, affected_room_ids

@@ -166,43 +174,69 @@ class ReceiptEventSource(EventSource[int, JsonDict]):
 self.config = hs.config

 @staticmethod
-def filter_out_private(events: List[JsonDict], user_id: str) -> List[JsonDict]:
+def filter_out_private_receipts(
+rooms: List[JsonDict], user_id: str
+) -> List[JsonDict]:
 """
-This method takes in what is returned by
+Filters a list of serialized receipts (as returned by /sync and /initialSync)
-get_linearized_receipts_for_rooms() and goes through read receipts
+and removes private read receipts of other users.
-filtering out m.read.private receipts if they were not sent by the
-current user.
+This operates on the return value of get_linearized_receipts_for_rooms(),
+which is wrapped in a cache. Care must be taken to ensure that the input
+values are not modified.
+
+Args:
+rooms: A list of mappings, each mapping has a `content` field, which
+is a map of event ID -> receipt type -> user ID -> receipt information.
+
+Returns:
+The same as rooms, but filtered.
 """

-visible_events = []
+result = []

-# filter out private receipts the user shouldn't see
+# Iterate through each room's receipt content.
-for event in events:
+for room in rooms:
-content = event.get("content", {})
+# The receipt content with other user's private read receipts removed.
-new_event = event.copy()
+content = {}
-new_event["content"] = {}

-for event_id, event_content in content.items():
+# Iterate over each event ID / receipts for that event.
-receipt_event = {}
+for event_id, orig_event_content in room.get("content", {}).items():
-for receipt_type, receipt_content in event_content.items():
+event_content = orig_event_content
-if receipt_type == ReceiptTypes.READ_PRIVATE:
+# If there are private read receipts, additional logic is necessary.
-user_rr = receipt_content.get(user_id, None)
+if ReceiptTypes.READ_PRIVATE in event_content:
-if user_rr:
+# Make a copy without private read receipts to avoid leaking
-receipt_event[ReceiptTypes.READ_PRIVATE] = {
+# other user's private read receipts..
-user_id: user_rr.copy()
+event_content = {
-}
+receipt_type: receipt_value
-else:
+for receipt_type, receipt_value in event_content.items()
-receipt_event[receipt_type] = receipt_content.copy()
+if receipt_type != ReceiptTypes.READ_PRIVATE
+}

-# Only include the receipt event if it is non-empty.
+# Copy the current user's private read receipt from the
-if receipt_event:
+# original content, if it exists.
-new_event["content"][event_id] = receipt_event
+user_private_read_receipt = orig_event_content[
+ReceiptTypes.READ_PRIVATE
+].get(user_id, None)
+if user_private_read_receipt:
+event_content[ReceiptTypes.READ_PRIVATE] = {
+user_id: user_private_read_receipt
+}

-# Append new_event to visible_events unless empty
+# Include the event if there is at least one non-private read
-if len(new_event["content"].keys()) > 0:
+# receipt or the current user has a private read receipt.
-visible_events.append(new_event)
+if event_content:
+content[event_id] = event_content

-return visible_events
+# Include the event if there is at least one non-private read receipt
+# or the current user has a private read receipt.
+if content:
+# Build a new event to avoid mutating the cache.
+new_room = {k: v for k, v in room.items() if k != "content"}
+new_room["content"] = content
+result.append(new_room)
+
+return result

 async def get_new_events(
 self,

@@ -224,7 +258,9 @@ class ReceiptEventSource(EventSource[int, JsonDict]):
 )

 if self.config.experimental.msc2285_enabled:
-events = ReceiptEventSource.filter_out_private(events, user.to_string())
+events = ReceiptEventSource.filter_out_private_receipts(
+events, user.to_string()
+)

 return events, to_key
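The new docstring above describes the nested receipt structure (event ID -> receipt type -> user ID -> receipt information) and the need to avoid mutating cached input. A self-contained, simplified illustration of the same filtering behaviour, where filter_room_receipts and the READ_PRIVATE identifier are assumptions for the example rather than the real method:

from typing import Any, Dict

READ = "m.read"
READ_PRIVATE = "org.matrix.msc2285.read.private"  # assumed identifier

room = {
    "room_id": "!room:example.com",
    "type": "m.receipt",
    "content": {
        "$event1": {
            READ: {"@alice:example.com": {"ts": 1}},
            READ_PRIVATE: {
                "@alice:example.com": {"ts": 2},
                "@bob:example.com": {"ts": 3},
            },
        },
    },
}


def filter_room_receipts(room: Dict[str, Any], user_id: str) -> Dict[str, Any]:
    # Build a new mapping instead of mutating the (possibly cached) input.
    new_room = {k: v for k, v in room.items() if k != "content"}
    new_content: Dict[str, Any] = {}
    for event_id, receipts in room.get("content", {}).items():
        # Drop all private read receipts, then re-add the caller's own one.
        kept = {t: v for t, v in receipts.items() if t != READ_PRIVATE}
        own_private = receipts.get(READ_PRIVATE, {}).get(user_id)
        if own_private is not None:
            kept[READ_PRIVATE] = {user_id: own_private}
        if kept:
            new_content[event_id] = kept
    new_room["content"] = new_content
    return new_room


filtered = filter_room_receipts(room, "@alice:example.com")
# Bob's private receipt is gone; Alice's own private receipt survives.
assert "@bob:example.com" not in filtered["content"]["$event1"][READ_PRIVATE]
assert "@alice:example.com" in filtered["content"]["$event1"][READ_PRIVATE]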
@@ -11,7 +11,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import collections.abc
 import logging
 from typing import (
 TYPE_CHECKING,

@@ -28,7 +27,7 @@ import attr

 from synapse.api.constants import RelationTypes
 from synapse.api.errors import SynapseError
-from synapse.events import EventBase
+from synapse.events import EventBase, relation_from_event
 from synapse.storage.databases.main.relations import _RelatedEvent
 from synapse.types import JsonDict, Requester, StreamToken, UserID
 from synapse.visibility import filter_events_for_client

@@ -373,20 +372,21 @@ class RelationsHandler:
 if event.is_state():
 continue

-relates_to = event.content.get("m.relates_to")
+relates_to = relation_from_event(event)
-relation_type = None
+if relates_to:
-if isinstance(relates_to, collections.abc.Mapping):
-relation_type = relates_to.get("rel_type")
 # An event which is a replacement (ie edit) or annotation (ie,
 # reaction) may not have any other event related to it.
-if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE):
+if relates_to.rel_type in (
+RelationTypes.ANNOTATION,
+RelationTypes.REPLACE,
+):
 continue

+# Track the event's relation information for later.
+relations_by_id[event.event_id] = relates_to.rel_type

 # The event should get bundled aggregations.
 events_by_id[event.event_id] = event
-# Track the event's relation information for later.
-if isinstance(relation_type, str):
-relations_by_id[event.event_id] = relation_type

 # event ID -> bundled aggregation in non-serialized form.
 results: Dict[str, BundledAggregations] = {}
|
@ -33,6 +33,7 @@ from typing import (
|
||||||
import attr
|
import attr
|
||||||
from typing_extensions import TypedDict
|
from typing_extensions import TypedDict
|
||||||
|
|
||||||
|
import synapse.events.snapshot
|
||||||
from synapse.api.constants import (
|
from synapse.api.constants import (
|
||||||
EventContentFields,
|
EventContentFields,
|
||||||
EventTypes,
|
EventTypes,
|
||||||
|
@ -72,12 +73,12 @@ from synapse.types import (
|
||||||
RoomID,
|
RoomID,
|
||||||
RoomStreamToken,
|
RoomStreamToken,
|
||||||
StateMap,
|
StateMap,
|
||||||
|
StreamKeyType,
|
||||||
StreamToken,
|
StreamToken,
|
||||||
UserID,
|
UserID,
|
||||||
create_requester,
|
create_requester,
|
||||||
)
|
)
|
||||||
from synapse.util import stringutils
|
from synapse.util import stringutils
|
||||||
from synapse.util.async_helpers import Linearizer
|
|
||||||
from synapse.util.caches.response_cache import ResponseCache
|
from synapse.util.caches.response_cache import ResponseCache
|
||||||
from synapse.util.stringutils import parse_and_validate_server_name
|
from synapse.util.stringutils import parse_and_validate_server_name
|
||||||
from synapse.visibility import filter_events_for_client
|
from synapse.visibility import filter_events_for_client
|
||||||
|
@ -149,10 +150,11 @@ class RoomCreationHandler:
|
||||||
)
|
)
|
||||||
preset_config["encrypted"] = encrypted
|
preset_config["encrypted"] = encrypted
|
||||||
|
|
||||||
self._replication = hs.get_replication_data_handler()
|
self._default_power_level_content_override = (
|
||||||
|
self.config.room.default_power_level_content_override
|
||||||
|
)
|
||||||
|
|
||||||
# linearizer to stop two upgrades happening at once
|
self._replication = hs.get_replication_data_handler()
|
||||||
self._upgrade_linearizer = Linearizer("room_upgrade_linearizer")
|
|
||||||
|
|
||||||
# If a user tries to update the same room multiple times in quick
|
# If a user tries to update the same room multiple times in quick
|
||||||
# succession, only process the first attempt and return its result to
|
# succession, only process the first attempt and return its result to
|
||||||
|
@ -196,50 +198,17 @@ class RoomCreationHandler:
|
||||||
400, "An upgrade for this room is currently in progress"
|
400, "An upgrade for this room is currently in progress"
|
||||||
)
|
)
|
||||||
|
|
||||||
# Upgrade the room
|
# Check whether the room exists and 404 if it doesn't.
|
||||||
#
|
# We could go straight for the auth check, but that will raise a 403 instead.
|
||||||
# If this user has sent multiple upgrade requests for the same room
|
old_room = await self.store.get_room(old_room_id)
|
||||||
# and one of them is not complete yet, cache the response and
|
if old_room is None:
|
||||||
# return it to all subsequent requests
|
|
||||||
ret = await self._upgrade_response_cache.wrap(
|
|
||||||
(old_room_id, user_id),
|
|
||||||
self._upgrade_room,
|
|
||||||
requester,
|
|
||||||
old_room_id,
|
|
||||||
new_version, # args for _upgrade_room
|
|
||||||
)
|
|
||||||
|
|
||||||
return ret
|
|
||||||
|
|
||||||
async def _upgrade_room(
|
|
||||||
self, requester: Requester, old_room_id: str, new_version: RoomVersion
|
|
||||||
) -> str:
|
|
||||||
"""
|
|
||||||
Args:
|
|
||||||
requester: the user requesting the upgrade
|
|
||||||
old_room_id: the id of the room to be replaced
|
|
||||||
new_versions: the version to upgrade the room to
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ShadowBanError if the requester is shadow-banned.
|
|
||||||
"""
|
|
||||||
user_id = requester.user.to_string()
|
|
||||||
assert self.hs.is_mine_id(user_id), "User must be our own: %s" % (user_id,)
|
|
||||||
|
|
||||||
# start by allocating a new room id
|
|
||||||
r = await self.store.get_room(old_room_id)
|
|
||||||
if r is None:
|
|
||||||
raise NotFoundError("Unknown room id %s" % (old_room_id,))
|
raise NotFoundError("Unknown room id %s" % (old_room_id,))
|
||||||
new_room_id = await self._generate_room_id(
|
|
||||||
creator_id=user_id,
|
|
||||||
is_public=r["is_public"],
|
|
||||||
room_version=new_version,
|
|
||||||
)
|
|
||||||
|
|
||||||
logger.info("Creating new room %s to replace %s", new_room_id, old_room_id)
|
new_room_id = self._generate_room_id()
|
||||||
|
|
||||||
# we create and auth the tombstone event before properly creating the new
|
# Check whether the user has the power level to carry out the upgrade.
|
||||||
# room, to check our user has perms in the old room.
|
# `check_auth_rules_from_context` will check that they are in the room and have
|
||||||
|
# the required power level to send the tombstone event.
|
||||||
(
|
(
|
||||||
tombstone_event,
|
tombstone_event,
|
||||||
tombstone_context,
|
tombstone_context,
|
||||||
|
@ -262,6 +231,63 @@ class RoomCreationHandler:
|
||||||
old_room_version, tombstone_event, tombstone_context
|
old_room_version, tombstone_event, tombstone_context
|
||||||
)
|
)
|
||||||
|
|
||||||
|
# Upgrade the room
|
||||||
|
#
|
||||||
|
# If this user has sent multiple upgrade requests for the same room
|
||||||
|
# and one of them is not complete yet, cache the response and
|
||||||
|
# return it to all subsequent requests
|
||||||
|
ret = await self._upgrade_response_cache.wrap(
|
||||||
|
(old_room_id, user_id),
|
||||||
|
self._upgrade_room,
|
||||||
|
requester,
|
||||||
|
old_room_id,
|
||||||
|
old_room, # args for _upgrade_room
|
||||||
|
new_room_id,
|
||||||
|
new_version,
|
||||||
|
tombstone_event,
|
||||||
|
tombstone_context,
|
||||||
|
)
|
||||||
|
|
||||||
|
return ret
|
||||||
|
|
||||||
|
async def _upgrade_room(
|
||||||
|
self,
|
||||||
|
requester: Requester,
|
||||||
|
old_room_id: str,
|
||||||
|
old_room: Dict[str, Any],
|
||||||
|
new_room_id: str,
|
||||||
|
new_version: RoomVersion,
|
||||||
|
tombstone_event: EventBase,
|
||||||
|
tombstone_context: synapse.events.snapshot.EventContext,
|
||||||
|
) -> str:
|
||||||
|
"""
|
||||||
|
Args:
|
||||||
|
requester: the user requesting the upgrade
|
||||||
|
old_room_id: the id of the room to be replaced
|
||||||
|
old_room: a dict containing room information for the room to be replaced,
|
||||||
|
as returned by `RoomWorkerStore.get_room`.
|
||||||
|
new_room_id: the id of the replacement room
|
||||||
|
new_version: the version to upgrade the room to
|
||||||
|
tombstone_event: the tombstone event to send to the old room
|
||||||
|
tombstone_context: the context for the tombstone event
|
||||||
|
|
||||||
|
Raises:
|
||||||
|
ShadowBanError if the requester is shadow-banned.
|
||||||
|
"""
|
||||||
|
user_id = requester.user.to_string()
|
||||||
|
assert self.hs.is_mine_id(user_id), "User must be our own: %s" % (user_id,)
|
||||||
|
|
||||||
|
logger.info("Creating new room %s to replace %s", new_room_id, old_room_id)
|
||||||
|
|
||||||
|
# create the new room. may raise a `StoreError` in the exceedingly unlikely
|
||||||
|
# event of a room ID collision.
|
||||||
|
await self.store.store_room(
|
||||||
|
room_id=new_room_id,
|
||||||
|
room_creator_user_id=user_id,
|
||||||
|
is_public=old_room["is_public"],
|
||||||
|
room_version=new_version,
|
||||||
|
)
|
||||||
|
|
||||||
await self.clone_existing_room(
|
await self.clone_existing_room(
|
||||||
requester,
|
requester,
|
||||||
old_room_id=old_room_id,
|
old_room_id=old_room_id,
|
||||||
|
@ -277,7 +303,10 @@ class RoomCreationHandler:
|
||||||
context=tombstone_context,
|
context=tombstone_context,
|
||||||
)
|
)
|
||||||
|
|
||||||
old_room_state = await tombstone_context.get_current_state_ids()
|
state_filter = StateFilter.from_types(
|
||||||
|
[(EventTypes.CanonicalAlias, ""), (EventTypes.PowerLevels, "")]
|
||||||
|
)
|
||||||
|
old_room_state = await tombstone_context.get_current_state_ids(state_filter)
|
||||||
|
|
||||||
# We know the tombstone event isn't an outlier so it has current state.
|
# We know the tombstone event isn't an outlier so it has current state.
|
||||||
assert old_room_state is not None
|
assert old_room_state is not None
|
||||||
|
@ -401,7 +430,7 @@ class RoomCreationHandler:
|
||||||
requester: the user requesting the upgrade
|
requester: the user requesting the upgrade
|
||||||
old_room_id : the id of the room to be replaced
|
old_room_id : the id of the room to be replaced
|
||||||
new_room_id: the id to give the new room (should already have been
|
new_room_id: the id to give the new room (should already have been
|
||||||
created with _gemerate_room_id())
|
created with _generate_room_id())
|
||||||
new_room_version: the new room version to use
|
new_room_version: the new room version to use
|
||||||
tombstone_event_id: the ID of the tombstone event in the old room.
|
tombstone_event_id: the ID of the tombstone event in the old room.
|
||||||
"""
|
"""
|
||||||
|
@ -443,14 +472,14 @@ class RoomCreationHandler:
|
||||||
(EventTypes.PowerLevels, ""),
|
(EventTypes.PowerLevels, ""),
|
||||||
]
|
]
|
||||||
|
|
||||||
# If the old room was a space, copy over the room type and the rooms in
|
# Copy the room type as per MSC3818.
|
||||||
# the space.
|
room_type = old_room_create_event.content.get(EventContentFields.ROOM_TYPE)
|
||||||
if (
|
if room_type is not None:
|
||||||
old_room_create_event.content.get(EventContentFields.ROOM_TYPE)
|
creation_content[EventContentFields.ROOM_TYPE] = room_type
|
||||||
== RoomTypes.SPACE
|
|
||||||
):
|
# If the old room was a space, copy over the rooms in the space.
|
||||||
creation_content[EventContentFields.ROOM_TYPE] = RoomTypes.SPACE
|
if room_type == RoomTypes.SPACE:
|
||||||
types_to_copy.append((EventTypes.SpaceChild, None))
|
types_to_copy.append((EventTypes.SpaceChild, None))
|
||||||
|
|
||||||
old_room_state_ids = await self.store.get_filtered_current_state_ids(
|
old_room_state_ids = await self.store.get_filtered_current_state_ids(
|
||||||
old_room_id, StateFilter.from_types(types_to_copy)
|
old_room_id, StateFilter.from_types(types_to_copy)
|
||||||
@@ -725,6 +754,21 @@ class RoomCreationHandler:
 if wchar in config["room_alias_name"]:
 raise SynapseError(400, "Invalid characters in room alias")

+if ":" in config["room_alias_name"]:
+# Prevent someone from trying to pass in a full alias here.
+# Note that it's permissible for a room alias to have multiple
+# hash symbols at the start (notably bridged over from IRC, too),
+# but the first colon in the alias is defined to separate the local
+# part from the server name.
+# (remember server names can contain port numbers, also separated
+# by a colon. But under no circumstances should the local part be
+# allowed to contain a colon!)
+raise SynapseError(
+400,
+"':' is not permitted in the room alias name. "
+"Please note this expects a local part — 'wombat', not '#wombat:example.com'.",
+)

 room_alias = RoomAlias(config["room_alias_name"], self.hs.hostname)
 mapping = await self.store.get_association_from_room_alias(room_alias)
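The comment in the new check explains the rule: room_alias_name carries only the local part, because the first colon of a full alias separates the local part from the server name (which may itself contain a port after another colon). A hedged, standalone sketch of the same validation, with a hypothetical helper name:

def validate_room_alias_localpart(localpart: str) -> None:
    # Reject anything that looks like a full alias such as "wombat:example.com";
    # leading '#' characters are fine, a colon is not.
    if ":" in localpart:
        raise ValueError(
            "expected a local part such as 'wombat', got %r" % (localpart,)
        )


validate_room_alias_localpart("wombat")  # accepted
try:
    validate_room_alias_localpart("wombat:example.com")
except ValueError:
    pass  # rejected, as the server-side check above does with a 400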
@@ -790,8 +834,10 @@ class RoomCreationHandler:
 except StoreError:
 raise SynapseError(409, "Room ID already in use", errcode="M_CONFLICT")
 else:
-room_id = await self._generate_room_id(
+room_id = await self._generate_and_create_room_id(
-creator_id=user_id, is_public=is_public, room_version=room_version,
+creator_id=user_id,
+is_public=is_public,
+room_version=room_version,
 )

 # Check whether this visibility value is blocked by a third party module

@@ -1052,9 +1098,19 @@ class RoomCreationHandler:
 for invitee in invite_list:
 power_level_content["users"][invitee] = 100

-# Power levels overrides are defined per chat preset
+# If the user supplied a preset name e.g. "private_chat",
+# we apply that preset
 power_level_content.update(config["power_level_content_override"])

+# If the server config contains default_power_level_content_override,
+# and that contains information for this room preset, apply it.
+if self._default_power_level_content_override:
+override = self._default_power_level_content_override.get(preset_config)
+if override is not None:
+power_level_content.update(override)

+# Finally, if the user supplied specific permissions for this room,
+# apply those.
 if power_level_content_override:
 power_level_content.update(power_level_content_override)
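The hunk above layers three sources of power levels in a fixed order, with later layers winning: the room preset's defaults, the server-wide default_power_level_content_override for that preset, and finally the per-request override from the create-room body. A small sketch of that layering with illustrative values:

preset_defaults = {"events_default": 0, "state_default": 50, "invite": 0}
server_default_override = {"invite": 50}   # from homeserver config
request_override = {"events_default": 25}  # from the /createRoom body

power_level_content = dict(preset_defaults)
power_level_content.update(server_default_override)
power_level_content.update(request_override)

assert power_level_content == {
    "events_default": 25,
    "state_default": 50,
    "invite": 50,
}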
@@ -1100,7 +1156,26 @@ class RoomCreationHandler:

 return last_sent_stream_id

-async def _generate_room_id(
+def _generate_room_id(self) -> str:
+"""Generates a random room ID.
+
+Room IDs look like "!opaque_id:domain" and are case-sensitive as per the spec
+at https://spec.matrix.org/v1.2/appendices/#room-ids-and-event-ids.
+
+Does not check for collisions with existing rooms or prevent future calls from
+returning the same room ID. To ensure the uniqueness of a new room ID, use
+`_generate_and_create_room_id` instead.
+
+Synapse's room IDs are 18 [a-zA-Z] characters long, which comes out to around
+102 bits.
+
+Returns:
+A random room ID of the form "!opaque_id:domain".
+"""
+random_string = stringutils.random_string(18)
+return RoomID(random_string, self.hs.hostname).to_string()
+
+async def _generate_and_create_room_id(
 self,
 creator_id: str,
 is_public: bool,

@@ -1111,8 +1186,7 @@ class RoomCreationHandler:
 attempts = 0
 while attempts < 5:
 try:
-random_string = stringutils.random_string(18)
+gen_room_id = self._generate_room_id()
-gen_room_id = RoomID(random_string, self.hs.hostname).to_string()
 await self.store.store_room(
 room_id=gen_room_id,
 room_creator_user_id=creator_id,
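The new docstring pins down the room ID format and its entropy: 18 characters drawn from [a-zA-Z] gives log2(52^18), roughly 102.6 bits. A hedged sketch of that generation, using the standard library in place of Synapse's stringutils.random_string helper:

import math
import secrets
import string

ALPHABET = string.ascii_letters  # a-z plus A-Z, 52 characters

# secrets.choice gives a cryptographically strong selection per character.
opaque_id = "".join(secrets.choice(ALPHABET) for _ in range(18))
room_id = "!%s:%s" % (opaque_id, "example.com")

assert len(ALPHABET) == 52
assert round(18 * math.log2(52), 1) == 102.6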
|
@ -1249,10 +1323,10 @@ class RoomContextHandler:
|
||||||
events_after=events_after,
|
events_after=events_after,
|
||||||
state=await filter_evts(state_events),
|
state=await filter_evts(state_events),
|
||||||
aggregations=aggregations,
|
aggregations=aggregations,
|
||||||
start=await token.copy_and_replace("room_key", results.start).to_string(
|
start=await token.copy_and_replace(
|
||||||
self.store
|
StreamKeyType.ROOM, results.start
|
||||||
),
|
).to_string(self.store),
|
||||||
end=await token.copy_and_replace("room_key", results.end).to_string(
|
end=await token.copy_and_replace(StreamKeyType.ROOM, results.end).to_string(
|
||||||
self.store
|
self.store
|
||||||
),
|
),
|
||||||
)
|
)
|
||||||
|
|
|
@ -54,6 +54,7 @@ class RoomBatchHandler:
|
||||||
# We want to use the successor event depth so they appear after `prev_event` because
|
# We want to use the successor event depth so they appear after `prev_event` because
|
||||||
# it has a larger `depth` but before the successor event because the `stream_ordering`
|
# it has a larger `depth` but before the successor event because the `stream_ordering`
|
||||||
# is negative before the successor event.
|
# is negative before the successor event.
|
||||||
|
assert most_recent_prev_event_id is not None
|
||||||
successor_event_ids = await self.store.get_successor_events(
|
successor_event_ids = await self.store.get_successor_events(
|
||||||
most_recent_prev_event_id
|
most_recent_prev_event_id
|
||||||
)
|
)
|
||||||
|
@ -141,6 +142,7 @@ class RoomBatchHandler:
|
||||||
_,
|
_,
|
||||||
) = await self.store.get_max_depth_of(event_ids)
|
) = await self.store.get_max_depth_of(event_ids)
|
||||||
# mapping from (type, state_key) -> state_event_id
|
# mapping from (type, state_key) -> state_event_id
|
||||||
|
assert most_recent_event_id is not None
|
||||||
prev_state_map = await self.state_store.get_state_ids_for_event(
|
prev_state_map = await self.state_store.get_state_ids_for_event(
|
||||||
most_recent_event_id
|
most_recent_event_id
|
||||||
)
|
)
|
||||||
|
|
|
@ -38,6 +38,7 @@ from synapse.event_auth import get_named_level, get_power_level_event
|
||||||
from synapse.events import EventBase
|
from synapse.events import EventBase
|
||||||
from synapse.events.snapshot import EventContext
|
from synapse.events.snapshot import EventContext
|
||||||
from synapse.handlers.profile import MAX_AVATAR_URL_LEN, MAX_DISPLAYNAME_LEN
|
from synapse.handlers.profile import MAX_AVATAR_URL_LEN, MAX_DISPLAYNAME_LEN
|
||||||
|
from synapse.storage.state import StateFilter
|
||||||
from synapse.types import (
|
from synapse.types import (
|
||||||
JsonDict,
|
JsonDict,
|
||||||
Requester,
|
Requester,
|
||||||
|
@ -362,7 +363,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
||||||
historical=historical,
|
historical=historical,
|
||||||
)
|
)
|
||||||
|
|
||||||
prev_state_ids = await context.get_prev_state_ids()
|
prev_state_ids = await context.get_prev_state_ids(
|
||||||
|
StateFilter.from_types([(EventTypes.Member, None)])
|
||||||
|
)
|
||||||
|
|
||||||
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
|
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
|
||||||
|
|
||||||
|
@ -1160,7 +1163,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
||||||
else:
|
else:
|
||||||
requester = types.create_requester(target_user)
|
requester = types.create_requester(target_user)
|
||||||
|
|
||||||
prev_state_ids = await context.get_prev_state_ids()
|
prev_state_ids = await context.get_prev_state_ids(
|
||||||
|
StateFilter.from_types([(EventTypes.GuestAccess, None)])
|
||||||
|
)
|
||||||
if event.membership == Membership.JOIN:
|
if event.membership == Membership.JOIN:
|
||||||
if requester.is_guest:
|
if requester.is_guest:
|
||||||
guest_can_join = await self._can_guest_join(prev_state_ids)
|
guest_can_join = await self._can_guest_join(prev_state_ids)
|
||||||
|
|
@@ -562,8 +562,13 @@ class RoomSummaryHandler:
 if join_rules_event_id:
 join_rules_event = await self._store.get_event(join_rules_event_id)
 join_rule = join_rules_event.content.get("join_rule")
-if join_rule == JoinRules.PUBLIC or (
+if (
-room_version.msc2403_knocking and join_rule == JoinRules.KNOCK
+join_rule == JoinRules.PUBLIC
+or (room_version.msc2403_knocking and join_rule == JoinRules.KNOCK)
+or (
+room_version.msc3787_knock_restricted_join_rule
+and join_rule == JoinRules.KNOCK_RESTRICTED
+)
 ):
 return True
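The hunk above extends the room-summary accessibility check to cover MSC3787's knock_restricted join rule, alongside public and (where the room version supports it) knock. The same predicate, written as a hedged standalone helper with the room-version capabilities passed in as booleans and join-rule strings assumed to match the spec values:

def join_rule_makes_room_visible(
    join_rule: str,
    msc2403_knocking: bool,
    msc3787_knock_restricted_join_rule: bool,
) -> bool:
    return (
        join_rule == "public"
        or (msc2403_knocking and join_rule == "knock")
        or (
            msc3787_knock_restricted_join_rule
            and join_rule == "knock_restricted"
        )
    )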
@ -24,7 +24,7 @@ from synapse.api.errors import NotFoundError, SynapseError
|
||||||
from synapse.api.filtering import Filter
|
from synapse.api.filtering import Filter
|
||||||
from synapse.events import EventBase
|
from synapse.events import EventBase
|
||||||
from synapse.storage.state import StateFilter
|
from synapse.storage.state import StateFilter
|
||||||
from synapse.types import JsonDict, UserID
|
from synapse.types import JsonDict, StreamKeyType, UserID
|
||||||
from synapse.visibility import filter_events_for_client
|
from synapse.visibility import filter_events_for_client
|
||||||
|
|
||||||
if TYPE_CHECKING:
|
if TYPE_CHECKING:
|
||||||
|
@ -655,11 +655,11 @@ class SearchHandler:
|
||||||
"events_before": events_before,
|
"events_before": events_before,
|
||||||
"events_after": events_after,
|
"events_after": events_after,
|
||||||
"start": await now_token.copy_and_replace(
|
"start": await now_token.copy_and_replace(
|
||||||
"room_key", res.start
|
StreamKeyType.ROOM, res.start
|
||||||
|
).to_string(self.store),
|
||||||
|
"end": await now_token.copy_and_replace(
|
||||||
|
StreamKeyType.ROOM, res.end
|
||||||
).to_string(self.store),
|
).to_string(self.store),
|
||||||
"end": await now_token.copy_and_replace("room_key", res.end).to_string(
|
|
||||||
self.store
|
|
||||||
),
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if include_profile:
|
if include_profile:
|
||||||
|
|
|
@ -37,6 +37,7 @@ from synapse.types import (
|
||||||
Requester,
|
Requester,
|
||||||
RoomStreamToken,
|
RoomStreamToken,
|
||||||
StateMap,
|
StateMap,
|
||||||
|
StreamKeyType,
|
||||||
StreamToken,
|
StreamToken,
|
||||||
UserID,
|
UserID,
|
||||||
)
|
)
|
||||||
|
@ -410,10 +411,10 @@ class SyncHandler:
|
||||||
set_tag(SynapseTags.SYNC_RESULT, bool(sync_result))
|
set_tag(SynapseTags.SYNC_RESULT, bool(sync_result))
|
||||||
return sync_result
|
return sync_result
|
||||||
|
|
||||||
async def push_rules_for_user(self, user: UserID) -> JsonDict:
|
async def push_rules_for_user(self, user: UserID) -> Dict[str, Dict[str, list]]:
|
||||||
user_id = user.to_string()
|
user_id = user.to_string()
|
||||||
rules = await self.store.get_push_rules_for_user(user_id)
|
rules_raw = await self.store.get_push_rules_for_user(user_id)
|
||||||
rules = format_push_rules_for_user(user, rules)
|
rules = format_push_rules_for_user(user, rules_raw)
|
||||||
return rules
|
return rules
|
||||||
|
|
||||||
async def ephemeral_by_room(
|
async def ephemeral_by_room(
|
||||||
|
@ -449,7 +450,7 @@ class SyncHandler:
|
||||||
room_ids=room_ids,
|
room_ids=room_ids,
|
||||||
is_guest=sync_config.is_guest,
|
is_guest=sync_config.is_guest,
|
||||||
)
|
)
|
||||||
now_token = now_token.copy_and_replace("typing_key", typing_key)
|
now_token = now_token.copy_and_replace(StreamKeyType.TYPING, typing_key)
|
||||||
|
|
||||||
ephemeral_by_room: JsonDict = {}
|
ephemeral_by_room: JsonDict = {}
|
||||||
|
|
||||||
|
@ -471,7 +472,7 @@ class SyncHandler:
|
||||||
room_ids=room_ids,
|
room_ids=room_ids,
|
||||||
is_guest=sync_config.is_guest,
|
is_guest=sync_config.is_guest,
|
||||||
)
|
)
|
||||||
now_token = now_token.copy_and_replace("receipt_key", receipt_key)
|
now_token = now_token.copy_and_replace(StreamKeyType.RECEIPT, receipt_key)
|
||||||
|
|
||||||
for event in receipts:
|
for event in receipts:
|
||||||
room_id = event["room_id"]
|
room_id = event["room_id"]
|
||||||
|
@ -537,7 +538,9 @@ class SyncHandler:
|
||||||
prev_batch_token = now_token
|
prev_batch_token = now_token
|
||||||
if recents:
|
if recents:
|
||||||
room_key = recents[0].internal_metadata.before
|
room_key = recents[0].internal_metadata.before
|
||||||
prev_batch_token = now_token.copy_and_replace("room_key", room_key)
|
prev_batch_token = now_token.copy_and_replace(
|
||||||
|
StreamKeyType.ROOM, room_key
|
||||||
|
)
|
||||||
|
|
||||||
return TimelineBatch(
|
return TimelineBatch(
|
||||||
events=recents, prev_batch=prev_batch_token, limited=False
|
events=recents, prev_batch=prev_batch_token, limited=False
|
||||||
|
@ -611,7 +614,7 @@ class SyncHandler:
|
||||||
recents = recents[-timeline_limit:]
|
recents = recents[-timeline_limit:]
|
||||||
room_key = recents[0].internal_metadata.before
|
room_key = recents[0].internal_metadata.before
|
||||||
|
|
||||||
prev_batch_token = now_token.copy_and_replace("room_key", room_key)
|
prev_batch_token = now_token.copy_and_replace(StreamKeyType.ROOM, room_key)
|
||||||
|
|
||||||
# Don't bother to bundle aggregations if the timeline is unlimited,
|
# Don't bother to bundle aggregations if the timeline is unlimited,
|
||||||
# as clients will have all the necessary information.
|
# as clients will have all the necessary information.
|
||||||
|
@ -1397,7 +1400,7 @@ class SyncHandler:
|
||||||
now_token.to_device_key,
|
now_token.to_device_key,
|
||||||
)
|
)
|
||||||
sync_result_builder.now_token = now_token.copy_and_replace(
|
sync_result_builder.now_token = now_token.copy_and_replace(
|
||||||
"to_device_key", stream_id
|
StreamKeyType.TO_DEVICE, stream_id
|
||||||
)
|
)
|
||||||
sync_result_builder.to_device = messages
|
sync_result_builder.to_device = messages
|
||||||
else:
|
else:
|
||||||
|
@ -1502,7 +1505,7 @@ class SyncHandler:
|
||||||
)
|
)
|
||||||
assert presence_key
|
assert presence_key
|
||||||
sync_result_builder.now_token = now_token.copy_and_replace(
|
sync_result_builder.now_token = now_token.copy_and_replace(
|
||||||
"presence_key", presence_key
|
StreamKeyType.PRESENCE, presence_key
|
||||||
)
|
)
|
||||||
|
|
||||||
extra_users_ids = set(newly_joined_or_invited_users)
|
extra_users_ids = set(newly_joined_or_invited_users)
|
||||||
|
@ -1825,7 +1828,7 @@ class SyncHandler:
|
||||||
# stream token as it'll only be used in the context of this
|
# stream token as it'll only be used in the context of this
|
||||||
# room. (c.f. the docstring of `to_room_stream_token`).
|
# room. (c.f. the docstring of `to_room_stream_token`).
|
||||||
leave_token = since_token.copy_and_replace(
|
leave_token = since_token.copy_and_replace(
|
||||||
"room_key", leave_position.to_room_stream_token()
|
StreamKeyType.ROOM, leave_position.to_room_stream_token()
|
||||||
)
|
)
|
||||||
|
|
||||||
# If this is an out of band message, like a remote invite
|
# If this is an out of band message, like a remote invite
|
||||||
|
@ -1874,7 +1877,9 @@ class SyncHandler:
|
||||||
if room_entry:
|
if room_entry:
|
||||||
events, start_key = room_entry
|
events, start_key = room_entry
|
||||||
|
|
||||||
prev_batch_token = now_token.copy_and_replace("room_key", start_key)
|
prev_batch_token = now_token.copy_and_replace(
|
||||||
|
StreamKeyType.ROOM, start_key
|
||||||
|
)
|
||||||
|
|
||||||
entry = RoomSyncResultBuilder(
|
entry = RoomSyncResultBuilder(
|
||||||
room_id=room_id,
|
room_id=room_id,
|
||||||
|
@ -1971,7 +1976,7 @@ class SyncHandler:
|
||||||
continue
|
continue
|
||||||
|
|
||||||
leave_token = now_token.copy_and_replace(
|
leave_token = now_token.copy_and_replace(
|
||||||
"room_key", RoomStreamToken(None, event.stream_ordering)
|
StreamKeyType.ROOM, RoomStreamToken(None, event.stream_ordering)
|
||||||
)
|
)
|
||||||
room_entries.append(
|
room_entries.append(
|
||||||
RoomSyncResultBuilder(
|
RoomSyncResultBuilder(
|
||||||
|
|
|
@ -25,7 +25,7 @@ from synapse.metrics.background_process_metrics import (
|
||||||
)
|
)
|
||||||
from synapse.replication.tcp.streams import TypingStream
|
from synapse.replication.tcp.streams import TypingStream
|
||||||
from synapse.streams import EventSource
|
from synapse.streams import EventSource
|
||||||
from synapse.types import JsonDict, Requester, UserID, get_domain_from_id
|
from synapse.types import JsonDict, Requester, StreamKeyType, UserID, get_domain_from_id
|
||||||
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
||||||
from synapse.util.metrics import Measure
|
from synapse.util.metrics import Measure
|
||||||
from synapse.util.wheel_timer import WheelTimer
|
from synapse.util.wheel_timer import WheelTimer
|
||||||
|
@ -382,7 +382,7 @@ class TypingWriterHandler(FollowerTypingHandler):
|
||||||
)
|
)
|
||||||
|
|
||||||
self.notifier.on_new_event(
|
self.notifier.on_new_event(
|
||||||
"typing_key", self._latest_room_serial, rooms=[member.room_id]
|
StreamKeyType.TYPING, self._latest_room_serial, rooms=[member.room_id]
|
||||||
)
|
)
|
||||||
|
|
||||||
async def get_all_typing_updates(
|
async def get_all_typing_updates(
|
||||||
|
|
|
@ -43,8 +43,10 @@ from twisted.internet import defer, error as twisted_error, protocol, ssl
|
||||||
from twisted.internet.address import IPv4Address, IPv6Address
|
from twisted.internet.address import IPv4Address, IPv6Address
|
||||||
from twisted.internet.interfaces import (
|
from twisted.internet.interfaces import (
|
||||||
IAddress,
|
IAddress,
|
||||||
|
IDelayedCall,
|
||||||
IHostResolution,
|
IHostResolution,
|
||||||
IReactorPluggableNameResolver,
|
IReactorPluggableNameResolver,
|
||||||
|
IReactorTime,
|
||||||
IResolutionReceiver,
|
IResolutionReceiver,
|
||||||
ITCPTransport,
|
ITCPTransport,
|
||||||
)
|
)
|
||||||
|
@ -121,13 +123,15 @@ def check_against_blacklist(
|
||||||
_EPSILON = 0.00000001
|
_EPSILON = 0.00000001
|
||||||
|
|
||||||
|
|
||||||
def _make_scheduler(reactor):
|
def _make_scheduler(
|
||||||
|
reactor: IReactorTime,
|
||||||
|
) -> Callable[[Callable[[], object]], IDelayedCall]:
|
||||||
"""Makes a schedular suitable for a Cooperator using the given reactor.
|
"""Makes a schedular suitable for a Cooperator using the given reactor.
|
||||||
|
|
||||||
(This is effectively just a copy from `twisted.internet.task`)
|
(This is effectively just a copy from `twisted.internet.task`)
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def _scheduler(x):
|
def _scheduler(x: Callable[[], object]) -> IDelayedCall:
|
||||||
return reactor.callLater(_EPSILON, x)
|
return reactor.callLater(_EPSILON, x)
|
||||||
|
|
||||||
return _scheduler
|
return _scheduler
|
||||||
|
@ -349,7 +353,7 @@ class SimpleHttpClient:
|
||||||
# XXX: The justification for using the cache factor here is that larger instances
|
# XXX: The justification for using the cache factor here is that larger instances
|
||||||
# will need both more cache and more connections.
|
# will need both more cache and more connections.
|
||||||
# Still, this should probably be a separate dial
|
# Still, this should probably be a separate dial
|
||||||
pool.maxPersistentPerHost = max((100 * hs.config.caches.global_factor, 5))
|
pool.maxPersistentPerHost = max(int(100 * hs.config.caches.global_factor), 5)
|
||||||
pool.cachedConnectionTimeout = 2 * 60
|
pool.cachedConnectionTimeout = 2 * 60
|
||||||
|
|
||||||
self.agent: IAgent = ProxyAgent(
|
self.agent: IAgent = ProxyAgent(
|
||||||
|
@ -776,7 +780,7 @@ class SimpleHttpClient:
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
def _timeout_to_request_timed_out_error(f: Failure):
|
def _timeout_to_request_timed_out_error(f: Failure) -> Failure:
|
||||||
if f.check(twisted_error.TimeoutError, twisted_error.ConnectingCancelledError):
|
if f.check(twisted_error.TimeoutError, twisted_error.ConnectingCancelledError):
|
||||||
# The TCP connection has its own timeout (set by the 'connectTimeout' param
|
# The TCP connection has its own timeout (set by the 'connectTimeout' param
|
||||||
# on the Agent), which raises twisted_error.TimeoutError exception.
|
# on the Agent), which raises twisted_error.TimeoutError exception.
|
||||||
|
@ -810,7 +814,7 @@ class _DiscardBodyWithMaxSizeProtocol(protocol.Protocol):
|
||||||
def __init__(self, deferred: defer.Deferred):
|
def __init__(self, deferred: defer.Deferred):
|
||||||
self.deferred = deferred
|
self.deferred = deferred
|
||||||
|
|
||||||
def _maybe_fail(self):
|
def _maybe_fail(self) -> None:
|
||||||
"""
|
"""
|
||||||
Report a max size exceed error and disconnect the first time this is called.
|
Report a max size exceed error and disconnect the first time this is called.
|
||||||
"""
|
"""
|
||||||
|
@ -934,12 +938,12 @@ class InsecureInterceptableContextFactory(ssl.ContextFactory):
|
||||||
Do not use this since it allows an attacker to intercept your communications.
|
Do not use this since it allows an attacker to intercept your communications.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self):
|
def __init__(self) -> None:
|
||||||
self._context = SSL.Context(SSL.SSLv23_METHOD)
|
self._context = SSL.Context(SSL.SSLv23_METHOD)
|
||||||
self._context.set_verify(VERIFY_NONE, lambda *_: False)
|
self._context.set_verify(VERIFY_NONE, lambda *_: False)
|
||||||
|
|
||||||
def getContext(self, hostname=None, port=None):
|
def getContext(self, hostname=None, port=None):
|
||||||
return self._context
|
return self._context
|
||||||
|
|
||||||
def creatorForNetloc(self, hostname, port):
|
def creatorForNetloc(self, hostname: bytes, port: int):
|
||||||
return self
|
return self
|
||||||
|
|
|
@ -14,15 +14,22 @@
|
||||||
|
|
||||||
import base64
|
import base64
|
||||||
import logging
|
import logging
|
||||||
from typing import Optional
|
from typing import Optional, Union
|
||||||
|
|
||||||
import attr
|
import attr
|
||||||
from zope.interface import implementer
|
from zope.interface import implementer
|
||||||
|
|
||||||
from twisted.internet import defer, protocol
|
from twisted.internet import defer, protocol
|
||||||
from twisted.internet.error import ConnectError
|
from twisted.internet.error import ConnectError
|
||||||
from twisted.internet.interfaces import IReactorCore, IStreamClientEndpoint
|
from twisted.internet.interfaces import (
|
||||||
|
IAddress,
|
||||||
|
IConnector,
|
||||||
|
IProtocol,
|
||||||
|
IReactorCore,
|
||||||
|
IStreamClientEndpoint,
|
||||||
|
)
|
||||||
from twisted.internet.protocol import ClientFactory, Protocol, connectionDone
|
from twisted.internet.protocol import ClientFactory, Protocol, connectionDone
|
||||||
|
from twisted.python.failure import Failure
|
||||||
from twisted.web import http
|
from twisted.web import http
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
@ -81,14 +88,14 @@ class HTTPConnectProxyEndpoint:
|
||||||
self._port = port
|
self._port = port
|
||||||
self._proxy_creds = proxy_creds
|
self._proxy_creds = proxy_creds
|
||||||
|
|
||||||
def __repr__(self):
|
def __repr__(self) -> str:
|
||||||
return "<HTTPConnectProxyEndpoint %s>" % (self._proxy_endpoint,)
|
return "<HTTPConnectProxyEndpoint %s>" % (self._proxy_endpoint,)
|
||||||
|
|
||||||
# Mypy encounters a false positive here: it complains that ClientFactory
|
# Mypy encounters a false positive here: it complains that ClientFactory
|
||||||
# is incompatible with IProtocolFactory. But ClientFactory inherits from
|
# is incompatible with IProtocolFactory. But ClientFactory inherits from
|
||||||
# Factory, which implements IProtocolFactory. So I think this is a bug
|
# Factory, which implements IProtocolFactory. So I think this is a bug
|
||||||
# in mypy-zope.
|
# in mypy-zope.
|
||||||
def connect(self, protocolFactory: ClientFactory): # type: ignore[override]
|
def connect(self, protocolFactory: ClientFactory) -> "defer.Deferred[IProtocol]": # type: ignore[override]
|
||||||
f = HTTPProxiedClientFactory(
|
f = HTTPProxiedClientFactory(
|
||||||
self._host, self._port, protocolFactory, self._proxy_creds
|
self._host, self._port, protocolFactory, self._proxy_creds
|
||||||
)
|
)
|
||||||
|
@ -125,10 +132,10 @@ class HTTPProxiedClientFactory(protocol.ClientFactory):
|
||||||
self.proxy_creds = proxy_creds
|
self.proxy_creds = proxy_creds
|
||||||
self.on_connection: "defer.Deferred[None]" = defer.Deferred()
|
self.on_connection: "defer.Deferred[None]" = defer.Deferred()
|
||||||
|
|
||||||
def startedConnecting(self, connector):
|
def startedConnecting(self, connector: IConnector) -> None:
|
||||||
return self.wrapped_factory.startedConnecting(connector)
|
return self.wrapped_factory.startedConnecting(connector)
|
||||||
|
|
||||||
def buildProtocol(self, addr):
|
def buildProtocol(self, addr: IAddress) -> "HTTPConnectProtocol":
|
||||||
wrapped_protocol = self.wrapped_factory.buildProtocol(addr)
|
wrapped_protocol = self.wrapped_factory.buildProtocol(addr)
|
||||||
if wrapped_protocol is None:
|
if wrapped_protocol is None:
|
||||||
raise TypeError("buildProtocol produced None instead of a Protocol")
|
raise TypeError("buildProtocol produced None instead of a Protocol")
|
||||||
|
@ -141,13 +148,13 @@ class HTTPProxiedClientFactory(protocol.ClientFactory):
|
||||||
self.proxy_creds,
|
self.proxy_creds,
|
||||||
)
|
)
|
||||||
|
|
||||||
def clientConnectionFailed(self, connector, reason):
|
def clientConnectionFailed(self, connector: IConnector, reason: Failure) -> None:
|
||||||
logger.debug("Connection to proxy failed: %s", reason)
|
logger.debug("Connection to proxy failed: %s", reason)
|
||||||
if not self.on_connection.called:
|
if not self.on_connection.called:
|
||||||
self.on_connection.errback(reason)
|
self.on_connection.errback(reason)
|
||||||
return self.wrapped_factory.clientConnectionFailed(connector, reason)
|
return self.wrapped_factory.clientConnectionFailed(connector, reason)
|
||||||
|
|
||||||
def clientConnectionLost(self, connector, reason):
|
def clientConnectionLost(self, connector: IConnector, reason: Failure) -> None:
|
||||||
logger.debug("Connection to proxy lost: %s", reason)
|
logger.debug("Connection to proxy lost: %s", reason)
|
||||||
if not self.on_connection.called:
|
if not self.on_connection.called:
|
||||||
self.on_connection.errback(reason)
|
self.on_connection.errback(reason)
|
||||||
|
@ -191,10 +198,10 @@ class HTTPConnectProtocol(protocol.Protocol):
|
||||||
)
|
)
|
||||||
self.http_setup_client.on_connected.addCallback(self.proxyConnected)
|
self.http_setup_client.on_connected.addCallback(self.proxyConnected)
|
||||||
|
|
||||||
def connectionMade(self):
|
def connectionMade(self) -> None:
|
||||||
self.http_setup_client.makeConnection(self.transport)
|
self.http_setup_client.makeConnection(self.transport)
|
||||||
|
|
||||||
def connectionLost(self, reason=connectionDone):
|
def connectionLost(self, reason: Failure = connectionDone) -> None:
|
||||||
if self.wrapped_protocol.connected:
|
if self.wrapped_protocol.connected:
|
||||||
self.wrapped_protocol.connectionLost(reason)
|
self.wrapped_protocol.connectionLost(reason)
|
||||||
|
|
||||||
|
@ -203,7 +210,7 @@ class HTTPConnectProtocol(protocol.Protocol):
|
||||||
if not self.connected_deferred.called:
|
if not self.connected_deferred.called:
|
||||||
self.connected_deferred.errback(reason)
|
self.connected_deferred.errback(reason)
|
||||||
|
|
||||||
def proxyConnected(self, _):
|
def proxyConnected(self, _: Union[None, "defer.Deferred[None]"]) -> None:
|
||||||
self.wrapped_protocol.makeConnection(self.transport)
|
self.wrapped_protocol.makeConnection(self.transport)
|
||||||
|
|
||||||
self.connected_deferred.callback(self.wrapped_protocol)
|
self.connected_deferred.callback(self.wrapped_protocol)
|
||||||
|
@ -213,7 +220,7 @@ class HTTPConnectProtocol(protocol.Protocol):
|
||||||
if buf:
|
if buf:
|
||||||
self.wrapped_protocol.dataReceived(buf)
|
self.wrapped_protocol.dataReceived(buf)
|
||||||
|
|
||||||
def dataReceived(self, data: bytes):
|
def dataReceived(self, data: bytes) -> None:
|
||||||
# if we've set up the HTTP protocol, we can send the data there
|
# if we've set up the HTTP protocol, we can send the data there
|
||||||
if self.wrapped_protocol.connected:
|
if self.wrapped_protocol.connected:
|
||||||
return self.wrapped_protocol.dataReceived(data)
|
return self.wrapped_protocol.dataReceived(data)
|
||||||
|
@ -243,7 +250,7 @@ class HTTPConnectSetupClient(http.HTTPClient):
|
||||||
self.proxy_creds = proxy_creds
|
self.proxy_creds = proxy_creds
|
||||||
self.on_connected: "defer.Deferred[None]" = defer.Deferred()
|
self.on_connected: "defer.Deferred[None]" = defer.Deferred()
|
||||||
|
|
||||||
def connectionMade(self):
|
def connectionMade(self) -> None:
|
||||||
logger.debug("Connected to proxy, sending CONNECT")
|
logger.debug("Connected to proxy, sending CONNECT")
|
||||||
self.sendCommand(b"CONNECT", b"%s:%d" % (self.host, self.port))
|
self.sendCommand(b"CONNECT", b"%s:%d" % (self.host, self.port))
|
||||||
|
|
||||||
|
@ -257,14 +264,14 @@ class HTTPConnectSetupClient(http.HTTPClient):
|
||||||
|
|
||||||
self.endHeaders()
|
self.endHeaders()
|
||||||
|
|
||||||
def handleStatus(self, version: bytes, status: bytes, message: bytes):
|
def handleStatus(self, version: bytes, status: bytes, message: bytes) -> None:
|
||||||
logger.debug("Got Status: %s %s %s", status, message, version)
|
logger.debug("Got Status: %s %s %s", status, message, version)
|
||||||
if status != b"200":
|
if status != b"200":
|
||||||
raise ProxyConnectError(f"Unexpected status on CONNECT: {status!s}")
|
raise ProxyConnectError(f"Unexpected status on CONNECT: {status!s}")
|
||||||
|
|
||||||
def handleEndHeaders(self):
|
def handleEndHeaders(self) -> None:
|
||||||
logger.debug("End Headers")
|
logger.debug("End Headers")
|
||||||
self.on_connected.callback(None)
|
self.on_connected.callback(None)
|
||||||
|
|
||||||
def handleResponse(self, body):
|
def handleResponse(self, body: bytes) -> None:
|
||||||
pass
|
pass
|
||||||
|
|
|
@@ -239,7 +239,7 @@ class MatrixHostnameEndpointFactory:

         self._srv_resolver = srv_resolver

-    def endpointForURI(self, parsed_uri: URI):
+    def endpointForURI(self, parsed_uri: URI) -> "MatrixHostnameEndpoint":
        return MatrixHostnameEndpoint(
            self._reactor,
            self._proxy_reactor,

@@ -16,7 +16,7 @@
 import logging
 import random
 import time
-from typing import Callable, Dict, List
+from typing import Any, Callable, Dict, List

 import attr

@@ -109,7 +109,7 @@ class SrvResolver:

     def __init__(
         self,
-        dns_client=client,
+        dns_client: Any = client,
         cache: Dict[bytes, List[Server]] = SERVER_CACHE,
         get_time: Callable[[], float] = time.time,
     ):

@@ -74,9 +74,9 @@ _well_known_cache: TTLCache[bytes, Optional[bytes]] = TTLCache("well-known")
 _had_valid_well_known_cache: TTLCache[bytes, bool] = TTLCache("had-valid-well-known")


-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class WellKnownLookupResult:
-    delegated_server = attr.ib()
+    delegated_server: Optional[bytes]


 class WellKnownResolver:
@@ -336,4 +336,4 @@ def _parse_cache_control(headers: Headers) -> Dict[bytes, Optional[bytes]]:
 class _FetchWellKnownFailure(Exception):
     # True if we didn't get a non-5xx HTTP response, i.e. this may or may not be
     # a temporary failure.
-    temporary = attr.ib()
+    temporary: bool = attr.ib()
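The `WellKnownLookupResult` and `_FetchWellKnownFailure` hunks above move from explicit `attr.ib()` fields to annotated fields with `auto_attribs=True`. A minimal, self-contained sketch of that attrs pattern (the class name and values here are illustrative, not taken from the diff):

import attr
from typing import Optional

@attr.s(slots=True, frozen=True, auto_attribs=True)
class LookupResult:
    # With auto_attribs=True every annotated class attribute becomes an attrs
    # field, so no explicit attr.ib() call is needed.
    delegated_server: Optional[bytes]
    temporary: bool = False

result = LookupResult(delegated_server=b"matrix.example.com")
assert result.temporary is False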
@@ -23,6 +23,8 @@ from http import HTTPStatus
 from io import BytesIO, StringIO
 from typing import (
     TYPE_CHECKING,
+    Any,
+    BinaryIO,
     Callable,
     Dict,
     Generic,
@@ -44,7 +46,7 @@ from typing_extensions import Literal
 from twisted.internet import defer
 from twisted.internet.error import DNSLookupError
 from twisted.internet.interfaces import IReactorTime
-from twisted.internet.task import _EPSILON, Cooperator
+from twisted.internet.task import Cooperator
 from twisted.web.client import ResponseFailed
 from twisted.web.http_headers import Headers
 from twisted.web.iweb import IBodyProducer, IResponse
@@ -58,11 +60,13 @@ from synapse.api.errors import (
     RequestSendFailed,
     SynapseError,
 )
+from synapse.crypto.context_factory import FederationPolicyForHTTPS
 from synapse.http import QuieterFileBodyProducer
 from synapse.http.client import (
     BlacklistingAgentWrapper,
     BodyExceededMaxSize,
     ByteWriteable,
+    _make_scheduler,
     encode_query_args,
     read_body_with_max_size,
 )
@@ -181,7 +185,7 @@ class JsonParser(ByteParser[Union[JsonDict, list]]):

     CONTENT_TYPE = "application/json"

-    def __init__(self):
+    def __init__(self) -> None:
         self._buffer = StringIO()
         self._binary_wrapper = BinaryIOWrapper(self._buffer)

@@ -299,7 +303,9 @@ async def _handle_response(
 class BinaryIOWrapper:
     """A wrapper for a TextIO which converts from bytes on the fly."""

-    def __init__(self, file: typing.TextIO, encoding="utf-8", errors="strict"):
+    def __init__(
+        self, file: typing.TextIO, encoding: str = "utf-8", errors: str = "strict"
+    ):
         self.decoder = codecs.getincrementaldecoder(encoding)(errors)
         self.file = file

@@ -317,7 +323,11 @@ class MatrixFederationHttpClient:
     requests.
     """

-    def __init__(self, hs: "HomeServer", tls_client_options_factory):
+    def __init__(
+        self,
+        hs: "HomeServer",
+        tls_client_options_factory: Optional[FederationPolicyForHTTPS],
+    ):
         self.hs = hs
         self.signing_key = hs.signing_key
         self.server_name = hs.hostname
@@ -348,10 +358,7 @@ class MatrixFederationHttpClient:
         self.version_string_bytes = hs.version_string.encode("ascii")
         self.default_timeout = 60

-        def schedule(x):
-            self.reactor.callLater(_EPSILON, x)
-
-        self._cooperator = Cooperator(scheduler=schedule)
+        self._cooperator = Cooperator(scheduler=_make_scheduler(self.reactor))

         self._sleeper = AwakenableSleeper(self.reactor)

@@ -364,7 +371,7 @@ class MatrixFederationHttpClient:
         self,
         request: MatrixFederationRequest,
         try_trailing_slash_on_400: bool = False,
-        **send_request_args,
+        **send_request_args: Any,
     ) -> IResponse:
         """Wrapper for _send_request which can optionally retry the request
         upon receiving a combination of a 400 HTTP response code and a
@@ -740,7 +747,7 @@ class MatrixFederationHttpClient:
         for key, sig in request["signatures"][self.server_name].items():
             auth_headers.append(
                 (
-                    'X-Matrix origin=%s,key="%s",sig="%s",destination="%s"'
+                    'X-Matrix origin="%s",key="%s",sig="%s",destination="%s"'
                     % (
                         self.server_name,
                         key,
@@ -1159,7 +1166,7 @@ class MatrixFederationHttpClient:
         self,
         destination: str,
         path: str,
-        output_stream,
+        output_stream: BinaryIO,
         args: Optional[QueryParams] = None,
         retry_on_dns_fail: bool = True,
         max_size: Optional[int] = None,
@@ -1250,10 +1257,10 @@ class MatrixFederationHttpClient:
         return length, headers


-def _flatten_response_never_received(e):
+def _flatten_response_never_received(e: BaseException) -> str:
     if hasattr(e, "reasons"):
         reasons = ", ".join(
-            _flatten_response_never_received(f.value) for f in e.reasons
+            _flatten_response_never_received(f.value) for f in e.reasons  # type: ignore[attr-defined]
        )

        return "%s:[%s]" % (type(e).__name__, reasons)
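The `@@ -740,7 +747,7 @@` hunk above quotes the `origin` parameter of the federation `X-Matrix` Authorization header so that it matches the other parameters. A minimal sketch of how such a header value is assembled (the helper name and example values are illustrative, not Synapse's actual code):

def build_x_matrix_header(origin: str, key_id: str, sig_b64: str, destination: str) -> str:
    # Every parameter is quoted, including origin, as in the hunk above.
    return 'X-Matrix origin="%s",key="%s",sig="%s",destination="%s"' % (
        origin,
        key_id,
        sig_b64,
        destination,
    )

print(build_x_matrix_header("example.org", "ed25519:key1", "dGVzdA", "matrix.org"))
# X-Matrix origin="example.org",key="ed25519:key1",sig="dGVzdA",destination="matrix.org"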
@@ -245,7 +245,7 @@ def http_proxy_endpoint(
     proxy: Optional[bytes],
     reactor: IReactorCore,
     tls_options_factory: Optional[IPolicyForHTTPS],
-    **kwargs,
+    **kwargs: object,
 ) -> Tuple[Optional[IStreamClientEndpoint], Optional[ProxyCredentials]]:
     """Parses an http proxy setting and returns an endpoint for the proxy
@@ -162,7 +162,7 @@ class RequestMetrics:
         with _in_flight_requests_lock:
             _in_flight_requests.add(self)

-    def stop(self, time_sec, response_code, sent_bytes):
+    def stop(self, time_sec: float, response_code: int, sent_bytes: int) -> None:
         with _in_flight_requests_lock:
             _in_flight_requests.discard(self)

@@ -186,13 +186,13 @@ class RequestMetrics:
             )
             return

-        response_code = str(response_code)
+        response_code_str = str(response_code)

-        outgoing_responses_counter.labels(self.method, response_code).inc()
+        outgoing_responses_counter.labels(self.method, response_code_str).inc()

         response_count.labels(self.method, self.name, tag).inc()

-        response_timer.labels(self.method, self.name, tag, response_code).observe(
+        response_timer.labels(self.method, self.name, tag, response_code_str).observe(
             time_sec - self.start_ts
         )

@@ -221,7 +221,7 @@ class RequestMetrics:
         # flight.
         self.update_metrics()

-    def update_metrics(self):
+    def update_metrics(self) -> None:
         """Updates the in flight metrics with values from this request."""
         if not self.start_context:
             logger.error(
@@ -33,6 +33,7 @@ from typing import (
     Optional,
     Pattern,
     Tuple,
+    TypeVar,
     Union,
 )

@@ -92,6 +93,68 @@ HTML_ERROR_TEMPLATE = """<!DOCTYPE html>
 HTTP_STATUS_REQUEST_CANCELLED = 499


+F = TypeVar("F", bound=Callable[..., Any])
+
+
+_cancellable_method_names = frozenset(
+    {
+        # `RestServlet`, `BaseFederationServlet` and `BaseFederationServerServlet`
+        # methods
+        "on_GET",
+        "on_PUT",
+        "on_POST",
+        "on_DELETE",
+        # `_AsyncResource`, `DirectServeHtmlResource` and `DirectServeJsonResource`
+        # methods
+        "_async_render_GET",
+        "_async_render_PUT",
+        "_async_render_POST",
+        "_async_render_DELETE",
+        "_async_render_OPTIONS",
+        # `ReplicationEndpoint` methods
+        "_handle_request",
+    }
+)
+
+
+def cancellable(method: F) -> F:
+    """Marks a servlet method as cancellable.
+
+    Methods with this decorator will be cancelled if the client disconnects before we
+    finish processing the request.
+
+    During cancellation, `Deferred.cancel()` will be invoked on the `Deferred` wrapping
+    the method. The `cancel()` call will propagate down to the `Deferred` that is
+    currently being waited on. That `Deferred` will raise a `CancelledError`, which will
+    propagate up, as per normal exception handling.
+
+    Before applying this decorator to a new endpoint, you MUST recursively check
+    that all `await`s in the function are on `async` functions or `Deferred`s that
+    handle cancellation cleanly, otherwise a variety of bugs may occur, ranging from
+    premature logging context closure, to stuck requests, to database corruption.
+
+    Usage:
+        class SomeServlet(RestServlet):
+            @cancellable
+            async def on_GET(self, request: SynapseRequest) -> ...:
+                ...
+    """
+    if method.__name__ not in _cancellable_method_names and not any(
+        method.__name__.startswith(prefix) for prefix in _cancellable_method_names
+    ):
+        raise ValueError(
+            "@cancellable decorator can only be applied to servlet methods."
+        )
+
+    method.cancellable = True  # type: ignore[attr-defined]
+    return method
+
+
+def is_method_cancellable(method: Callable[..., Any]) -> bool:
+    """Checks whether a servlet method has the `@cancellable` flag."""
+    return getattr(method, "cancellable", False)
+
+
 def return_json_error(f: failure.Failure, request: SynapseRequest) -> None:
     """Sends a JSON error response to clients."""

@@ -253,6 +316,9 @@ class HttpServer(Protocol):
         If the regex contains groups these gets passed to the callback via
         an unpacked tuple.

+        The callback may be marked with the `@cancellable` decorator, which will
+        cause request processing to be cancelled when clients disconnect early.
+
         Args:
             method: The HTTP method to listen to.
             path_patterns: The regex used to match requests.
@@ -283,7 +349,9 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):

     def render(self, request: SynapseRequest) -> int:
         """This gets called by twisted every time someone sends us a request."""
-        defer.ensureDeferred(self._async_render_wrapper(request))
+        request.render_deferred = defer.ensureDeferred(
+            self._async_render_wrapper(request)
+        )
         return NOT_DONE_YET

     @wrap_async_request_handler
@@ -319,6 +387,8 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):

         method_handler = getattr(self, "_async_render_%s" % (request_method,), None)
         if method_handler:
+            request.is_render_cancellable = is_method_cancellable(method_handler)
+
             raw_callback_return = method_handler(request)

             # Is it synchronous? We'll allow this for now.
@@ -479,6 +549,8 @@ class JsonResource(DirectServeJsonResource):
     async def _async_render(self, request: SynapseRequest) -> Tuple[int, Any]:
         callback, servlet_classname, group_dict = self._get_handler_for_request(request)

+        request.is_render_cancellable = is_method_cancellable(callback)
+
         # Make sure we have an appropriate name for this handler in prometheus
         # (rather than the default of JsonResource).
         request.request_metrics.name = servlet_classname
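The `cancellable` helpers added above boil down to setting and reading a flag on the servlet method. A simplified, standalone sketch of those mechanics (it drops the method-name validation, and `SomeServlet`/`on_GET` are illustrative names, not part of the diff):

from typing import Any, Callable

def cancellable(method: Callable[..., Any]) -> Callable[..., Any]:
    # Mark the method; the real decorator also validates the method name.
    method.cancellable = True  # type: ignore[attr-defined]
    return method

def is_method_cancellable(method: Callable[..., Any]) -> bool:
    return getattr(method, "cancellable", False)

class SomeServlet:
    @cancellable
    async def on_GET(self, request: Any) -> None:
        ...

assert is_method_cancellable(SomeServlet.on_GET)
assert not is_method_cancellable(SomeServlet.__init__)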
@@ -19,6 +19,7 @@ from typing import TYPE_CHECKING, Any, Generator, Optional, Tuple, Union
 import attr
 from zope.interface import implementer

+from twisted.internet.defer import Deferred
 from twisted.internet.interfaces import IAddress, IReactorTime
 from twisted.python.failure import Failure
 from twisted.web.http import HTTPChannel
@@ -91,6 +92,14 @@ class SynapseRequest(Request):
         # we can't yet create the logcontext, as we don't know the method.
         self.logcontext: Optional[LoggingContext] = None

+        # The `Deferred` to cancel if the client disconnects early and
+        # `is_render_cancellable` is set. Expected to be set by `Resource.render`.
+        self.render_deferred: Optional["Deferred[None]"] = None
+        # A boolean indicating whether `render_deferred` should be cancelled if the
+        # client disconnects early. Expected to be set by the coroutine started by
+        # `Resource.render`, if rendering is asynchronous.
+        self.is_render_cancellable = False
+
         global _next_request_seq
         self.request_seq = _next_request_seq
         _next_request_seq += 1
@@ -357,7 +366,21 @@ class SynapseRequest(Request):
                 {"event": "client connection lost", "reason": str(reason.value)}
             )

-        if not self._is_processing:
+        if self._is_processing:
+            if self.is_render_cancellable:
+                if self.render_deferred is not None:
+                    # Throw a cancellation into the request processing, in the hope
+                    # that it will finish up sooner than it normally would.
+                    # The `self.processing()` context manager will call
+                    # `_finished_processing()` when done.
+                    with PreserveLoggingContext():
+                        self.render_deferred.cancel()
+                else:
+                    logger.error(
+                        "Connection from client lost, but have no Deferred to "
+                        "cancel even though the request is marked as cancellable."
+                    )
+        else:
             self._finished_processing()

     def _started_processing(self, servlet_name: str) -> None:
@@ -31,7 +31,11 @@ from twisted.internet.endpoints import (
     TCP4ClientEndpoint,
     TCP6ClientEndpoint,
 )
-from twisted.internet.interfaces import IPushProducer, IStreamClientEndpoint
+from twisted.internet.interfaces import (
+    IPushProducer,
+    IReactorTCP,
+    IStreamClientEndpoint,
+)
 from twisted.internet.protocol import Factory, Protocol
 from twisted.internet.tcp import Connection
 from twisted.python.failure import Failure
@@ -59,14 +63,14 @@ class LogProducer:
     _buffer: Deque[logging.LogRecord]
     _paused: bool = attr.ib(default=False, init=False)

-    def pauseProducing(self):
+    def pauseProducing(self) -> None:
         self._paused = True

-    def stopProducing(self):
+    def stopProducing(self) -> None:
         self._paused = True
         self._buffer = deque()

-    def resumeProducing(self):
+    def resumeProducing(self) -> None:
         # If we're already producing, nothing to do.
         self._paused = False

@@ -102,8 +106,8 @@ class RemoteHandler(logging.Handler):
         host: str,
         port: int,
         maximum_buffer: int = 1000,
-        level=logging.NOTSET,
-        _reactor=None,
+        level: int = logging.NOTSET,
+        _reactor: Optional[IReactorTCP] = None,
     ):
         super().__init__(level=level)
         self.host = host
@@ -118,7 +122,7 @@ class RemoteHandler(logging.Handler):
         if _reactor is None:
             from twisted.internet import reactor

-            _reactor = reactor
+            _reactor = reactor  # type: ignore[assignment]

         try:
             ip = ip_address(self.host)
@@ -139,7 +143,7 @@ class RemoteHandler(logging.Handler):
         self._stopping = False
         self._connect()

-    def close(self):
+    def close(self) -> None:
         self._stopping = True
         self._service.stopService()
@@ -16,6 +16,8 @@
 import logging
 import traceback
 from io import StringIO
+from types import TracebackType
+from typing import Optional, Tuple, Type


 class LogFormatter(logging.Formatter):
@@ -28,10 +30,14 @@ class LogFormatter(logging.Formatter):
     where it was caught are logged).
     """

-    def __init__(self, *args, **kwargs):
-        super().__init__(*args, **kwargs)
-
-    def formatException(self, ei):
+    def formatException(
+        self,
+        ei: Tuple[
+            Optional[Type[BaseException]],
+            Optional[BaseException],
+            Optional[TracebackType],
+        ],
+    ) -> str:
         sio = StringIO()
         (typ, val, tb) = ei
@@ -49,7 +49,7 @@ class PeriodicallyFlushingMemoryHandler(MemoryHandler):
         )
         self._flushing_thread.start()

-        def on_reactor_running():
+        def on_reactor_running() -> None:
             self._reactor_started = True

         reactor_to_use: IReactorCore
@@ -74,7 +74,7 @@ class PeriodicallyFlushingMemoryHandler(MemoryHandler):
         else:
             return True

-    def _flush_periodically(self):
+    def _flush_periodically(self) -> None:
         """
         Whilst this handler is active, flush the handler periodically.
         """
@@ -13,6 +13,8 @@
 # limitations under the License.import logging

 import logging
+from types import TracebackType
+from typing import Optional, Type

 from opentracing import Scope, ScopeManager

@@ -107,19 +109,26 @@ class _LogContextScope(Scope):
     and - if enter_logcontext was set - the logcontext is finished too.
     """

-    def __init__(self, manager, span, logcontext, enter_logcontext, finish_on_close):
+    def __init__(
+        self,
+        manager: LogContextScopeManager,
+        span,
+        logcontext,
+        enter_logcontext: bool,
+        finish_on_close: bool,
+    ):
         """
         Args:
-            manager (LogContextScopeManager):
+            manager:
                 the manager that is responsible for this scope.
             span (Span):
                 the opentracing span which this scope represents the local
                 lifetime for.
             logcontext (LogContext):
                 the logcontext to which this scope is attached.
-            enter_logcontext (Boolean):
+            enter_logcontext:
                 if True the logcontext will be exited when the scope is finished
-            finish_on_close (Boolean):
+            finish_on_close:
                 if True finish the span when the scope is closed
         """
         super().__init__(manager, span)
@@ -127,16 +136,21 @@ class _LogContextScope(Scope):
         self._finish_on_close = finish_on_close
         self._enter_logcontext = enter_logcontext

-    def __exit__(self, exc_type, value, traceback):
+    def __exit__(
+        self,
+        exc_type: Optional[Type[BaseException]],
+        value: Optional[BaseException],
+        traceback: Optional[TracebackType],
+    ) -> None:
         if exc_type == twisted.internet.defer._DefGen_Return:
             # filter out defer.returnValue() calls
             exc_type = value = traceback = None
         super().__exit__(exc_type, value, traceback)

-    def __str__(self):
+    def __str__(self) -> str:
         return f"Scope<{self.span}>"

-    def close(self):
+    def close(self) -> None:
         active_scope = self.manager.active
         if active_scope is not self:
             logger.error(
@@ -18,6 +18,7 @@ import os
 import re
 from typing import Iterable, Optional, overload

+import attr
 from prometheus_client import REGISTRY, Metric
 from typing_extensions import Literal

@@ -27,52 +28,24 @@ from synapse.metrics._types import Collector
 logger = logging.getLogger(__name__)


-def _setup_jemalloc_stats() -> None:
-    """Checks to see if jemalloc is loaded, and hooks up a collector to record
-    statistics exposed by jemalloc.
-    """
+@attr.s(slots=True, frozen=True, auto_attribs=True)
+class JemallocStats:
+    jemalloc: ctypes.CDLL

-    # Try to find the loaded jemalloc shared library, if any. We need to
-    # introspect into what is loaded, rather than loading whatever is on the
-    # path, as if we load a *different* jemalloc version things will seg fault.
-
-    # We look in `/proc/self/maps`, which only exists on linux.
-    if not os.path.exists("/proc/self/maps"):
-        logger.debug("Not looking for jemalloc as no /proc/self/maps exist")
-        return
-
-    # We're looking for a path at the end of the line that includes
-    # "libjemalloc".
-    regex = re.compile(r"/\S+/libjemalloc.*$")
-
-    jemalloc_path = None
-    with open("/proc/self/maps") as f:
-        for line in f:
-            match = regex.search(line.strip())
-            if match:
-                jemalloc_path = match.group()
-
-    if not jemalloc_path:
-        # No loaded jemalloc was found.
-        logger.debug("jemalloc not found")
-        return
-
-    logger.debug("Found jemalloc at %s", jemalloc_path)
-
-    jemalloc = ctypes.CDLL(jemalloc_path)
-
     @overload
     def _mallctl(
-        name: str, read: Literal[True] = True, write: Optional[int] = None
+        self, name: str, read: Literal[True] = True, write: Optional[int] = None
     ) -> int:
         ...

     @overload
-    def _mallctl(name: str, read: Literal[False], write: Optional[int] = None) -> None:
+    def _mallctl(
+        self, name: str, read: Literal[False], write: Optional[int] = None
+    ) -> None:
         ...

     def _mallctl(
-        name: str, read: bool = True, write: Optional[int] = None
+        self, name: str, read: bool = True, write: Optional[int] = None
     ) -> Optional[int]:
         """Wrapper around `mallctl` for reading and writing integers to
         jemalloc.
@@ -120,7 +93,7 @@ def _setup_jemalloc_stats() -> None:
         # Where oldp/oldlenp is a buffer where the old value will be written to
         # (if not null), and newp/newlen is the buffer with the new value to set
         # (if not null). Note that they're all references *except* newlen.
-        result = jemalloc.mallctl(
+        result = self.jemalloc.mallctl(
             name.encode("ascii"),
             input_var_ref,
             input_len_ref,
@@ -136,21 +109,80 @@ def _setup_jemalloc_stats() -> None:

         return input_var.value

-    def _jemalloc_refresh_stats() -> None:
+    def refresh_stats(self) -> None:
         """Request that jemalloc updates its internal statistics. This needs to
         be called before querying for stats, otherwise it will return stale
         values.
         """
         try:
-            _mallctl("epoch", read=False, write=1)
+            self._mallctl("epoch", read=False, write=1)
         except Exception as e:
             logger.warning("Failed to reload jemalloc stats: %s", e)

+    def get_stat(self, name: str) -> int:
+        """Request the stat of the given name at the time of the last
+        `refresh_stats` call. This may throw if we fail to read
+        the stat.
+        """
+        return self._mallctl(f"stats.{name}")
+
+
+_JEMALLOC_STATS: Optional[JemallocStats] = None
+
+
+def get_jemalloc_stats() -> Optional[JemallocStats]:
+    """Returns an interface to jemalloc, if it is being used.
+
+    Note that this will always return None until `setup_jemalloc_stats` has been
+    called.
+    """
+    return _JEMALLOC_STATS
+
+
+def _setup_jemalloc_stats() -> None:
+    """Checks to see if jemalloc is loaded, and hooks up a collector to record
+    statistics exposed by jemalloc.
+    """
+
+    global _JEMALLOC_STATS
+
+    # Try to find the loaded jemalloc shared library, if any. We need to
+    # introspect into what is loaded, rather than loading whatever is on the
+    # path, as if we load a *different* jemalloc version things will seg fault.
+
+    # We look in `/proc/self/maps`, which only exists on linux.
+    if not os.path.exists("/proc/self/maps"):
+        logger.debug("Not looking for jemalloc as no /proc/self/maps exist")
+        return
+
+    # We're looking for a path at the end of the line that includes
+    # "libjemalloc".
+    regex = re.compile(r"/\S+/libjemalloc.*$")
+
+    jemalloc_path = None
+    with open("/proc/self/maps") as f:
+        for line in f:
+            match = regex.search(line.strip())
+            if match:
+                jemalloc_path = match.group()
+
+    if not jemalloc_path:
+        # No loaded jemalloc was found.
+        logger.debug("jemalloc not found")
+        return
+
+    logger.debug("Found jemalloc at %s", jemalloc_path)
+
+    jemalloc_dll = ctypes.CDLL(jemalloc_path)
+
+    stats = JemallocStats(jemalloc_dll)
+    _JEMALLOC_STATS = stats
+
     class JemallocCollector(Collector):
         """Metrics for internal jemalloc stats."""

         def collect(self) -> Iterable[Metric]:
-            _jemalloc_refresh_stats()
+            stats.refresh_stats()

             g = GaugeMetricFamily(
                 "jemalloc_stats_app_memory_bytes",
@@ -184,7 +216,7 @@ def _setup_jemalloc_stats() -> None:
             "metadata",
         ):
             try:
-                value = _mallctl(f"stats.{t}")
+                value = stats.get_stat(t)
             except Exception as e:
                 # There was an error fetching the value, skip.
                 logger.warning("Failed to read jemalloc stats.%s: %s", t, e)
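With the refactor above, callers read jemalloc statistics through a small object interface instead of module-level helpers. A hedged usage sketch (it assumes the module path `synapse.metrics.jemalloc`, that `_setup_jemalloc_stats` has already run, and that "allocated" is one of jemalloc's standard `stats.*` keys):

from synapse.metrics.jemalloc import get_jemalloc_stats

stats = get_jemalloc_stats()
if stats is None:
    print("jemalloc is not loaded (or stats have not been set up)")
else:
    # Stats are a snapshot; refresh before reading.
    stats.refresh_stats()
    print("application bytes allocated:", stats.get_stat("allocated"))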
@@ -35,6 +35,7 @@ from typing_extensions import ParamSpec
 from twisted.internet import defer
 from twisted.web.resource import Resource

+from synapse import spam_checker_api
 from synapse.api.errors import SynapseError
 from synapse.events import EventBase
 from synapse.events.presence_router import (
@@ -47,6 +48,7 @@ from synapse.events.spamcheck import (
     CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK,
     CHECK_REGISTRATION_FOR_SPAM_CALLBACK,
     CHECK_USERNAME_FOR_SPAM_CALLBACK,
+    SHOULD_DROP_FEDERATED_EVENT_CALLBACK,
     USER_MAY_CREATE_ROOM_ALIAS_CALLBACK,
     USER_MAY_CREATE_ROOM_CALLBACK,
     USER_MAY_INVITE_CALLBACK,
@@ -139,6 +141,9 @@ are loaded into Synapse.

 PRESENCE_ALL_USERS = PresenceRouter.ALL_USERS

+ALLOW = spam_checker_api.Allow.ALLOW
+# Singleton value used to mark a message as permitted.
+
 __all__ = [
     "errors",
     "make_deferred_yieldable",
@@ -146,6 +151,7 @@ __all__ = [
     "respond_with_html",
     "run_in_background",
     "cached",
+    "Allow",
     "UserID",
     "DatabasePool",
     "LoggingTransaction",
@@ -234,6 +240,9 @@ class ModuleApi:
         self,
         *,
         check_event_for_spam: Optional[CHECK_EVENT_FOR_SPAM_CALLBACK] = None,
+        should_drop_federated_event: Optional[
+            SHOULD_DROP_FEDERATED_EVENT_CALLBACK
+        ] = None,
         user_may_join_room: Optional[USER_MAY_JOIN_ROOM_CALLBACK] = None,
         user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
         user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,
@@ -254,6 +263,7 @@ class ModuleApi:
         """
         return self._spam_checker.register_callbacks(
             check_event_for_spam=check_event_for_spam,
+            should_drop_federated_event=should_drop_federated_event,
             user_may_join_room=user_may_join_room,
             user_may_invite=user_may_invite,
             user_may_send_3pid_invite=user_may_send_3pid_invite,
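The module API hunks above plumb the new `should_drop_federated_event` hook through spam-checker callback registration. A hedged sketch of a module registering it (assuming Synapse's `register_spam_checker_callbacks` module-API method; the module class, its config handling and the blocking rule are illustrative, not from the diff; returning True asks Synapse to drop the inbound federated event):

class ExampleSpamChecker:
    def __init__(self, config: dict, api) -> None:
        self._api = api
        api.register_spam_checker_callbacks(
            should_drop_federated_event=self.should_drop_federated_event,
        )

    async def should_drop_federated_event(self, event) -> bool:
        # Drop anything sent from a hypothetical blocked homeserver.
        return event.sender.endswith(":blocked.example.com")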
@@ -15,6 +15,7 @@
 """Exception types which are exposed as part of the stable module API"""

 from synapse.api.errors import (
+    Codes,
     InvalidClientCredentialsError,
     RedirectException,
     SynapseError,
@@ -24,6 +25,7 @@ from synapse.handlers.push_rules import InvalidRuleException
 from synapse.storage.push_rule import RuleNotFoundException

 __all__ = [
+    "Codes",
     "InvalidClientCredentialsError",
     "RedirectException",
     "SynapseError",
@@ -46,6 +46,7 @@ from synapse.types import (
     JsonDict,
     PersistedEventPosition,
     RoomStreamToken,
+    StreamKeyType,
     StreamToken,
     UserID,
 )
@@ -370,7 +371,7 @@ class Notifier:

         if users or rooms:
             self.on_new_event(
-                "room_key",
+                StreamKeyType.ROOM,
                 max_room_stream_token,
                 users=users,
                 rooms=rooms,
@@ -440,7 +441,7 @@ class Notifier:
         for room in rooms:
             user_streams |= self.room_to_user_streams.get(room, set())

-        if stream_key == "to_device_key":
+        if stream_key == StreamKeyType.TO_DEVICE:
             issue9533_logger.debug(
                 "to-device messages stream id %s, awaking streams for %s",
                 new_token,
@@ -12,6 +12,80 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+"""
+This module implements the push rules & notifications portion of the Matrix
+specification.
+
+There's a few related features:
+
+* Push notifications (i.e. email or outgoing requests to a Push Gateway).
+* Calculation of unread notifications (for /sync and /notifications).
+
+When Synapse receives a new event (locally, via the Client-Server API, or via
+federation), the following occurs:
+
+1. The push rules get evaluated to generate a set of per-user actions.
+2. The event is persisted into the database.
+3. (In the background) The notifier is notified about the new event.
+
+The per-user actions are initially stored in the event_push_actions_staging table,
+before getting moved into the event_push_actions table when the event is persisted.
+The event_push_actions table is periodically summarised into the event_push_summary
+and event_push_summary_stream_ordering tables.
+
+Since push actions block an event from being persisted the generation of push
+actions is performance sensitive.
+
+The general interaction of the classes are:
+
+    +---------------------------------------------+
+    | FederationEventHandler/EventCreationHandler |
+    +---------------------------------------------+
+            |
+            v
+    +-----------------------+     +---------------------------+
+    | BulkPushRuleEvaluator |---->| PushRuleEvaluatorForEvent |
+    +-----------------------+     +---------------------------+
+            |
+            v
+    +-----------------------------+
+    | EventPushActionsWorkerStore |
+    +-----------------------------+
+
+The notifier notifies the pusher pool of the new event, which checks for affected
+users. Each user-configured pusher of the affected users then performs the
+previously calculated action.
+
+The general interaction of the classes are:
+
+    +----------+
+    | Notifier |
+    +----------+
+         |
+         v
+    +------------+     +--------------+
+    | PusherPool |---->| PusherConfig |
+    +------------+     +--------------+
+         |
+         |     +---------------+
+         +<--->| PusherFactory |
+         |     +---------------+
+         v
+    +------------------------+     +-----------------------------------------------+
+    | EmailPusher/HttpPusher |---->| EventPushActionsWorkerStore/PusherWorkerStore |
+    +------------------------+     +-----------------------------------------------+
+         |
+         v
+    +-------------------------+
+    | Mailer/SimpleHttpClient |
+    +-------------------------+
+
+The Pusher instance also calls out to various utilities for generating payloads
+(or email templates), but those interactions are not detailed in this diagram
+(and are specific to the type of pusher).
+
+"""
+
 import abc
 from typing import TYPE_CHECKING, Any, Dict, Optional
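The new module docstring above describes push-rule evaluation producing per-user actions that are staged before the event is persisted. A small illustration of the shape of those actions, using the action format from the Matrix push-rules spec (the values below are examples, not taken from the diff):

actions_by_user = {
    "@alice:example.org": ["notify", {"set_tweak": "sound", "value": "default"}],
    "@bob:example.org": ["dont_notify"],
}

# Only real actions are staged; "dont_notify" entries are filtered out, as in
# the bulk evaluator's `[x for x in rule["actions"] if x != "dont_notify"]`.
for user_id, actions in actions_by_user.items():
    effective = [x for x in actions if x != "dont_notify"]
    print(user_id, "->", effective)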
@@ -1,44 +0,0 @@
-# Copyright 2015 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-from typing import TYPE_CHECKING
-
-from synapse.events import EventBase
-from synapse.events.snapshot import EventContext
-from synapse.push.bulk_push_rule_evaluator import BulkPushRuleEvaluator
-from synapse.util.metrics import Measure
-
-if TYPE_CHECKING:
-    from synapse.server import HomeServer
-
-logger = logging.getLogger(__name__)
-
-
-class ActionGenerator:
-    def __init__(self, hs: "HomeServer"):
-        self.clock = hs.get_clock()
-        self.bulk_evaluator = BulkPushRuleEvaluator(hs)
-        # really we want to get all user ids and all profile tags too,
-        # since we want the actions for each profile tag for every user and
-        # also actions for a client with no profile tag for each user.
-        # Currently the event stream doesn't support profile tags on an
-        # event stream, so we just run the rules for a client with no profile
-        # tag (ie. we just need all the users).
-
-    async def handle_push_actions_for_event(
-        self, event: EventBase, context: EventContext
-    ) -> None:
-        with Measure(self.clock, "action_for_event_by_user"):
-            await self.bulk_evaluator.action_for_event_by_user(event, context)
@@ -20,8 +20,8 @@ import attr
 from prometheus_client import Counter

 from synapse.api.constants import EventTypes, Membership, RelationTypes
-from synapse.event_auth import get_user_power_level
-from synapse.events import EventBase
+from synapse.event_auth import auth_types_for_event, get_user_power_level
+from synapse.events import EventBase, relation_from_event
 from synapse.events.snapshot import EventContext
 from synapse.state import POWER_KEY
 from synapse.storage.databases.main.roommember import EventIdMembership
@@ -29,7 +29,9 @@ from synapse.util.async_helpers import Linearizer
 from synapse.util.caches import CacheMetric, register_cache
 from synapse.util.caches.descriptors import lru_cache
 from synapse.util.caches.lrucache import LruCache
+from synapse.util.metrics import measure_func

+from ..storage.state import StateFilter
 from .push_rule_evaluator import PushRuleEvaluatorForEvent

 if TYPE_CHECKING:
@@ -77,8 +79,8 @@ def _should_count_as_unread(event: EventBase, context: EventContext) -> bool:
         return False

     # Exclude edits.
-    relates_to = event.content.get("m.relates_to", {})
-    if relates_to.get("rel_type") == RelationTypes.REPLACE:
+    relates_to = relation_from_event(event)
+    if relates_to and relates_to.rel_type == RelationTypes.REPLACE:
         return False

     # Mark events that have a non-empty string body as unread.
@@ -105,6 +107,7 @@ class BulkPushRuleEvaluator:
     def __init__(self, hs: "HomeServer"):
         self.hs = hs
         self.store = hs.get_datastores().main
+        self.clock = hs.get_clock()
         self._event_auth_handler = hs.get_event_auth_handler()

         # Used by `RulesForRoom` to ensure only one thing mutates the cache at a
@@ -166,8 +169,12 @@ class BulkPushRuleEvaluator:
     async def _get_power_levels_and_sender_level(
         self, event: EventBase, context: EventContext
     ) -> Tuple[dict, int]:
-        prev_state_ids = await context.get_prev_state_ids()
+        event_types = auth_types_for_event(event.room_version, event)
+        prev_state_ids = await context.get_prev_state_ids(
+            StateFilter.from_types(event_types)
+        )
         pl_event_id = prev_state_ids.get(POWER_KEY)

         if pl_event_id:
             # fastpath: if there's a power level event, that's all we need, and
             # not having a power level event is an extreme edge case
@@ -185,6 +192,7 @@ class BulkPushRuleEvaluator:

         return pl_event.content if pl_event else {}, sender_level

+    @measure_func("action_for_event_by_user")
     async def action_for_event_by_user(
         self, event: EventBase, context: EventContext
     ) -> None:
@@ -192,6 +200,10 @@ class BulkPushRuleEvaluator:
         should increment the unread count, and insert the results into the
         event_push_actions_staging table.
         """
+        if event.internal_metadata.is_outlier():
+            # This can happen due to out of band memberships
+            return
+
         count_as_unread = _should_count_as_unread(event, context)

         rules_by_user = await self._get_rules_for_event(event, context)
@@ -208,8 +220,6 @@ class BulkPushRuleEvaluator:
             event, len(room_members), sender_power_level, power_levels
         )

-        condition_cache: Dict[str, bool] = {}
-
         # If the event is not a state event check if any users ignore the sender.
         if not event.is_state():
             ignorers = await self.store.ignored_by(event.sender)
@@ -247,8 +257,8 @@ class BulkPushRuleEvaluator:
                 if "enabled" in rule and not rule["enabled"]:
                     continue

-                matches = _condition_checker(
-                    evaluator, rule["conditions"], uid, display_name, condition_cache
+                matches = evaluator.check_conditions(
+                    rule["conditions"], uid, display_name
                 )
                 if matches:
                     actions = [x for x in rule["actions"] if x != "dont_notify"]
@@ -267,32 +277,6 @@ class BulkPushRuleEvaluator:
             )


-def _condition_checker(
-    evaluator: PushRuleEvaluatorForEvent,
-    conditions: List[dict],
-    uid: str,
-    display_name: Optional[str],
-    cache: Dict[str, bool],
-) -> bool:
-    for cond in conditions:
-        _cache_key = cond.get("_cache_key", None)
-        if _cache_key:
-            res = cache.get(_cache_key, None)
-            if res is False:
-                return False
-            elif res is True:
-                continue
-
-        res = evaluator.matches(cond, uid, display_name)
-        if _cache_key:
-            cache[_cache_key] = bool(res)
-
-        if not res:
-            return False
-
-    return True
-
-
 MemberMap = Dict[str, Optional[EventIdMembership]]
 Rule = Dict[str, dict]
 RulesByUser = Dict[str, List[Rule]]
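The `_get_power_levels_and_sender_level` hunk above narrows the previous-state lookup to just the event types needed for auth, via `StateFilter.from_types(auth_types_for_event(...))`. A self-contained sketch of the same idea using plain dictionaries (the names and data here are illustrative, not Synapse's types):

from typing import Dict, Iterable, Tuple

StateKey = Tuple[str, str]  # (event type, state key)

def filter_state(full_state: Dict[StateKey, str], wanted: Iterable[StateKey]) -> Dict[StateKey, str]:
    # Keep only the state entries that the caller actually asked for.
    wanted_set = set(wanted)
    return {k: v for k, v in full_state.items() if k in wanted_set}

prev_state = {
    ("m.room.power_levels", ""): "$pl_event",
    ("m.room.member", "@alice:example.org"): "$alice_membership",
    ("m.room.topic", ""): "$topic_event",
}

# Only the power levels and the sender's membership matter for this check.
narrowed = filter_state(
    prev_state,
    [("m.room.power_levels", ""), ("m.room.member", "@alice:example.org")],
)
assert ("m.room.topic", "") not in narrowed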
@@ -398,7 +398,7 @@ class HttpPusher(Pusher):
         rejected = []
         if "rejected" in resp:
             rejected = resp["rejected"]
-        else:
+        if not rejected:
             self.badge_count_last_call = badge
         return rejected
@ -129,9 +129,55 @@ class PushRuleEvaluatorForEvent:
|
||||||
# Maps strings of e.g. 'content.body' -> event["content"]["body"]
|
# Maps strings of e.g. 'content.body' -> event["content"]["body"]
|
||||||
self._value_cache = _flatten_dict(event)
|
self._value_cache = _flatten_dict(event)
|
||||||
|
|
||||||
|
# Maps cache keys to final values.
|
||||||
|
self._condition_cache: Dict[str, bool] = {}
|
||||||
|
|
||||||
|
def check_conditions(
|
||||||
|
self, conditions: List[dict], uid: str, display_name: Optional[str]
|
||||||
|
) -> bool:
|
||||||
|
"""
|
||||||
|
Returns true if a user's conditions/user ID/display name match the event.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
conditions: The user's conditions to match.
|
||||||
|
uid: The user's MXID.
|
||||||
|
display_name: The display name.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
True if all conditions match the event, False otherwise.
|
||||||
|
"""
|
||||||
|
for cond in conditions:
|
||||||
|
_cache_key = cond.get("_cache_key", None)
|
||||||
|
if _cache_key:
|
||||||
|
res = self._condition_cache.get(_cache_key, None)
|
||||||
|
if res is False:
|
||||||
|
return False
|
||||||
|
elif res is True:
|
||||||
|
continue
|
||||||
|
|
||||||
|
res = self.matches(cond, uid, display_name)
|
||||||
|
if _cache_key:
|
||||||
|
self._condition_cache[_cache_key] = bool(res)
|
||||||
|
|
||||||
|
if not res:
|
||||||
|
return False
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
def matches(
|
def matches(
|
||||||
self, condition: Dict[str, Any], user_id: str, display_name: Optional[str]
|
self, condition: Dict[str, Any], user_id: str, display_name: Optional[str]
|
||||||
) -> bool:
|
) -> bool:
|
||||||
|
"""
|
||||||
|
Returns true if a user's condition/user ID/display name match the event.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
condition: The user's condition to match.
|
||||||
|
uid: The user's MXID.
|
||||||
|
display_name: The display name, or None if there is not one.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
True if the condition matches the event, False otherwise.
|
||||||
|
"""
|
||||||
if condition["kind"] == "event_match":
|
if condition["kind"] == "event_match":
|
||||||
return self._event_match(condition, user_id)
|
return self._event_match(condition, user_id)
|
||||||
elif condition["kind"] == "contains_display_name":
|
elif condition["kind"] == "contains_display_name":
|
||||||
|
@ -146,6 +192,16 @@ class PushRuleEvaluatorForEvent:
         return True

     def _event_match(self, condition: dict, user_id: str) -> bool:
+        """
+        Check an "event_match" push rule condition.
+
+        Args:
+            condition: The "event_match" push rule condition to match.
+            user_id: The user's MXID.
+
+        Returns:
+            True if the condition matches the event, False otherwise.
+        """
         pattern = condition.get("pattern", None)

         if not pattern:
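For reference, these are the rough shapes of the condition dictionaries the evaluator handles, following the Matrix push rules spec; the `_cache_key` field is a Synapse-internal addition used by the condition cache above.

```python
# Dotted "key" paths index into the flattened event; "pattern" is a glob.
event_match_condition = {
    "kind": "event_match",
    "key": "content.body",
    "pattern": "alice*",
    # Synapse-internal: lets check_conditions() reuse the result across rules.
    "_cache_key": "_content_body_alice",
}

# No extra fields: the user's current display name is matched against content.body.
contains_display_name_condition = {"kind": "contains_display_name"}
```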
@ -167,13 +223,22 @@ class PushRuleEvaluatorForEvent:

             return _glob_matches(pattern, body, word_boundary=True)
         else:
-            haystack = self._get_value(condition["key"])
+            haystack = self._value_cache.get(condition["key"], None)
             if haystack is None:
                 return False

             return _glob_matches(pattern, haystack)

     def _contains_display_name(self, display_name: Optional[str]) -> bool:
+        """
+        Check a "contains_display_name" push rule condition.
+
+        Args:
+            display_name: The display name, or None if there is not one.
+
+        Returns:
+            True if the display name is found in the event body, False otherwise.
+        """
         if not display_name:
             return False
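`_glob_matches(pattern, body, word_boundary=True)` is only referenced here, not changed. As a rough idea of what glob-with-word-boundary matching involves, here is a hedged sketch; it is an assumption for illustration, not Synapse's implementation, which also caches compiled patterns in `regex_cache` as noted below.

```python
import re
from typing import Pattern


def glob_to_regex_sketch(glob: str, word_boundary: bool = False) -> Pattern[str]:
    """Translate a glob: '*' matches any run of characters, '?' a single one.

    With word_boundary=True the pattern is anchored on word boundaries (for
    matching inside a message body) instead of the whole string.
    """
    escaped = re.escape(glob).replace(r"\*", ".*").replace(r"\?", ".")
    if word_boundary:
        return re.compile(r"\b" + escaped + r"\b", re.IGNORECASE)
    return re.compile(r"\A" + escaped + r"\Z", re.IGNORECASE)


assert glob_to_regex_sketch("alice*", word_boundary=True).search("hi alice bob")
assert glob_to_regex_sketch("m.notice").match("m.notice")
```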
@ -191,9 +256,6 @@ class PushRuleEvaluatorForEvent:

         return bool(r.search(body))

-    def _get_value(self, dotted_key: str) -> Optional[str]:
-        return self._value_cache.get(dotted_key, None)
-

 # Caches (string, is_glob, word_boundary) -> regex for push. See _glob_matches
 regex_cache: LruCache[Tuple[str, bool, bool], Pattern] = LruCache(
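The `_value_cache` consulted in place of the removed `_get_value` helper is built by `_flatten_dict(event)`. Below is a simplified sketch of that kind of flattening; the real helper may also normalise case and skip non-string leaves, so treat this only as the general idea.

```python
from typing import Any, Dict


def flatten_dict_sketch(d: Dict[str, Any], prefix: str = "") -> Dict[str, str]:
    """Turn nested event JSON into a flat dict keyed by dotted paths."""
    out: Dict[str, str] = {}
    for key, value in d.items():
        if isinstance(value, dict):
            out.update(flatten_dict_sketch(value, prefix=f"{prefix}{key}."))
        elif isinstance(value, str):
            out[f"{prefix}{key}"] = value
    return out


event = {
    "type": "m.room.message",
    "content": {"msgtype": "m.text", "body": "hello world"},
}
flat = flatten_dict_sketch(event)
# A condition key such as "content.body" is now a plain dictionary lookup.
assert flat["content.body"] == "hello world"
```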
@ -26,7 +26,8 @@ from twisted.web.server import Request

 from synapse.api.errors import HttpResponseException, SynapseError
 from synapse.http import RequestTimedOutError
-from synapse.http.server import HttpServer
+from synapse.http.server import HttpServer, is_method_cancellable
+from synapse.http.site import SynapseRequest
 from synapse.logging import opentracing
 from synapse.logging.opentracing import trace
 from synapse.types import JsonDict
@ -310,6 +311,12 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
         url_args = list(self.PATH_ARGS)
         method = self.METHOD

+        if self.CACHE and is_method_cancellable(self._handle_request):
+            raise Exception(
+                f"{self.__class__.__name__} has been marked as cancellable, but CACHE "
+                "is set. The cancellable flag would have no effect."
+            )
+
         if self.CACHE:
             url_args.append("txn_id")
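`is_method_cancellable` is imported from `synapse.http.server`. A minimal sketch of how such a marker decorator plus check could work, and of the guard added above, is shown below; the decorator name and attribute used here are assumptions for illustration, not necessarily how the real helper is implemented.

```python
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])


def cancellable(method: F) -> F:
    """Marker decorator: tags a handler as safe to cancel mid-request."""
    method.cancellable = True  # type: ignore[attr-defined]
    return method


def is_method_cancellable_sketch(method: Callable[..., Any]) -> bool:
    """Read the marker back; mirrors how the guard above consults the flag."""
    return getattr(method, "cancellable", False)


class CachedEndpoint:
    CACHE = True

    @cancellable
    async def _handle_request(self, request: Any) -> None:
        ...


endpoint = CachedEndpoint()
assert is_method_cancellable_sketch(endpoint._handle_request)
# The guard added in the hunk above: a cached endpoint must not be cancellable,
# because the response cache would swallow the cancellation anyway.
if endpoint.CACHE and is_method_cancellable_sketch(endpoint._handle_request):
    print("misconfigured: the cancellable flag would have no effect")
```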
@ -324,7 +331,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
         )

     async def _check_auth_and_handle(
-        self, request: Request, **kwargs: Any
+        self, request: SynapseRequest, **kwargs: Any
     ) -> Tuple[int, JsonDict]:
         """Called on new incoming requests when caching is enabled. Checks
         if there is a cached response for the request and returns that,
@ -340,8 +347,18 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
         if self.CACHE:
             txn_id = kwargs.pop("txn_id")

+            # We ignore the `@cancellable` flag, since cancellation wouldn't interrupt
+            # `_handle_request` and `ResponseCache` does not handle cancellation
+            # correctly yet. In particular, there may be issues to do with logging
+            # context lifetimes.
+
             return await self.response_cache.wrap(
                 txn_id, self._handle_request, request, **kwargs
             )

+        # The `@cancellable` decorator may be applied to `_handle_request`. But we
+        # told `HttpServer.register_paths` that our handler is `_check_auth_and_handle`,
+        # so we have to set up the cancellable flag ourselves.
+        request.is_render_cancellable = is_method_cancellable(self._handle_request)
+
         return await self._handle_request(request, **kwargs)
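A simplified, asyncio-based stand-in for what `self.response_cache.wrap(txn_id, ...)` provides: every caller supplying the same transaction ID shares one in-flight computation and receives the same result, which is what makes cached replication requests retry-safe and also why cancelling a single caller is awkward. The real `ResponseCache` is Twisted-based and also retains completed results for a while, so this is only a sketch of the deduplication idea.

```python
import asyncio
from typing import Any, Awaitable, Callable, Dict, Tuple


class ResponseCacheSketch:
    """Share one in-flight call per txn_id between concurrent callers."""

    def __init__(self) -> None:
        self._pending: Dict[str, "asyncio.Task[Any]"] = {}

    async def wrap(
        self, txn_id: str, func: Callable[..., Awaitable[Any]], *args: Any
    ) -> Any:
        task = self._pending.get(txn_id)
        if task is None:
            task = asyncio.ensure_future(func(*args))
            self._pending[txn_id] = task
        # shield() so that cancelling one caller does not cancel the shared work.
        return await asyncio.shield(task)


async def main() -> None:
    calls = 0

    async def handle(payload: str) -> Tuple[int, str]:
        nonlocal calls
        calls += 1
        await asyncio.sleep(0.01)
        return 200, payload

    cache = ResponseCacheSketch()
    results = await asyncio.gather(
        cache.wrap("txn-1", handle, "hello"),
        cache.wrap("txn-1", handle, "hello"),  # deduplicated: shares the first call
    )
    assert results == [(200, "hello"), (200, "hello")] and calls == 1


asyncio.run(main())
```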
@ -43,7 +43,7 @@ from synapse.replication.tcp.streams.events import (
     EventsStreamEventRow,
     EventsStreamRow,
 )
-from synapse.types import PersistedEventPosition, ReadReceipt, UserID
+from synapse.types import PersistedEventPosition, ReadReceipt, StreamKeyType, UserID
 from synapse.util.async_helpers import Linearizer, timeout_deferred
 from synapse.util.metrics import Measure
@ -153,19 +153,19 @@ class ReplicationDataHandler:
         if stream_name == TypingStream.NAME:
             self._typing_handler.process_replication_rows(token, rows)
             self.notifier.on_new_event(
-                "typing_key", token, rooms=[row.room_id for row in rows]
+                StreamKeyType.TYPING, token, rooms=[row.room_id for row in rows]
             )
         elif stream_name == PushRulesStream.NAME:
             self.notifier.on_new_event(
-                "push_rules_key", token, users=[row.user_id for row in rows]
+                StreamKeyType.PUSH_RULES, token, users=[row.user_id for row in rows]
             )
         elif stream_name in (AccountDataStream.NAME, TagAccountDataStream.NAME):
             self.notifier.on_new_event(
-                "account_data_key", token, users=[row.user_id for row in rows]
+                StreamKeyType.ACCOUNT_DATA, token, users=[row.user_id for row in rows]
             )
         elif stream_name == ReceiptsStream.NAME:
             self.notifier.on_new_event(
-                "receipt_key", token, rooms=[row.room_id for row in rows]
+                StreamKeyType.RECEIPT, token, rooms=[row.room_id for row in rows]
             )
             await self._pusher_pool.on_new_receipts(
                 token, token, {row.room_id for row in rows}
@ -173,14 +173,18 @@ class ReplicationDataHandler:
         elif stream_name == ToDeviceStream.NAME:
             entities = [row.entity for row in rows if row.entity.startswith("@")]
             if entities:
-                self.notifier.on_new_event("to_device_key", token, users=entities)
+                self.notifier.on_new_event(
+                    StreamKeyType.TO_DEVICE, token, users=entities
+                )
         elif stream_name == DeviceListsStream.NAME:
             all_room_ids: Set[str] = set()
             for row in rows:
                 if row.entity.startswith("@"):
                     room_ids = await self.store.get_rooms_for_user(row.entity)
                     all_room_ids.update(room_ids)
-            self.notifier.on_new_event("device_list_key", token, rooms=all_room_ids)
+            self.notifier.on_new_event(
+                StreamKeyType.DEVICE_LIST, token, rooms=all_room_ids
+            )
         elif stream_name == GroupServerStream.NAME:
             self.notifier.on_new_event(
                 "groups_key", token, users=[row.user_id for row in rows]
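The string keys being replaced in these hunks become members of `StreamKeyType` in `synapse.types`. The member values below are taken from the old literals visible in this diff; whether the real class is an `Enum` or a set of plain string constants is not shown here, so the `str`-backed enum is just one hedged way to express the same idea: named members are typo-proof while staying compatible with code that still compares against the old strings.

```python
from enum import Enum


class StreamKeyTypeSketch(str, Enum):
    """Illustrative stand-in for synapse.types.StreamKeyType."""

    TYPING = "typing_key"
    RECEIPT = "receipt_key"
    ACCOUNT_DATA = "account_data_key"
    PUSH_RULES = "push_rules_key"
    TO_DEVICE = "to_device_key"
    DEVICE_LIST = "device_list_key"


# A str-backed enum still compares equal to the old literal values.
assert StreamKeyTypeSketch.TYPING == "typing_key"
```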
Some files were not shown because too many files have changed in this diff.