Synapse 1.3.0 (2019-08-15)
==========================

Bugfixes
--------

- Fix 500 Internal Server Error on `publicRooms` when the public room list was
  cached. ([\#5851](https://github.com/matrix-org/synapse/issues/5851))
 
Synapse 1.3.0rc1 (2019-08-13)
=============================

Features
--------

- Use `M_USER_DEACTIVATED` instead of `M_UNKNOWN` as the errcode when a deactivated user attempts to log in. ([\#5686](https://github.com/matrix-org/synapse/issues/5686))
- Add sd_notify hooks to ease systemd integration and allow usage of `Type=notify`. ([\#5732](https://github.com/matrix-org/synapse/issues/5732))
- Synapse will no longer serve any media repo admin endpoints when `enable_media_repo` is set to False in the configuration. If a media repo worker is used, the admin APIs relating to the media repo will be served from it instead. ([\#5754](https://github.com/matrix-org/synapse/issues/5754), [\#5848](https://github.com/matrix-org/synapse/issues/5848))
- Synapse can now be configured to not join remote rooms above a given "complexity" (currently, the number of state events) over federation. This option can be used to prevent adverse performance on resource-constrained homeservers. ([\#5783](https://github.com/matrix-org/synapse/issues/5783))
- Allow defining HTML templates to serve to the user on account renewal attempts when using the account validity feature. ([\#5807](https://github.com/matrix-org/synapse/issues/5807))
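
With the sd_notify hooks in place, a systemd unit for Synapse can declare `Type=notify` so systemd only treats the service as started once Synapse reports readiness. A minimal unit sketch; the user, paths, and `ExecStart` line are illustrative assumptions, not the packaged unit:

```ini
[Unit]
Description=Synapse Matrix homeserver

[Service]
# Type=notify relies on the new sd_notify hooks; NotifyAccess=main lets the
# main homeserver process deliver the readiness notification.
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
ExecStart=/opt/venvs/matrix-synapse/bin/synctl start --no-daemonize
Restart=on-abort

[Install]
WantedBy=multi-user.target
```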
 
Bugfixes
--------

- Fix UISIs during homeserver outage. ([\#5693](https://github.com/matrix-org/synapse/issues/5693), [\#5789](https://github.com/matrix-org/synapse/issues/5789))
- Fix stack overflow in server key lookup code. ([\#5724](https://github.com/matrix-org/synapse/issues/5724))
- start.sh no longer uses a deprecated CLI option. ([\#5725](https://github.com/matrix-org/synapse/issues/5725))
- Log when we receive an event receipt from an unexpected origin. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
- Fix Debian packaging scripts to correctly build sid packages. ([\#5775](https://github.com/matrix-org/synapse/issues/5775))
- Correctly handle redactions of redactions. ([\#5788](https://github.com/matrix-org/synapse/issues/5788))
- Return 404 instead of 403 when accessing `/rooms/{roomId}/event/{eventId}` for an event without the appropriate permissions. ([\#5798](https://github.com/matrix-org/synapse/issues/5798))
- Fix the check that a tombstone is a state event in push rules. ([\#5804](https://github.com/matrix-org/synapse/issues/5804))
- Fix an error when trying to log in as a deactivated user when using a worker to handle login. ([\#5806](https://github.com/matrix-org/synapse/issues/5806))
- Fix a bug where a user's `/sync` stream could get wedged in rare circumstances. ([\#5825](https://github.com/matrix-org/synapse/issues/5825))
- Fix the purge_remote_media.sh script. ([\#5839](https://github.com/matrix-org/synapse/issues/5839))
 
Deprecations and Removals
-------------------------

- Synapse no longer accepts the `-v`/`--verbose`, `-f`/`--log-file`, or `--log-config` command line flags, and removes the deprecated `verbose` and `log_file` configuration file options. Users of these options should migrate them into the dedicated log configuration. ([\#5678](https://github.com/matrix-org/synapse/issues/5678), [\#5729](https://github.com/matrix-org/synapse/issues/5729))
- Remove the non-functional `expire_access_token` setting. ([\#5782](https://github.com/matrix-org/synapse/issues/5782))
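
With the `verbose` and `log_file` options gone, logging is driven entirely by the dedicated log config file referenced from the homeserver configuration, which follows the standard Python `logging` dictConfig YAML schema. A hedged sketch of such a file; the handler names, formatter, and log path are illustrative assumptions:

```yaml
# Standard Python logging dictConfig schema (version 1).
version: 1

formatters:
  precise:
    format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'

handlers:
  file:
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: /var/log/matrix-synapse/homeserver.log
    maxBytes: 104857600
    backupCount: 3

root:
  level: INFO
  handlers: [file]
```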
 
Internal Changes
----------------

- Make Jaeger fully configurable. ([\#5694](https://github.com/matrix-org/synapse/issues/5694))
- Add precautionary measures to prevent future abuse of `window.opener` in the default welcome page. ([\#5695](https://github.com/matrix-org/synapse/issues/5695))
- Reduce database I/O usage by optimising queries for current membership. ([\#5706](https://github.com/matrix-org/synapse/issues/5706), [\#5738](https://github.com/matrix-org/synapse/issues/5738), [\#5746](https://github.com/matrix-org/synapse/issues/5746), [\#5752](https://github.com/matrix-org/synapse/issues/5752), [\#5770](https://github.com/matrix-org/synapse/issues/5770), [\#5774](https://github.com/matrix-org/synapse/issues/5774), [\#5792](https://github.com/matrix-org/synapse/issues/5792), [\#5793](https://github.com/matrix-org/synapse/issues/5793))
- Improve caching when fetching `get_filtered_current_state_ids`. ([\#5713](https://github.com/matrix-org/synapse/issues/5713))
- Don't accept opentracing data from clients. ([\#5715](https://github.com/matrix-org/synapse/issues/5715))
- Speed up PostgreSQL unit tests in CI. ([\#5717](https://github.com/matrix-org/synapse/issues/5717))
- Update the coding style document. ([\#5719](https://github.com/matrix-org/synapse/issues/5719))
- Improve database query performance when recording retry intervals for remote hosts. ([\#5720](https://github.com/matrix-org/synapse/issues/5720))
- Add a set of opentracing utils. ([\#5722](https://github.com/matrix-org/synapse/issues/5722))
- Cache the result of get_version_string to reduce the overhead of `/version` federation requests. ([\#5730](https://github.com/matrix-org/synapse/issues/5730))
- Return `user_type` in admin API user endpoint results. ([\#5731](https://github.com/matrix-org/synapse/issues/5731))
- Don't package the sytest test blacklist file. ([\#5733](https://github.com/matrix-org/synapse/issues/5733))
- Replace uses of returnValue with plain return, as returnValue is not needed on Python 3. ([\#5736](https://github.com/matrix-org/synapse/issues/5736))
- Blacklist some flaky tests in worker mode. ([\#5740](https://github.com/matrix-org/synapse/issues/5740))
- Fix some error cases in the caching layer. ([\#5749](https://github.com/matrix-org/synapse/issues/5749))
- Add a Prometheus metric for pending cache lookups. ([\#5750](https://github.com/matrix-org/synapse/issues/5750))
- Stop trying to fetch events with event_id=None. ([\#5753](https://github.com/matrix-org/synapse/issues/5753))
- Convert RedactionTestCase to modern test style. ([\#5768](https://github.com/matrix-org/synapse/issues/5768))
- Allow looping calls to be given arguments. ([\#5780](https://github.com/matrix-org/synapse/issues/5780))
- Set the logs emitted when checking typing and presence timeouts to DEBUG level, not INFO. ([\#5785](https://github.com/matrix-org/synapse/issues/5785))
- Remove DelayedCall debugging from the test suite, as it is no longer required in the vast majority of Synapse's tests. ([\#5787](https://github.com/matrix-org/synapse/issues/5787))
- Remove some spurious exceptions from the logs where we failed to talk to a remote server. ([\#5790](https://github.com/matrix-org/synapse/issues/5790))
- Improve performance when making `.well-known` requests by sharing the SSL options between requests. ([\#5794](https://github.com/matrix-org/synapse/issues/5794))
- Disable codecov GitHub comments on PRs. ([\#5796](https://github.com/matrix-org/synapse/issues/5796))
- Don't allow clients to send tombstone events that reference the room they are sent in. ([\#5801](https://github.com/matrix-org/synapse/issues/5801))
- Deny redactions of events sent in a different room. ([\#5802](https://github.com/matrix-org/synapse/issues/5802))
- Deny sending well-known state types as non-state events. ([\#5805](https://github.com/matrix-org/synapse/issues/5805))
- Handle incorrectly encoded query params correctly by returning a 400. ([\#5808](https://github.com/matrix-org/synapse/issues/5808))
- Handle a pusher being deleted during processing rather than logging an exception. ([\#5809](https://github.com/matrix-org/synapse/issues/5809))
- Return a 502, not a 500, when failing to reach any remote server. ([\#5810](https://github.com/matrix-org/synapse/issues/5810))
- Reduce global pauses in the events stream caused by expensive state resolution during persistence. ([\#5826](https://github.com/matrix-org/synapse/issues/5826))
- Add a lower bound to the well-known lookup cache time to avoid repeated lookups. ([\#5836](https://github.com/matrix-org/synapse/issues/5836))
- Whitelist history visibility sytests in worker mode tests. ([\#5843](https://github.com/matrix-org/synapse/issues/5843))
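
The lower bound on the well-known cache time means that even when a server advertises a very short TTL, the result is cached for at least a minimum period, so clients of the cache cannot be forced into a lookup on every request. A toy sketch of the idea; the class name and the floor value are illustrative, not Synapse's actual implementation:

```python
import time

# Assumed floor for illustration only; Synapse's real value may differ.
WELL_KNOWN_MIN_CACHE_PERIOD = 5 * 60


class WellKnownCache:
    """Tiny TTL cache: stores (expiry, value) per server and clamps the
    TTL to a minimum period so tiny TTLs don't cause repeated lookups."""

    def __init__(self, min_period=WELL_KNOWN_MIN_CACHE_PERIOD):
        self._min_period = min_period
        self._entries = {}

    def set(self, server_name, result, ttl):
        # Enforce the lower bound: never cache for less than min_period.
        ttl = max(ttl, self._min_period)
        self._entries[server_name] = (time.monotonic() + ttl, result)

    def get(self, server_name):
        entry = self._entries.get(server_name)
        if entry is None:
            return None
        expiry, result = entry
        if time.monotonic() > expiry:
            # Entry expired: evict and report a miss.
            del self._entries[server_name]
            return None
        return result
```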

Merge tag 'v1.3.0'

Commit 6382914587 by Brendan Abolivier, 2019-08-15 12:37:45 +01:00.
237 changed files with 4308 additions and 2737 deletions.

Buildkite CI pipeline config (filename inferred from content):

```diff
@@ -49,14 +49,15 @@ steps:
   - command:
-      - "python -m pip install tox"
+      - "apt-get update && apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev"
+      - "python3.5 -m pip install tox"
       - "tox -e py35-old,codecov"
     label: ":python: 3.5 / SQLite / Old Deps"
     env:
       TRIAL_FLAGS: "-j 2"
     plugins:
       - docker#v3.0.1:
-          image: "python:3.5"
+          image: "ubuntu:xenial"  # We use xenial to get an old sqlite and python
           propagate-environment: true
     retry:
       automatic:
@@ -117,8 +118,10 @@ steps:
           limit: 2
   - label: ":python: 3.5 / :postgres: 9.5"
+    agents:
+      queue: "medium"
     env:
-      TRIAL_FLAGS: "-j 4"
+      TRIAL_FLAGS: "-j 8"
     command:
       - "bash -c 'python -m pip install tox && python -m tox -e py35-postgres,codecov'"
     plugins:
@@ -134,8 +137,10 @@ steps:
           limit: 2
   - label: ":python: 3.7 / :postgres: 9.5"
+    agents:
+      queue: "medium"
     env:
-      TRIAL_FLAGS: "-j 4"
+      TRIAL_FLAGS: "-j 8"
     command:
       - "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
     plugins:
@@ -151,8 +156,10 @@ steps:
           limit: 2
   - label: ":python: 3.7 / :postgres: 11"
+    agents:
+      queue: "medium"
     env:
-      TRIAL_FLAGS: "-j 4"
+      TRIAL_FLAGS: "-j 8"
     command:
       - "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
     plugins:
@@ -214,8 +221,10 @@ steps:
     env:
       POSTGRES: "1"
       WORKERS: "1"
+      BLACKLIST: "synapse-blacklist-with-workers"
     command:
       - "bash .buildkite/merge_base_branch.sh"
+      - "bash -c 'cat /src/sytest-blacklist /src/.buildkite/worker-blacklist > /src/synapse-blacklist-with-workers'"
       - "bash /synapse_sytest.sh"
     plugins:
       - docker#v3.0.1:
@@ -223,7 +232,6 @@ steps:
           propagate-environment: true
           always-pull: true
           workdir: "/src"
-          soft_fail: true
     retry:
       automatic:
         - exit_status: -1
```

New SyTest worker-mode blacklist file, `.buildkite/worker-blacklist`:

```diff
@@ -0,0 +1,30 @@
+# This file serves as a blacklist for SyTest tests that we expect will fail in
+# Synapse when run under worker mode. For more details, see sytest-blacklist.
+Message history can be paginated
+Can re-join room if re-invited
+/upgrade creates a new room
+The only membership state included in an initial sync is for all the senders in the timeline
+Local device key changes get to remote servers
+If remote user leaves room we no longer receive device updates
+Forgotten room messages cannot be paginated
+Inbound federation can get public room list
+Members from the gap are included in gappy incr LL sync
+Leaves are present in non-gapped incremental syncs
+Old leaves are present in gapped incremental syncs
+User sees updates to presence from other users in the incremental sync.
+Gapped incremental syncs include all state changes
+Old members are included in gappy incr LL sync if they start speaking
```

Codecov configuration (filename inferred; this is the change disabling codecov PR comments):

```diff
@@ -1,5 +1,4 @@
-comment:
-    layout: "diff"
+comment: off
 coverage:
   status:
```

`.gitignore` (1 line added):

```diff
@@ -16,6 +16,7 @@ _trial_temp*/
 /*.log
 /*.log.config
 /*.pid
+/.python-version
 /*.signing.key
 /env/
 /homeserver*.yaml
```

Changelog file: the Synapse 1.3.0 and 1.3.0rc1 entries shown at the top of this page were prepended to the file (87 lines added).

Python package manifest (filename inferred; moves `sytest-blacklist` from the include list to the exclude list):

```diff
@@ -7,7 +7,6 @@ include demo/README
 include demo/demo.tls.dh
 include demo/*.py
 include demo/*.sh
-include sytest-blacklist
 recursive-include synapse/storage/schema *.sql
 recursive-include synapse/storage/schema *.sql.postgres
@@ -34,6 +33,7 @@ exclude Dockerfile
 exclude .dockerignore
 exclude test_postgresql.sh
 exclude .editorconfig
+exclude sytest-blacklist
 include pyproject.toml
 recursive-include changelog.d *
```

The purge_remote_media.sh fix noted in the changelog (`-v POST` treated `POST` as a URL; `-X POST` sets the request method):

```diff
@@ -51,4 +51,4 @@ TOKEN=$(sql "SELECT token FROM access_tokens WHERE user_id='$ADMIN' ORDER BY id
 # finally start pruning media:
 ###############################################################################
 set -x # for debugging the generated string
-curl --header "Authorization: Bearer $TOKEN" -v POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"
+curl --header "Authorization: Bearer $TOKEN" -X POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"
```

systemd worker unit (inferred from `BindsTo=matrix-synapse.service`), switched to `Type=notify`:

```diff
@@ -4,7 +4,8 @@ After=matrix-synapse.service
 BindsTo=matrix-synapse.service

 [Service]
-Type=simple
+Type=notify
+NotifyAccess=main
 User=matrix-synapse
 WorkingDirectory=/var/lib/matrix-synapse
 EnvironmentFile=/etc/default/matrix-synapse
```

Main systemd unit, likewise switched to `Type=notify`:

```diff
@@ -2,7 +2,8 @@
 Description=Synapse Matrix Homeserver

 [Service]
-Type=simple
+Type=notify
+NotifyAccess=main
 User=matrix-synapse
 WorkingDirectory=/var/lib/matrix-synapse
 EnvironmentFile=/etc/default/matrix-synapse
```

A third systemd unit, which also gains an `ExecReload` directive:

```diff
@@ -14,7 +14,9 @@
 Description=Synapse Matrix homeserver

 [Service]
-Type=simple
+Type=notify
+NotifyAccess=main
+ExecReload=/bin/kill -HUP $MAINPID
 Restart=on-abort
 User=synapse
```

`debian/changelog` (10 lines changed; reconstructed from the flattened side-by-side view):

```diff
@@ -1,8 +1,7 @@
-matrix-synapse-py3 (1.2.1) stable; urgency=medium
+matrix-synapse-py3 (1.3.0) stable; urgency=medium

-  * New synapse release 1.2.1.
+  [ Andrew Morgan ]
+  * Remove libsqlite3-dev from required build dependencies.

- -- Synapse Packaging team <packages@matrix.org>  Fri, 26 Jul 2019 11:32:47 +0100

 matrix-synapse-py3 (1.2.0) stable; urgency=medium
@@ -14,8 +13,9 @@ matrix-synapse-py3 (1.2.0) stable; urgency=medium
   [ Synapse Packaging team ]
   * New synapse release 1.2.0.
+  * New synapse release 1.3.0.

- -- Synapse Packaging team <packages@matrix.org>  Thu, 25 Jul 2019 14:10:07 +0100
+ -- Synapse Packaging team <packages@matrix.org>  Thu, 15 Aug 2019 12:04:23 +0100

 matrix-synapse-py3 (1.1.0) stable; urgency=medium
```

debian/control

@@ -15,7 +15,6 @@ Build-Depends:
  python3-setuptools,
  python3-pip,
  python3-venv,
- libsqlite3-dev,
  tar,
 Standards-Version: 3.9.8
 Homepage: https://github.com/matrix-org/synapse


@@ -120,7 +120,6 @@ for port in 8080 8081 8082; do
         python3 -m synapse.app.homeserver \
             --config-path "$DIR/etc/$port.config" \
             -D \
-            -vv \
     popd
 done


@@ -42,6 +42,11 @@ RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
 ###
 FROM ${distro}
+# Get the distro we want to pull from as a dynamic build variable
+# (We need to define it in each build stage)
+ARG distro=""
+ENV distro ${distro}
 # Install the build dependencies
 #
 # NB: keep this list in sync with the list of build-deps in debian/control


@@ -4,7 +4,8 @@
 set -ex
-DIST=`lsb_release -c -s`
+# Get the codename from distro env
+DIST=`cut -d ':' -f2 <<< $distro`
 # we get a read-only copy of the source: make a writeable copy
 cp -aT /synapse/source /synapse/build


@@ -1,4 +1,8 @@
-# Code Style
+Code Style
+==========
+
+Formatting tools
+----------------
 The Synapse codebase uses a number of code formatting tools in order to
 quickly and automatically check for formatting (and sometimes logical) errors
@@ -6,18 +10,18 @@ in code.
 The necessary tools are detailed below.
-## Formatting tools
+- **black**
-The Synapse codebase uses [black](https://pypi.org/project/black/) as an
+  The Synapse codebase uses `black <https://pypi.org/project/black/>`_ as an
 opinionated code formatter, ensuring all comitted code is properly
 formatted.
 First install ``black`` with::
     pip install --upgrade black
-Have ``black`` auto-format your code (it shouldn't change any
-functionality) with::
+Have ``black`` auto-format your code (it shouldn't change any functionality)
+with::
     black . --exclude="\.tox|build|env"
@@ -54,17 +58,16 @@ functionality is supported in your editor for a more convenient development
 workflow. It is not, however, recommended to run ``flake8`` on save as it
 takes a while and is very resource intensive.
-## General rules
+General rules
+-------------
 - **Naming**:
   - Use camel case for class and type names
   - Use underscores for functions and variables.
-- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
-- **Comments**: should follow the `google code style
-  <http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
+- **Docstrings**: should follow the `google code style
+  <https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
   This is so that we can generate documentation with `sphinx
   <http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
   `examples
@@ -73,6 +76,8 @@ takes a while and is very resource intensive.
 - **Imports**:
+  - Imports should be sorted by ``isort`` as described above.
+
   - Prefer to import classes and functions rather than packages or modules.
     Example::
@@ -92,25 +97,84 @@ takes a while and is very resource intensive.
     This goes against the advice in the Google style guide, but it means that
     errors in the name are caught early (at import time).
-  - Multiple imports from the same package can be combined onto one line::
-
-      from synapse.types import GroupID, RoomID, UserID
-
-    An effort should be made to keep the individual imports in alphabetical
-    order.
-
-    If the list becomes long, wrap it with parentheses and split it over
-    multiple lines.
-
-  - As per `PEP-8 <https://www.python.org/dev/peps/pep-0008/#imports>`_,
-    imports should be grouped in the following order, with a blank line between
-    each group:
-
-    1. standard library imports
-    2. related third party imports
-    3. local application/library specific imports
-
-  - Imports within each group should be sorted alphabetically by module name.
-
   - Avoid wildcard imports (``from synapse.types import *``) and relative
     imports (``from .types import UserID``).
+
+Configuration file format
+-------------------------
+
+The `sample configuration file <./sample_config.yaml>`_ acts as a reference to
+Synapse's configuration options for server administrators. Remember that many
+readers will be unfamiliar with YAML and server administration in general, so
+that it is important that the file be as easy to understand as possible, which
+includes following a consistent format.
+
+Some guidelines follow:
+
+* Sections should be separated with a heading consisting of a single line
+  prefixed and suffixed with ``##``. There should be **two** blank lines
+  before the section header, and **one** after.
+
+* Each option should be listed in the file with the following format:
+
+  * A comment describing the setting. Each line of this comment should be
+    prefixed with a hash (``#``) and a space.
+
+    The comment should describe the default behaviour (ie, what happens if
+    the setting is omitted), as well as what the effect will be if the
+    setting is changed.
+
+    Often, the comment end with something like "uncomment the
+    following to \<do action>".
+
+  * A line consisting of only ``#``.
+
+  * A commented-out example setting, prefixed with only ``#``.
+
+    For boolean (on/off) options, convention is that this example should be
+    the *opposite* to the default (so the comment will end with "Uncomment
+    the following to enable [or disable] \<feature\>." For other options,
+    the example should give some non-default value which is likely to be
+    useful to the reader.
+
+* There should be a blank line between each option.
+
+* Where several settings are grouped into a single dict, *avoid* the
+  convention where the whole block is commented out, resulting in comment
+  lines starting ``# #``, as this is hard to read and confusing to
+  edit. Instead, leave the top-level config option uncommented, and follow
+  the conventions above for sub-options. Ensure that your code correctly
+  handles the top-level option being set to ``None`` (as it will be if no
+  sub-options are enabled).
+
+* Lines should be wrapped at 80 characters.
+
+Example::
+
+    ## Frobnication ##
+
+    # The frobnicator will ensure that all requests are fully frobnicated.
+    # To enable it, uncomment the following.
+    #
+    #frobnicator_enabled: true
+
+    # By default, the frobnicator will frobnicate with the default frobber.
+    # The following will make it use an alternative frobber.
+    #
+    #frobincator_frobber: special_frobber
+
+    # Settings for the frobber
+    #
+    frobber:
+      # frobbing speed. Defaults to 1.
+      #
+      #speed: 10

+      # frobbing distance. Defaults to 1000.
+      #
+      #distance: 100
+
+Note that the sample configuration is generated from the synapse code and is
+maintained by a script, ``scripts-dev/generate_sample_config``. Making sure
+that the output from this script matches the desired format is left as an
+exercise for the reader!
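The guideline above about handling a top-level option set to `None` is easy to get wrong: in YAML, a key whose sub-options are all commented out parses as `None`, not as an empty dict. A minimal sketch of defensive reading — the `frobber` option is the hypothetical example from the guideline, not a real Synapse setting:

```python
def frobber_speed(config: dict) -> int:
    """Read frobber.speed, tolerating "frobber:" with no sub-options."""
    # "frobber:" with every sub-option commented out parses as None,
    # so normalise it to an empty dict before looking anything up.
    frobber = config.get("frobber") or {}
    return frobber.get("speed", 1)
```

Calling `frobber_speed({"frobber": None})` then falls back to the documented default of 1 instead of raising `AttributeError`.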


@@ -148,7 +148,7 @@ call any other functions.
         d = more_stuff()
         result = yield d  # also fine, of course
-        defer.returnValue(result)
+        return result
 def nonInlineCallbacksFun():
     logger.debug("just a wrapper really")
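The swap of `defer.returnValue(result)` for a plain `return result` here (and throughout the codebase below) works because on Python 3 a generator may return a value directly (PEP 380), and Twisted's `inlineCallbacks` retrieves it from the resulting `StopIteration`. A stdlib-only illustration of the mechanism:

```python
def coroutine_like():
    yield "intermediate"   # stands in for `yield deferred`
    return "final result"  # PEP 380: carried in StopIteration.value

gen = coroutine_like()
next(gen)  # advance past the yield
try:
    next(gen)
except StopIteration as stop:
    result = stop.value  # what inlineCallbacks hands back to the caller
```

On Python 2 a `return` with a value inside a generator was a syntax error, which is why `defer.returnValue` existed in the first place.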


@@ -278,6 +278,23 @@ listeners:
 # Used by phonehome stats to group together related servers.
 #server_context: context
+# Resource-constrained Homeserver Settings
+#
+# If limit_remote_rooms.enabled is True, the room complexity will be
+# checked before a user joins a new remote room. If it is above
+# limit_remote_rooms.complexity, it will disallow joining or
+# instantly leave.
+#
+# limit_remote_rooms.complexity_error can be set to customise the text
+# displayed to the user when a room above the complexity threshold has
+# its join cancelled.
+#
+# Uncomment the below lines to enable:
+#limit_remote_rooms:
+#  enabled: True
+#  complexity: 1.0
+#  complexity_error: "This room is too complex."
+
 # Whether to require a user to be in the room to add an alias to it.
 # Defaults to 'true'.
 #
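For reference, the "complexity" checked by `limit_remote_rooms` is derived from the room's state-event count in the 1.3.0 implementation; the normalisation constant of 500 below is our reading of that code, so treat it as an assumption rather than a guarantee:

```python
def room_complexity_v1(num_state_events: int) -> float:
    # One unit of complexity per 500 state events (assumed constant).
    return num_state_events / 500.0

def may_join_remote_room(num_state_events: int, limit: float) -> bool:
    """Mirror of the limit_remote_rooms check: joins are refused when
    the room's complexity exceeds the configured limit."""
    return room_complexity_v1(num_state_events) <= limit
```

With the example config above (`complexity: 1.0`), a remote room with 400 state events is joinable while one with 1000 is refused.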
@@ -548,6 +565,13 @@ log_config: "CONFDIR/SERVERNAME.log.config"
+## Media Store ##
+
+# Enable the media store service in the Synapse master. Uncomment the
+# following if you are using a separate media store worker.
+#
+#enable_media_repo: false
+
 # Directory where uploaded images and attachments are stored.
 #
 media_store_path: "DATADIR/media_store"
@@ -785,6 +809,16 @@ uploads_path: "DATADIR/uploads"
 #   period: 6w
 #   renew_at: 1w
 #   renew_email_subject: "Renew your %(app)s account"
+#
+#   # Directory in which Synapse will try to find the HTML files to serve to the
+#   # user when trying to renew an account. Optional, defaults to
+#   # synapse/res/templates.
+#   template_dir: "res/templates"
+#
+#   # HTML to be displayed to the user after they successfully renewed their
+#   # account. Optional.
+#   account_renewed_html_path: "account_renewed.html"
+#
+#   # HTML to be displayed when the user tries to renew an account with an invalid
+#   # renewal token. Optional.
+#   invalid_token_html_path: "invalid_token.html"
 # Time that a user's session remains valid for, after they log in.
 #
@@ -925,10 +959,6 @@ uploads_path: "DATADIR/uploads"
 #
 # macaroon_secret_key: <PRIVATE STRING>
-# Used to enable access token expiration.
-#
-#expire_access_token: False
-
 # a secret which is used to calculate HMACs for form values, to stop
 # falsification of values. Must be specified for the User Consent
 # forms to work.
@@ -1430,3 +1460,19 @@ opentracing:
     #
     #homeserver_whitelist:
     #  - ".*"
+
+    # Jaeger can be configured to sample traces at different rates.
+    # All configuration options provided by Jaeger can be set here.
+    # Jaeger's configuration mostly related to trace sampling which
+    # is documented here:
+    # https://www.jaegertracing.io/docs/1.13/sampling/.
+    #
+    #jaeger_config:
+    #  sampler:
+    #    type: const
+    #    param: 1
+    #  # Logging whether spans were started and reported
+    #  #
+    #  logging:
+    #    false


@@ -206,6 +206,13 @@ Handles the media repository. It can handle all endpoints starting with::
     /_matrix/media/
+And the following regular expressions matching media-specific administration
+APIs::
+
+    ^/_synapse/admin/v1/purge_media_cache$
+    ^/_synapse/admin/v1/room/.*/media$
+    ^/_synapse/admin/v1/quarantine_media/.*$
+
 You should also set ``enable_media_repo: False`` in the shared configuration
 file to stop the main synapse running background jobs related to managing the
 media repository.
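A quick way to check which admin requests the media worker will now pick up is to match request paths against those three patterns; a small sketch (the helper name is ours, not part of Synapse):

```python
import re

# The media-specific admin API patterns documented above.
MEDIA_ADMIN_PATTERNS = [
    r"^/_synapse/admin/v1/purge_media_cache$",
    r"^/_synapse/admin/v1/room/.*/media$",
    r"^/_synapse/admin/v1/quarantine_media/.*$",
]

def handled_by_media_worker(path: str) -> bool:
    """True if the request path matches one of the media admin patterns."""
    return any(re.match(p, path) for p in MEDIA_ADMIN_PATTERNS)
```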


@@ -35,4 +35,4 @@ try:
 except ImportError:
     pass
-__version__ = "1.2.1"
+__version__ = "1.3.0"


@@ -128,7 +128,7 @@ class Auth(object):
         )
         self._check_joined_room(member, user_id, room_id)
-        defer.returnValue(member)
+        return member
     @defer.inlineCallbacks
     def check_user_was_in_room(self, room_id, user_id):
@@ -156,13 +156,13 @@ class Auth(object):
         if forgot:
             raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
-        defer.returnValue(member)
+        return member
     @defer.inlineCallbacks
     def check_host_in_room(self, room_id, host):
         with Measure(self.clock, "check_host_in_room"):
             latest_event_ids = yield self.store.is_host_joined(room_id, host)
-            defer.returnValue(latest_event_ids)
+            return latest_event_ids
     def _check_joined_room(self, member, user_id, room_id):
         if not member or member.membership != Membership.JOIN:
@@ -219,9 +219,7 @@ class Auth(object):
                 device_id="dummy-device",  # stubbed
             )
-            defer.returnValue(
-                synapse.types.create_requester(user_id, app_service=app_service)
-            )
+            return synapse.types.create_requester(user_id, app_service=app_service)
         user_info = yield self.get_user_by_access_token(access_token, rights)
         user = user_info["user"]
@@ -262,11 +260,9 @@ class Auth(object):
             request.authenticated_entity = user.to_string()
-            defer.returnValue(
-                synapse.types.create_requester(
-                    user, token_id, is_guest, device_id, app_service=app_service
-                )
-            )
+            return synapse.types.create_requester(
+                user, token_id, is_guest, device_id, app_service=app_service
+            )
         except KeyError:
             raise MissingClientTokenError()
@@ -276,25 +272,25 @@ class Auth(object):
             self.get_access_token_from_request(request)
         )
         if app_service is None:
-            defer.returnValue((None, None))
+            return (None, None)
         if app_service.ip_range_whitelist:
             ip_address = IPAddress(self.hs.get_ip_from_request(request))
             if ip_address not in app_service.ip_range_whitelist:
-                defer.returnValue((None, None))
+                return (None, None)
         if b"user_id" not in request.args:
-            defer.returnValue((app_service.sender, app_service))
+            return (app_service.sender, app_service)
         user_id = request.args[b"user_id"][0].decode("utf8")
         if app_service.sender == user_id:
-            defer.returnValue((app_service.sender, app_service))
+            return (app_service.sender, app_service)
         if not app_service.is_interested_in_user(user_id):
             raise AuthError(403, "Application service cannot masquerade as this user.")
         if not (yield self.store.get_user_by_id(user_id)):
             raise AuthError(403, "Application service has not registered this user")
-        defer.returnValue((user_id, app_service))
+        return (user_id, app_service)
     @defer.inlineCallbacks
     def get_user_by_access_token(self, token, rights="access"):
@@ -330,7 +326,7 @@ class Auth(object):
                     msg="Access token has expired", soft_logout=True
                 )
-            defer.returnValue(r)
+            return r
         # otherwise it needs to be a valid macaroon
         try:
@@ -378,7 +374,7 @@ class Auth(object):
                 }
             else:
                 raise RuntimeError("Unknown rights setting %s", rights)
-            defer.returnValue(ret)
+            return ret
         except (
             _InvalidMacaroonException,
             pymacaroons.exceptions.MacaroonException,
@@ -414,21 +410,16 @@ class Auth(object):
         try:
             user_id = self.get_user_id_from_macaroon(macaroon)
-            has_expiry = False
             guest = False
             for caveat in macaroon.caveats:
-                if caveat.caveat_id.startswith("time "):
-                    has_expiry = True
-                elif caveat.caveat_id == "guest = true":
+                if caveat.caveat_id == "guest = true":
                     guest = True
-            self.validate_macaroon(
-                macaroon, rights, self.hs.config.expire_access_token, user_id=user_id
-            )
+            self.validate_macaroon(macaroon, rights, user_id=user_id)
         except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
             raise InvalidClientTokenError("Invalid macaroon passed.")
-        if not has_expiry and rights == "access":
+        if rights == "access":
             self.token_cache[token] = (user_id, guest)
         return user_id, guest
@@ -454,7 +445,7 @@ class Auth(object):
                 return caveat.caveat_id[len(user_prefix) :]
         raise InvalidClientTokenError("No user caveat in macaroon")
-    def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
+    def validate_macaroon(self, macaroon, type_string, user_id):
         """
         validate that a Macaroon is understood by and was signed by this server.
@@ -462,7 +453,6 @@ class Auth(object):
             macaroon(pymacaroons.Macaroon): The macaroon to validate
             type_string(str): The kind of token required (e.g. "access",
                 "delete_pusher")
-            verify_expiry(bool): Whether to verify whether the macaroon has expired.
             user_id (str): The user_id required
         """
         v = pymacaroons.Verifier()
@@ -475,19 +465,7 @@ class Auth(object):
         v.satisfy_exact("type = " + type_string)
         v.satisfy_exact("user_id = %s" % user_id)
         v.satisfy_exact("guest = true")
-        # verify_expiry should really always be True, but there exist access
-        # tokens in the wild which expire when they should not, so we can't
-        # enforce expiry yet (so we have to allow any caveat starting with
-        # 'time < ' in access tokens).
-        #
-        # On the other hand, short-term login tokens (as used by CAS login, for
-        # example) have an expiry time which we do want to enforce.
-        if verify_expiry:
-            v.satisfy_general(self._verify_expiry)
-        else:
-            v.satisfy_general(lambda c: c.startswith("time < "))
+        v.satisfy_general(self._verify_expiry)
         # access_tokens include a nonce for uniqueness: any value is acceptable
         v.satisfy_general(lambda c: c.startswith("nonce = "))
@@ -506,7 +484,7 @@ class Auth(object):
     def _look_up_user_by_access_token(self, token):
         ret = yield self.store.get_user_by_access_token(token)
         if not ret:
-            defer.returnValue(None)
+            return None
         # we use ret.get() below because *lots* of unit tests stub out
         # get_user_by_access_token in a way where it only returns a couple of
@@ -518,7 +496,7 @@ class Auth(object):
             "device_id": ret.get("device_id"),
             "valid_until_ms": ret.get("valid_until_ms"),
         }
-        defer.returnValue(user_info)
+        return user_info
     def get_appservice_by_req(self, request):
         token = self.get_access_token_from_request(request)
@@ -543,7 +521,7 @@ class Auth(object):
     @defer.inlineCallbacks
     def compute_auth_events(self, event, current_state_ids, for_verification=False):
         if event.type == EventTypes.Create:
-            defer.returnValue([])
+            return []
         auth_ids = []
@@ -604,7 +582,7 @@ class Auth(object):
         if member_event.content["membership"] == Membership.JOIN:
             auth_ids.append(member_event.event_id)
-        defer.returnValue(auth_ids)
+        return auth_ids
     @defer.inlineCallbacks
     def check_can_change_room_list(self, room_id, user):
@@ -618,7 +596,7 @@ class Auth(object):
         is_admin = yield self.is_server_admin(user)
         if is_admin:
-            defer.returnValue(True)
+            return True
         user_id = user.to_string()
         yield self.check_joined_room(room_id, user_id)
@@ -712,7 +690,7 @@ class Auth(object):
             #  * The user is a guest user, and has joined the room
             # else it will throw.
             member_event = yield self.check_user_was_in_room(room_id, user_id)
-            defer.returnValue((member_event.membership, member_event.event_id))
+            return (member_event.membership, member_event.event_id)
         except AuthError:
             visibility = yield self.state.get_current_state(
                 room_id, EventTypes.RoomHistoryVisibility, ""
@@ -721,7 +699,7 @@ class Auth(object):
                 visibility
                 and visibility.content["history_visibility"] == "world_readable"
             ):
-                defer.returnValue((Membership.JOIN, None))
+                return (Membership.JOIN, None)
             return
         raise AuthError(
             403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN


@@ -61,6 +61,7 @@ class Codes(object):
     INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
     WRONG_ROOM_KEYS_VERSION = "M_WRONG_ROOM_KEYS_VERSION"
     EXPIRED_ACCOUNT = "ORG_MATRIX_EXPIRED_ACCOUNT"
+    USER_DEACTIVATED = "M_USER_DEACTIVATED"
 class CodeMessageException(RuntimeError):
@@ -151,7 +152,7 @@ class UserDeactivatedError(SynapseError):
             msg (str): The human-readable error message
         """
         super(UserDeactivatedError, self).__init__(
-            code=http_client.FORBIDDEN, msg=msg, errcode=Codes.UNKNOWN
+            code=http_client.FORBIDDEN, msg=msg, errcode=Codes.USER_DEACTIVATED
         )
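With this change, a deactivated user attempting to log in receives a meaningful errcode instead of `M_UNKNOWN`. Matrix client-server API errors are small JSON objects carrying an `errcode` and a human-readable `error` message; a sketch of the resulting response body (the helper name is ours):

```python
import json

def error_body(errcode: str, msg: str) -> str:
    """Serialise a Matrix client-server API error body."""
    return json.dumps({"errcode": errcode, "error": msg})

# What a client now sees on a 403 for a deactivated account:
body = error_body("M_USER_DEACTIVATED", "This account has been deactivated")
```

Clients can branch on the machine-readable `errcode` rather than parsing the free-text `error` string.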


@@ -132,7 +132,7 @@ class Filtering(object):
     @defer.inlineCallbacks
     def get_user_filter(self, user_localpart, filter_id):
         result = yield self.store.get_user_filter(user_localpart, filter_id)
-        defer.returnValue(FilterCollection(result))
+        return FilterCollection(result)
     def add_user_filter(self, user_localpart, user_filter):
         self.check_valid_filter(user_filter)


@@ -15,10 +15,12 @@
 import gc
 import logging
+import os
 import signal
 import sys
 import traceback
+import sdnotify
 from daemonize import Daemonize
 from twisted.internet import defer, error, reactor
@@ -242,9 +244,16 @@ def start(hs, listeners=None):
         if hasattr(signal, "SIGHUP"):
             def handle_sighup(*args, **kwargs):
+                # Tell systemd our state, if we're using it. This will silently fail if
+                # we're not using systemd.
+                sd_channel = sdnotify.SystemdNotifier()
+                sd_channel.notify("RELOADING=1")
                 for i in _sighup_callbacks:
                     i(hs)
+                sd_channel.notify("READY=1")
             signal.signal(signal.SIGHUP, handle_sighup)
         register_sighup(refresh_certificate)
@@ -260,6 +269,7 @@ def start(hs, listeners=None):
         hs.get_datastore().start_profiling()
         setup_sentry(hs)
+        setup_sdnotify(hs)
     except Exception:
         traceback.print_exc(file=sys.stderr)
         reactor = hs.get_reactor()
@@ -292,6 +302,25 @@ def setup_sentry(hs):
             scope.set_tag("worker_name", name)
+def setup_sdnotify(hs):
+    """Adds process state hooks to tell systemd what we are up to.
+    """
+    # Tell systemd our state, if we're using it. This will silently fail if
+    # we're not using systemd.
+    sd_channel = sdnotify.SystemdNotifier()
+
+    hs.get_reactor().addSystemEventTrigger(
+        "after",
+        "startup",
+        lambda: sd_channel.notify("READY=1\nMAINPID=%s" % (os.getpid())),
+    )
+
+    hs.get_reactor().addSystemEventTrigger(
+        "before", "shutdown", lambda: sd_channel.notify("STOPPING=1")
+    )
 def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
     """Replaces the resolver with one that limits the number of in flight DNS
     requests.


@@ -168,7 +168,9 @@ def start(config_options):
     )
     ps.setup()
-    reactor.callWhenRunning(_base.start, ps, config.worker_listeners)
+    reactor.addSystemEventTrigger(
+        "before", "startup", _base.start, ps, config.worker_listeners
+    )
     _base.start_worker_reactor("synapse-appservice", config)
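The move from `reactor.callWhenRunning` to a `"before", "startup"` system event trigger (repeated in each worker below) matters because Twisted fires startup triggers in phase order, so listener startup completes before any `"after", "startup"` hook — such as the sd_notify `READY=1` notification registered in `_base.py` — runs. A toy, stdlib-only model of that ordering (`MiniReactor` is ours, not Twisted):

```python
from collections import defaultdict

class MiniReactor:
    """Toy model of Twisted's phased system event triggers (illustration only)."""
    def __init__(self):
        self.triggers = defaultdict(list)

    def add_system_event_trigger(self, phase, event, fn, *args):
        self.triggers[(phase, event)].append((fn, args))

    def fire(self, event):
        # Phases always run in this fixed order for a given event.
        for phase in ("before", "during", "after"):
            for fn, args in self.triggers[(phase, event)]:
                fn(*args)

order = []
r = MiniReactor()
# Registration order does not matter; phase order does.
r.add_system_event_trigger("after", "startup", order.append, "notify READY=1")
r.add_system_event_trigger("before", "startup", order.append, "start listeners")
r.fire("startup")
```

After `fire("startup")`, `order` shows the listeners coming up before systemd is told the service is ready, which is exactly what `Type=notify` requires.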


@@ -194,7 +194,9 @@ def start(config_options):
     )
     ss.setup()
-    reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
+    reactor.addSystemEventTrigger(
+        "before", "startup", _base.start, ss, config.worker_listeners
+    )
     _base.start_worker_reactor("synapse-client-reader", config)


@@ -193,7 +193,9 @@ def start(config_options):
     )
     ss.setup()
-    reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
+    reactor.addSystemEventTrigger(
+        "before", "startup", _base.start, ss, config.worker_listeners
+    )
     _base.start_worker_reactor("synapse-event-creator", config)


@@ -175,7 +175,9 @@ def start(config_options):
     )
     ss.setup()
-    reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
+    reactor.addSystemEventTrigger(
+        "before", "startup", _base.start, ss, config.worker_listeners
+    )
     _base.start_worker_reactor("synapse-federation-reader", config)


@@ -198,7 +198,9 @@ def start(config_options):
     )
     ss.setup()
-    reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
+    reactor.addSystemEventTrigger(
+        "before", "startup", _base.start, ss, config.worker_listeners
+    )
     _base.start_worker_reactor("synapse-federation-sender", config)


@@ -70,12 +70,12 @@ class PresenceStatusStubServlet(RestServlet):
         except HttpResponseException as e:
             raise e.to_synapse_error()
-        defer.returnValue((200, result))
+        return (200, result)
     @defer.inlineCallbacks
     def on_PUT(self, request, user_id):
         yield self.auth.get_user_by_req(request)
-        defer.returnValue((200, {}))
+        return (200, {})
 class KeyUploadServlet(RestServlet):
@@ -126,11 +126,11 @@ class KeyUploadServlet(RestServlet):
                 self.main_uri + request.uri.decode("ascii"), body, headers=headers
             )
-            defer.returnValue((200, result))
+            return (200, result)
         else:
             # Just interested in counts.
             result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
-            defer.returnValue((200, {"one_time_key_counts": result}))
+            return (200, {"one_time_key_counts": result})
 class FrontendProxySlavedStore(
@@ -247,7 +247,9 @@ def start(config_options):
     )
     ss.setup()
-    reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
+    reactor.addSystemEventTrigger(
+        "before", "startup", _base.start, ss, config.worker_listeners
+    )
     _base.start_worker_reactor("synapse-frontend-proxy", config)

synapse/app/homeserver.py (file mode changed from executable to regular)

@@ -406,7 +406,7 @@ def setup(config_options):
             if provision:
                 yield acme.provision_certificate()
-            defer.returnValue(provision)
+            return provision
         @defer.inlineCallbacks
         def reprovision_acme():
@@ -447,7 +447,7 @@ def setup(config_options):
             reactor.stop()
             sys.exit(1)
-    reactor.callWhenRunning(start)
+    reactor.addSystemEventTrigger("before", "startup", start)
     return hs


@@ -26,6 +26,7 @@ from synapse.app import _base
 from synapse.config._base import ConfigError
 from synapse.config.homeserver import HomeServerConfig
 from synapse.config.logger import setup_logging
+from synapse.http.server import JsonResource
 from synapse.http.site import SynapseSite
 from synapse.logging.context import LoggingContext
 from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
@@ -35,6 +36,7 @@ from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
 from synapse.replication.slave.storage.transactions import SlavedTransactionStore
 from synapse.replication.tcp.client import ReplicationClientHandler
+from synapse.rest.admin import register_servlets_for_media_repo
 from synapse.rest.media.v0.content_repository import ContentRepoResource
 from synapse.server import HomeServer
 from synapse.storage.engines import create_engine
@@ -71,6 +73,12 @@ class MediaRepositoryServer(HomeServer):
                 resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
             elif name == "media":
                 media_repo = self.get_media_repository_resource()
+
+                # We need to serve the admin servlets for media on the
+                # worker.
+                admin_resource = JsonResource(self, canonical_json=False)
+                register_servlets_for_media_repo(self, admin_resource)
+
                 resources.update(
                     {
                         MEDIA_PREFIX: media_repo,
@@ -78,6 +86,7 @@ class MediaRepositoryServer(HomeServer):
                         CONTENT_REPO_PREFIX: ContentRepoResource(
                             self, self.config.uploads_path
                         ),
+                        "/_synapse/admin": admin_resource,
                     }
                 )
@@ -161,7 +170,9 @@ def start(config_options):
     )

     ss.setup()
-    reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
+    reactor.addSystemEventTrigger(
+        "before", "startup", _base.start, ss, config.worker_listeners
+    )

     _base.start_worker_reactor("synapse-media-repository", config)


@@ -216,7 +216,7 @@ def start(config_options):
         _base.start(ps, config.worker_listeners)
         ps.get_pusherpool().start()

-    reactor.callWhenRunning(start)
+    reactor.addSystemEventTrigger("before", "startup", start)

     _base.start_worker_reactor("synapse-pusher", config)


@@ -451,7 +451,9 @@ def start(config_options):
     )

     ss.setup()
-    reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
+    reactor.addSystemEventTrigger(
+        "before", "startup", _base.start, ss, config.worker_listeners
+    )

     _base.start_worker_reactor("synapse-synchrotron", config)


@@ -224,7 +224,9 @@ def start(config_options):
     )

     ss.setup()
-    reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
+    reactor.addSystemEventTrigger(
+        "before", "startup", _base.start, ss, config.worker_listeners
+    )

     _base.start_worker_reactor("synapse-user-dir", config)


@@ -175,21 +175,21 @@ class ApplicationService(object):
     @defer.inlineCallbacks
     def _matches_user(self, event, store):
         if not event:
-            defer.returnValue(False)
+            return False

         if self.is_interested_in_user(event.sender):
-            defer.returnValue(True)
+            return True
         # also check m.room.member state key
         if event.type == EventTypes.Member and self.is_interested_in_user(
             event.state_key
         ):
-            defer.returnValue(True)
+            return True

         if not store:
-            defer.returnValue(False)
+            return False

         does_match = yield self._matches_user_in_member_list(event.room_id, store)
-        defer.returnValue(does_match)
+        return does_match

     @cachedInlineCallbacks(num_args=1, cache_context=True)
     def _matches_user_in_member_list(self, room_id, store, cache_context):
@@ -200,8 +200,8 @@ class ApplicationService(object):
         # check joined member events
         for user_id in member_list:
             if self.is_interested_in_user(user_id):
-                defer.returnValue(True)
-        defer.returnValue(False)
+                return True
+        return False

     def _matches_room_id(self, event):
         if hasattr(event, "room_id"):
@@ -211,13 +211,13 @@ class ApplicationService(object):
     @defer.inlineCallbacks
     def _matches_aliases(self, event, store):
         if not store or not event:
-            defer.returnValue(False)
+            return False

         alias_list = yield store.get_aliases_for_room(event.room_id)
         for alias in alias_list:
             if self.is_interested_in_alias(alias):
-                defer.returnValue(True)
-        defer.returnValue(False)
+                return True
+        return False

     @defer.inlineCallbacks
     def is_interested(self, event, store=None):
@@ -231,15 +231,15 @@ class ApplicationService(object):
         """
         # Do cheap checks first
         if self._matches_room_id(event):
-            defer.returnValue(True)
+            return True

         if (yield self._matches_aliases(event, store)):
-            defer.returnValue(True)
+            return True

         if (yield self._matches_user(event, store)):
-            defer.returnValue(True)
+            return True

-        defer.returnValue(False)
+        return False

     def is_interested_in_user(self, user_id):
         return (


@@ -97,40 +97,40 @@ class ApplicationServiceApi(SimpleHttpClient):
     @defer.inlineCallbacks
     def query_user(self, service, user_id):
         if service.url is None:
-            defer.returnValue(False)
+            return False
         uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
         response = None
         try:
             response = yield self.get_json(uri, {"access_token": service.hs_token})
             if response is not None:  # just an empty json object
-                defer.returnValue(True)
+                return True
         except CodeMessageException as e:
             if e.code == 404:
-                defer.returnValue(False)
+                return False
                 return
             logger.warning("query_user to %s received %s", uri, e.code)
         except Exception as ex:
             logger.warning("query_user to %s threw exception %s", uri, ex)
-        defer.returnValue(False)
+        return False

     @defer.inlineCallbacks
     def query_alias(self, service, alias):
         if service.url is None:
-            defer.returnValue(False)
+            return False
         uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
         response = None
         try:
             response = yield self.get_json(uri, {"access_token": service.hs_token})
             if response is not None:  # just an empty json object
-                defer.returnValue(True)
+                return True
         except CodeMessageException as e:
             logger.warning("query_alias to %s received %s", uri, e.code)
             if e.code == 404:
-                defer.returnValue(False)
+                return False
                 return
         except Exception as ex:
             logger.warning("query_alias to %s threw exception %s", uri, ex)
-        defer.returnValue(False)
+        return False

     @defer.inlineCallbacks
     def query_3pe(self, service, kind, protocol, fields):
@@ -141,7 +141,7 @@ class ApplicationServiceApi(SimpleHttpClient):
         else:
             raise ValueError("Unrecognised 'kind' argument %r to query_3pe()", kind)
         if service.url is None:
-            defer.returnValue([])
+            return []

         uri = "%s%s/thirdparty/%s/%s" % (
             service.url,
@@ -155,7 +155,7 @@ class ApplicationServiceApi(SimpleHttpClient):
                 logger.warning(
                     "query_3pe to %s returned an invalid response %r", uri, response
                 )
-                defer.returnValue([])
+                return []

             ret = []
             for r in response:
@@ -166,14 +166,14 @@ class ApplicationServiceApi(SimpleHttpClient):
                         "query_3pe to %s returned an invalid result %r", uri, r
                     )

-            defer.returnValue(ret)
+            return ret
         except Exception as ex:
             logger.warning("query_3pe to %s threw exception %s", uri, ex)
-            defer.returnValue([])
+            return []

     def get_3pe_protocol(self, service, protocol):
         if service.url is None:
-            defer.returnValue({})
+            return {}

         @defer.inlineCallbacks
         def _get():
@@ -189,7 +189,7 @@ class ApplicationServiceApi(SimpleHttpClient):
                     logger.warning(
                         "query_3pe_protocol to %s did not return a" " valid result", uri
                     )
-                    defer.returnValue(None)
+                    return None

                 for instance in info.get("instances", []):
                     network_id = instance.get("network_id", None)
@@ -198,10 +198,10 @@ class ApplicationServiceApi(SimpleHttpClient):
                             service.id, network_id
                         ).to_string()

-                defer.returnValue(info)
+                return info
             except Exception as ex:
                 logger.warning("query_3pe_protocol to %s threw exception %s", uri, ex)
-                defer.returnValue(None)
+                return None

         key = (service.id, protocol)
         return self.protocol_meta_cache.wrap(key, _get)
@@ -209,7 +209,7 @@ class ApplicationServiceApi(SimpleHttpClient):
     @defer.inlineCallbacks
     def push_bulk(self, service, events, txn_id=None):
         if service.url is None:
-            defer.returnValue(True)
+            return True

         events = self._serialize(events)
@@ -229,14 +229,14 @@ class ApplicationServiceApi(SimpleHttpClient):
             )
             sent_transactions_counter.labels(service.id).inc()
             sent_events_counter.labels(service.id).inc(len(events))
-            defer.returnValue(True)
+            return True
             return
         except CodeMessageException as e:
             logger.warning("push_bulk to %s received %s", uri, e.code)
         except Exception as ex:
             logger.warning("push_bulk to %s threw exception %s", uri, ex)
         failed_transactions_counter.labels(service.id).inc()
-        defer.returnValue(False)
+        return False

     def _serialize(self, events):
         time_now = self.clock.time_msec()


@@ -193,7 +193,7 @@ class _TransactionController(object):
     @defer.inlineCallbacks
     def _is_service_up(self, service):
         state = yield self.store.get_appservice_state(service)
-        defer.returnValue(state == ApplicationServiceState.UP or state is None)
+        return state == ApplicationServiceState.UP or state is None


 class _Recoverer(object):
@@ -208,7 +208,7 @@ class _Recoverer(object):
                 r.service.id,
             )
             r.recover()
-        defer.returnValue(recoverers)
+        return recoverers

     def __init__(self, clock, store, as_api, service, callback):
         self.clock = clock


@@ -116,8 +116,6 @@ class KeyConfig(Config):
                 seed = bytes(self.signing_key[0])
                 self.macaroon_secret_key = hashlib.sha256(seed).digest()

-        self.expire_access_token = config.get("expire_access_token", False)
-
         # a secret which is used to calculate HMACs for form values, to stop
         # falsification of values
         self.form_secret = config.get("form_secret", None)
@@ -144,10 +142,6 @@ class KeyConfig(Config):
        #
        %(macaroon_secret_key)s

-        # Used to enable access token expiration.
-        #
-        #expire_access_token: False
-
        # a secret which is used to calculate HMACs for form values, to stop
        # falsification of values. Must be specified for the User Consent
        # forms to work.


@@ -12,6 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

 import logging
 import logging.config
 import os
@@ -75,10 +76,8 @@ root:

 class LoggingConfig(Config):
     def read_config(self, config, **kwargs):
-        self.verbosity = config.get("verbose", 0)
-        self.no_redirect_stdio = config.get("no_redirect_stdio", False)
         self.log_config = self.abspath(config.get("log_config"))
-        self.log_file = self.abspath(config.get("log_file"))
+        self.no_redirect_stdio = config.get("no_redirect_stdio", False)

     def generate_config_section(self, config_dir_path, server_name, **kwargs):
         log_config = os.path.join(config_dir_path, server_name + ".log.config")
@@ -94,38 +93,12 @@ class LoggingConfig(Config):
         )

     def read_arguments(self, args):
-        if args.verbose is not None:
-            self.verbosity = args.verbose
         if args.no_redirect_stdio is not None:
             self.no_redirect_stdio = args.no_redirect_stdio
-        if args.log_config is not None:
-            self.log_config = args.log_config
-        if args.log_file is not None:
-            self.log_file = args.log_file

     @staticmethod
     def add_arguments(parser):
         logging_group = parser.add_argument_group("logging")
-        logging_group.add_argument(
-            "-v",
-            "--verbose",
-            dest="verbose",
-            action="count",
-            help="The verbosity level. Specify multiple times to increase "
-            "verbosity. (Ignored if --log-config is specified.)",
-        )
-        logging_group.add_argument(
-            "-f",
-            "--log-file",
-            dest="log_file",
-            help="File to log to. (Ignored if --log-config is specified.)",
-        )
-        logging_group.add_argument(
-            "--log-config",
-            dest="log_config",
-            default=None,
-            help="Python logging config file",
-        )
         logging_group.add_argument(
             "-n",
             "--no-redirect-stdio",
@@ -153,58 +126,29 @@ def setup_logging(config, use_worker_options=False):
         config (LoggingConfig | synapse.config.workers.WorkerConfig):
             configuration data

-        use_worker_options (bool): True to use 'worker_log_config' and
-            'worker_log_file' options instead of 'log_config' and 'log_file'.
+        use_worker_options (bool): True to use the 'worker_log_config' option
+            instead of 'log_config'.

         register_sighup (func | None): Function to call to register a
             sighup handler.
     """
     log_config = config.worker_log_config if use_worker_options else config.log_config
-    log_file = config.worker_log_file if use_worker_options else config.log_file

-    log_format = (
-        "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
-        " - %(message)s"
-    )
-
     if log_config is None:
-        # We don't have a logfile, so fall back to the 'verbosity' param from
-        # the config or cmdline. (Note that we generate a log config for new
-        # installs, so this will be an unusual case)
-        level = logging.INFO
-        level_for_storage = logging.INFO
-        if config.verbosity:
-            level = logging.DEBUG
-            if config.verbosity > 1:
-                level_for_storage = logging.DEBUG
+        log_format = (
+            "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
+            " - %(message)s"
+        )

         logger = logging.getLogger("")
-        logger.setLevel(level)
-
-        logging.getLogger("synapse.storage.SQL").setLevel(level_for_storage)
+        logger.setLevel(logging.INFO)
+        logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)

         formatter = logging.Formatter(log_format)

-        if log_file:
-            # TODO: Customisable file size / backup count
-            handler = logging.handlers.RotatingFileHandler(
-                log_file, maxBytes=(1000 * 1000 * 100), backupCount=3, encoding="utf8"
-            )
-
-            def sighup(signum, stack):
-                logger.info("Closing log file due to SIGHUP")
-                handler.doRollover()
-                logger.info("Opened new log file due to SIGHUP")
-
-        else:
-            handler = logging.StreamHandler()
-
-            def sighup(*args):
-                pass
-
+        handler = logging.StreamHandler()
         handler.setFormatter(formatter)
         handler.addFilter(LoggingContextFilter(request=""))
         logger.addHandler(handler)
     else:
@@ -218,7 +162,6 @@ def setup_logging(config, use_worker_options=False):
             logging.info("Reloaded log config from %s due to SIGHUP", log_config)
             load_log_config()

     appbase.register_sighup(sighup)

     # make sure that the first thing we log is a thing we can grep backwards


@@ -13,8 +13,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import os
 from distutils.util import strtobool

+import pkg_resources
+
 from synapse.config._base import Config, ConfigError
 from synapse.types import RoomAlias
 from synapse.util.stringutils import random_string_with_symbols
@@ -41,9 +44,37 @@ class AccountValidityConfig(Config):

             self.startup_job_max_delta = self.period * 10.0 / 100.0

-        if self.renew_by_email_enabled and "public_baseurl" not in synapse_config:
-            raise ConfigError("Can't send renewal emails without 'public_baseurl'")
+        if self.renew_by_email_enabled:
+            if "public_baseurl" not in synapse_config:
+                raise ConfigError("Can't send renewal emails without 'public_baseurl'")
+
+        template_dir = config.get("template_dir")
+
+        if not template_dir:
+            template_dir = pkg_resources.resource_filename("synapse", "res/templates")
+
+        if "account_renewed_html_path" in config:
+            file_path = os.path.join(template_dir, config["account_renewed_html_path"])
+
+            self.account_renewed_html_content = self.read_file(
+                file_path, "account_validity.account_renewed_html_path"
+            )
+        else:
+            self.account_renewed_html_content = (
+                "<html><body>Your account has been successfully renewed.</body><html>"
+            )
+
+        if "invalid_token_html_path" in config:
+            file_path = os.path.join(template_dir, config["invalid_token_html_path"])
+
+            self.invalid_token_html_content = self.read_file(
+                file_path, "account_validity.invalid_token_html_path"
+            )
+        else:
+            self.invalid_token_html_content = (
+                "<html><body>Invalid renewal token.</body><html>"
+            )


 class RegistrationConfig(Config):
     def read_config(self, config, **kwargs):
@@ -145,6 +176,16 @@ class RegistrationConfig(Config):
        #   period: 6w
        #   renew_at: 1w
        #   renew_email_subject: "Renew your %%(app)s account"
+        #   # Directory in which Synapse will try to find the HTML files to serve to the
+        #   # user when trying to renew an account. Optional, defaults to
+        #   # synapse/res/templates.
+        #   template_dir: "res/templates"
+        #   # HTML to be displayed to the user after they successfully renewed their
+        #   # account. Optional.
+        #   account_renewed_html_path: "account_renewed.html"
+        #   # HTML to be displayed when the user tries to renew an account with an invalid
+        #   # renewal token. Optional.
+        #   invalid_token_html_path: "invalid_token.html"

        # Time that a user's session remains valid for, after they log in.
        #


@@ -12,6 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

 import os
 from collections import namedtuple
@@ -87,6 +88,18 @@ def parse_thumbnail_requirements(thumbnail_sizes):
 class ContentRepositoryConfig(Config):
     def read_config(self, config, **kwargs):
+
+        # Only enable the media repo if either the media repo is enabled or the
+        # current worker app is the media repo.
+        if (
+            self.enable_media_repo is False
+            and config.get("worker_app") != "synapse.app.media_repository"
+        ):
+            self.can_load_media_repo = False
+            return
+        else:
+            self.can_load_media_repo = True
+
         self.max_upload_size = self.parse_size(config.get("max_upload_size", "10M"))
         self.max_image_pixels = self.parse_size(config.get("max_image_pixels", "32M"))
         self.max_spider_size = self.parse_size(config.get("max_spider_size", "10M"))
@@ -202,6 +215,13 @@ class ContentRepositoryConfig(Config):
         return (
             r"""
+        ## Media Store ##

+        # Enable the media store service in the Synapse master. Uncomment the
+        # following if you are using a separate media store worker.
+        #
+        #enable_media_repo: false

        # Directory where uploaded images and attachments are stored.
        #
        media_store_path: "%(media_store)s"


@@ -18,6 +18,7 @@
 import logging
 import os.path

+import attr
 from netaddr import IPSet

 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
@@ -38,6 +39,12 @@ DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]

 DEFAULT_ROOM_VERSION = "4"

+ROOM_COMPLEXITY_TOO_GREAT = (
+    "Your homeserver is unable to join rooms this large or complex. "
+    "Please speak to your server administrator, or upgrade your instance "
+    "to join this room."
+)
+

 class ServerConfig(Config):
     def read_config(self, config, **kwargs):
@@ -247,6 +254,23 @@ class ServerConfig(Config):

         self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))

+        @attr.s
+        class LimitRemoteRoomsConfig(object):
+            enabled = attr.ib(
+                validator=attr.validators.instance_of(bool), default=False
+            )
+            complexity = attr.ib(
+                validator=attr.validators.instance_of((int, float)), default=1.0
+            )
+            complexity_error = attr.ib(
+                validator=attr.validators.instance_of(str),
+                default=ROOM_COMPLEXITY_TOO_GREAT,
+            )
+
+        self.limit_remote_rooms = LimitRemoteRoomsConfig(
+            **config.get("limit_remote_rooms", {})
+        )
+
         bind_port = config.get("bind_port")
         if bind_port:
             if config.get("no_tls", False):
@@ -617,6 +641,23 @@ class ServerConfig(Config):
        # Used by phonehome stats to group together related servers.
        #server_context: context

+        # Resource-constrained Homeserver Settings
+        #
+        # If limit_remote_rooms.enabled is True, the room complexity will be
+        # checked before a user joins a new remote room. If it is above
+        # limit_remote_rooms.complexity, it will disallow joining or
+        # instantly leave.
+        #
+        # limit_remote_rooms.complexity_error can be set to customise the text
+        # displayed to the user when a room above the complexity threshold has
+        # its join cancelled.
+        #
+        # Uncomment the below lines to enable:
+        #limit_remote_rooms:
+        #  enabled: True
+        #  complexity: 1.0
+        #  complexity_error: "This room is too complex."

        # Whether to require a user to be in the room to add an alias to it.
        # Defaults to 'true'.
        #

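The `LimitRemoteRoomsConfig` class above uses attrs so that `**config.get("limit_remote_rooms", {})` maps YAML keys onto validated fields with defaults. A hypothetical stdlib sketch of the same pattern using `dataclasses` instead of attrs (so it runs without third-party dependencies; the error string is abbreviated):

```python
from dataclasses import dataclass

# Abbreviated stand-in for the default complexity error message.
ROOM_COMPLEXITY_TOO_GREAT = (
    "Your homeserver is unable to join rooms this large or complex."
)

@dataclass
class LimitRemoteRoomsConfig:
    enabled: bool = False
    complexity: float = 1.0
    complexity_error: str = ROOM_COMPLEXITY_TOO_GREAT

# Mirrors `LimitRemoteRoomsConfig(**config.get("limit_remote_rooms", {}))`:
# keys absent from the parsed YAML fall back to the defaults above.
cfg = LimitRemoteRoomsConfig(**{"enabled": True})
```

Unlike the attrs version, this sketch omits the runtime type validators (`attr.validators.instance_of`), which is the main reason the real code uses attrs here.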
View File

@@ -23,6 +23,12 @@ class TracerConfig(Config):
         opentracing_config = {}

         self.opentracer_enabled = opentracing_config.get("enabled", False)
+
+        self.jaeger_config = opentracing_config.get(
+            "jaeger_config",
+            {"sampler": {"type": "const", "param": 1}, "logging": False},
+        )
+
         if not self.opentracer_enabled:
             return
@@ -56,4 +62,20 @@ class TracerConfig(Config):
        #
        #homeserver_whitelist:
        #  - ".*"

+        # Jaeger can be configured to sample traces at different rates.
+        # All configuration options provided by Jaeger can be set here.
+        # Jaeger's configuration mostly related to trace sampling which
+        # is documented here:
+        # https://www.jaegertracing.io/docs/1.13/sampling/.
+        #
+        #jaeger_config:
+        #  sampler:
+        #    type: const
+        #    param: 1
+
+        #  Logging whether spans were started and reported
+        #
+        #  logging:
+        #    false
        """


@@ -31,7 +31,6 @@ class WorkerConfig(Config):
         self.worker_listeners = config.get("worker_listeners", [])
         self.worker_daemonize = config.get("worker_daemonize")
         self.worker_pid_file = config.get("worker_pid_file")
-        self.worker_log_file = config.get("worker_log_file")
         self.worker_log_config = config.get("worker_log_config")

         # The host used to connect to the main synapse
@@ -78,9 +77,5 @@ class WorkerConfig(Config):

         if args.daemonize is not None:
             self.worker_daemonize = args.daemonize
-        if args.log_config is not None:
-            self.worker_log_config = args.log_config
-        if args.log_file is not None:
-            self.worker_log_file = args.log_file
         if args.manhole is not None:
             self.worker_manhole = args.worker_manhole


@@ -31,6 +31,7 @@ from twisted.internet.ssl import (
     platformTrust,
 )
 from twisted.python.failure import Failure
+from twisted.web.iweb import IPolicyForHTTPS

 logger = logging.getLogger(__name__)
@@ -74,6 +75,7 @@ class ServerContextFactory(ContextFactory):
         return self._context


+@implementer(IPolicyForHTTPS)
 class ClientTLSOptionsFactory(object):
     """Factory for Twisted SSLClientConnectionCreators that are used to make connections
     to remote servers for federation.
@@ -146,6 +148,12 @@ class ClientTLSOptionsFactory(object):
                 f = Failure()
                 tls_protocol.failVerification(f)

+    def creatorForNetloc(self, hostname, port):
+        """Implements the IPolicyForHTTPS interace so that this can be passed
+        directly to agents.
+        """
+        return self.get_options(hostname)
+

 @implementer(IOpenSSLClientConnectionCreator)
 class SSLClientConnectionCreator(object):


@@ -238,27 +238,9 @@ class Keyring(object):
         """
         try:
-            # create a deferred for each server we're going to look up the keys
-            # for; we'll resolve them once we have completed our lookups.
-            # These will be passed into wait_for_previous_lookups to block
-            # any other lookups until we have finished.
-            # The deferreds are called with no logcontext.
-            server_to_deferred = {
-                rq.server_name: defer.Deferred() for rq in verify_requests
-            }
-
-            # We want to wait for any previous lookups to complete before
-            # proceeding.
-            yield self.wait_for_previous_lookups(server_to_deferred)
-
-            # Actually start fetching keys.
-            self._get_server_verify_keys(verify_requests)
-
-            # When we've finished fetching all the keys for a given server_name,
-            # resolve the deferred passed to `wait_for_previous_lookups` so that
-            # any lookups waiting will proceed.
-            #
-            # map from server name to a set of request ids
+            ctx = LoggingContext.current_context()
+
+            # map from server name to a set of outstanding request ids
             server_to_request_ids = {}
 
             for verify_request in verify_requests:
@@ -266,40 +248,61 @@ class Keyring(object):
                 request_id = id(verify_request)
                 server_to_request_ids.setdefault(server_name, set()).add(request_id)
 
-            def remove_deferreds(res, verify_request):
-                server_name = verify_request.server_name
-                request_id = id(verify_request)
-                server_to_request_ids[server_name].discard(request_id)
-                if not server_to_request_ids[server_name]:
-                    d = server_to_deferred.pop(server_name, None)
-                    if d:
-                        d.callback(None)
+            # Wait for any previous lookups to complete before proceeding.
+            yield self.wait_for_previous_lookups(server_to_request_ids.keys())
+
+            # take out a lock on each of the servers by sticking a Deferred in
+            # key_downloads
+            for server_name in server_to_request_ids.keys():
+                self.key_downloads[server_name] = defer.Deferred()
+                logger.debug("Got key lookup lock on %s", server_name)
+
+            # When we've finished fetching all the keys for a given server_name,
+            # drop the lock by resolving the deferred in key_downloads.
+            def drop_server_lock(server_name):
+                d = self.key_downloads.pop(server_name)
+                d.callback(None)
+
+            def lookup_done(res, verify_request):
+                server_name = verify_request.server_name
+                server_requests = server_to_request_ids[server_name]
+                server_requests.remove(id(verify_request))
+
+                # if there are no more requests for this server, we can drop the lock.
+                if not server_requests:
+                    with PreserveLoggingContext(ctx):
+                        logger.debug("Releasing key lookup lock on %s", server_name)
+
+                        # ... but not immediately, as that can cause stack explosions if
+                        # we get a long queue of lookups.
+                        self.clock.call_later(0, drop_server_lock, server_name)
+
                 return res
 
             for verify_request in verify_requests:
-                verify_request.key_ready.addBoth(remove_deferreds, verify_request)
+                verify_request.key_ready.addBoth(lookup_done, verify_request)
+
+            # Actually start fetching keys.
+            self._get_server_verify_keys(verify_requests)
         except Exception:
             logger.exception("Error starting key lookups")
 
     @defer.inlineCallbacks
-    def wait_for_previous_lookups(self, server_to_deferred):
+    def wait_for_previous_lookups(self, server_names):
         """Waits for any previous key lookups for the given servers to finish.
 
         Args:
-            server_to_deferred (dict[str, Deferred]): server_name to deferred which gets
-                resolved once we've finished looking up keys for that server.
-                The Deferreds should be regular twisted ones which call their
-                callbacks with no logcontext.
-
-        Returns: a Deferred which resolves once all key lookups for the given
-            servers have completed. Follows the synapse rules of logcontext
-            preservation.
+            server_names (Iterable[str]): list of servers which we want to look up
+
+        Returns:
+            Deferred[None]: resolves once all key lookups for the given servers have
+                completed. Follows the synapse rules of logcontext preservation.
         """
         loop_count = 1
         while True:
             wait_on = [
                 (server_name, self.key_downloads[server_name])
-                for server_name in server_to_deferred.keys()
+                for server_name in server_names
                 if server_name in self.key_downloads
             ]
             if not wait_on:
@@ -314,19 +317,6 @@ class Keyring(object):
 
             loop_count += 1
 
-        ctx = LoggingContext.current_context()
-
-        def rm(r, server_name_):
-            with PreserveLoggingContext(ctx):
-                logger.debug("Releasing key lookup lock on %s", server_name_)
-                self.key_downloads.pop(server_name_, None)
-            return r
-
-        for server_name, deferred in server_to_deferred.items():
-            logger.debug("Got key lookup lock on %s", server_name)
-            self.key_downloads[server_name] = deferred
-            deferred.addBoth(rm, server_name)
-
     def _get_server_verify_keys(self, verify_requests):
         """Tries to find at least one key for each verify request
@@ -472,7 +462,7 @@ class StoreKeyFetcher(KeyFetcher):
         keys = {}
         for (server_name, key_id), key in res.items():
             keys.setdefault(server_name, {})[key_id] = key
-        defer.returnValue(keys)
+        return keys
 
 
 class BaseV2KeyFetcher(object):
@@ -576,7 +566,7 @@ class BaseV2KeyFetcher(object):
             ).addErrback(unwrapFirstError)
         )
 
-        defer.returnValue(verify_keys)
+        return verify_keys
 
 
 class PerspectivesKeyFetcher(BaseV2KeyFetcher):
@@ -598,7 +588,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
                 result = yield self.get_server_verify_key_v2_indirect(
                     keys_to_fetch, key_server
                 )
-                defer.returnValue(result)
+                return result
             except KeyLookupError as e:
                 logger.warning(
                     "Key lookup failed from %r: %s", key_server.server_name, e
@@ -611,7 +601,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
                     str(e),
                 )
 
-            defer.returnValue({})
+            return {}
 
         results = yield make_deferred_yieldable(
             defer.gatherResults(
@@ -625,7 +615,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
             for server_name, keys in result.items():
                 union_of_keys.setdefault(server_name, {}).update(keys)
 
-        defer.returnValue(union_of_keys)
+        return union_of_keys
 
     @defer.inlineCallbacks
     def get_server_verify_key_v2_indirect(self, keys_to_fetch, key_server):
@@ -711,7 +701,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
             perspective_name, time_now_ms, added_keys
         )
 
-        defer.returnValue(keys)
+        return keys
 
     def _validate_perspectives_response(self, key_server, response):
         """Optionally check the signature on the result of a /key/query request
@@ -853,7 +843,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
         )
 
         keys.update(response_keys)
 
-        defer.returnValue(keys)
+        return keys
 
     @defer.inlineCallbacks
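The key-lookup refactor above replaces caller-supplied Deferreds with a lock table (`key_downloads`) keyed by server name, and releases each lock via `clock.call_later(0, ...)` rather than synchronously, so a long queue of waiting lookups no longer unwinds recursively and overflows the stack (the #5724 bugfix from the changelog). A toy, Twisted-free sketch of that trampoline pattern (all names here are illustrative, not Synapse's):

```python
from collections import deque

class Lock:
    """Toy analogue of Keyring.key_downloads: one lock per server, released
    asynchronously so a long queue of waiters cannot recurse."""

    def __init__(self, scheduler):
        self._scheduler = scheduler
        self._waiters = deque()
        self._held = False

    def acquire(self, callback):
        if self._held:
            self._waiters.append(callback)
        else:
            self._held = True
            callback()

    def release(self):
        # Like clock.call_later(0, drop_server_lock, ...): hand the lock to
        # the next waiter from the scheduler, not from the current frame,
        # so release() never grows the stack however long the queue is.
        def _drop():
            if self._waiters:
                self._waiters.popleft()()
            else:
                self._held = False

        self._scheduler.append(_drop)

scheduler = deque()
lock = Lock(scheduler)
order = []

# A queue deep enough to overflow the stack if release() invoked the next
# waiter directly instead of going through the scheduler.
for i in range(10000):
    def task(i=i):
        order.append(i)
        lock.release()
    lock.acquire(task)

while scheduler:  # drain the toy "reactor"
    scheduler.popleft()()
```

Every task runs exactly once and in FIFO order, with constant stack depth; a direct recursive release would hit Python's recursion limit long before 10000 waiters.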


@@ -144,8 +144,7 @@ class EventBuilder(object):
         if self._origin_server_ts is not None:
             event_dict["origin_server_ts"] = self._origin_server_ts
 
-        defer.returnValue(
-            create_local_event_from_event_dict(
+        return create_local_event_from_event_dict(
             clock=self._clock,
             hostname=self._hostname,
             signing_key=self._signing_key,
@@ -153,7 +152,6 @@ class EventBuilder(object):
             event_dict=event_dict,
             internal_metadata_dict=self.internal_metadata.get_dict(),
         )
-        )
 
 
 class EventBuilderFactory(object):
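Most of the remaining hunks in this changeset are mechanical rewrites of `defer.returnValue(x)` into plain `return x` inside `@defer.inlineCallbacks` generators, which became possible once Synapse required Python 3. The underlying mechanism is that a Python 3 generator's `return` value travels on `StopIteration.value`, which is exactly what `returnValue` emulated. A minimal stdlib-only demonstration (no Twisted involved):

```python
# On Python 3, a generator may `return` a value; the value is delivered to
# the caller on the StopIteration that ends the generator. inlineCallbacks
# drives generators this way, so `return x` and defer.returnValue(x) are
# equivalent ways to finish one with a result.
def gen():
    yield "intermediate"
    return 42  # would be a SyntaxError in a Python 2 generator

g = gen()
assert next(g) == "intermediate"

try:
    next(g)
    result = None  # not reached: the generator is exhausted
except StopIteration as e:
    result = e.value
```

This is why the migration is purely textual: the decorator already harvests `StopIteration.value`, whichever way it was produced.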


@@ -133,8 +133,7 @@ class EventContext(object):
         else:
             prev_state_id = None
 
-        defer.returnValue(
-            {
+        return {
             "prev_state_id": prev_state_id,
             "event_type": event.type,
             "event_state_key": event.state_key if event.is_state() else None,
@@ -145,7 +144,6 @@ class EventContext(object):
             "prev_state_events": self.prev_state_events,
             "app_service_id": self.app_service.id if self.app_service else None,
         }
-        )
 
     @staticmethod
     def deserialize(store, input):
@@ -202,7 +200,7 @@ class EventContext(object):
 
             yield make_deferred_yieldable(self._fetching_state_deferred)
 
-        defer.returnValue(self._current_state_ids)
+        return self._current_state_ids
 
     @defer.inlineCallbacks
     def get_prev_state_ids(self, store):
@@ -222,7 +220,7 @@ class EventContext(object):
 
             yield make_deferred_yieldable(self._fetching_state_deferred)
 
-        defer.returnValue(self._prev_state_ids)
+        return self._prev_state_ids
 
     def get_cached_current_state_ids(self):
         """Gets the current state IDs if we have them already cached.


@@ -51,7 +51,7 @@ class ThirdPartyEventRules(object):
             defer.Deferred[bool]: True if the event should be allowed, False if not.
         """
         if self.third_party_rules is None:
-            defer.returnValue(True)
+            return True
 
         prev_state_ids = yield context.get_prev_state_ids(self.store)
 
@@ -61,7 +61,7 @@ class ThirdPartyEventRules(object):
             state_events[key] = yield self.store.get_event(event_id, allow_none=True)
 
         ret = yield self.third_party_rules.check_event_allowed(event, state_events)
-        defer.returnValue(ret)
+        return ret
 
     @defer.inlineCallbacks
     def on_create_room(self, requester, config, is_requester_admin):
@@ -98,7 +98,7 @@ class ThirdPartyEventRules(object):
         """
         if self.third_party_rules is None:
-            defer.returnValue(True)
+            return True
 
         state_ids = yield self.store.get_filtered_current_state_ids(room_id)
         room_state_events = yield self.store.get_events(state_ids.values())
@@ -110,4 +110,4 @@ class ThirdPartyEventRules(object):
         ret = yield self.third_party_rules.check_threepid_can_be_invited(
             medium, address, state_events
         )
-        defer.returnValue(ret)
+        return ret


@@ -360,7 +360,7 @@ class EventClientSerializer(object):
         """
         # To handle the case of presence events and the like
         if not isinstance(event, EventBase):
-            defer.returnValue(event)
+            return event
 
         event_id = event.event_id
         serialized_event = serialize_event(event, time_now, **kwargs)
@@ -406,7 +406,7 @@ class EventClientSerializer(object):
                     "sender": edit.sender,
                 }
 
-        defer.returnValue(serialized_event)
+        return serialized_event
 
     def serialize_events(self, events, time_now, **kwargs):
         """Serializes multiple events.


@@ -95,10 +95,10 @@ class EventValidator(object):
         elif event.type == EventTypes.Topic:
             self._ensure_strings(event.content, ["topic"])
+            self._ensure_state_event(event)
 
         elif event.type == EventTypes.Name:
             self._ensure_strings(event.content, ["name"])
+            self._ensure_state_event(event)
 
         elif event.type == EventTypes.Member:
             if "membership" not in event.content:
                 raise SynapseError(400, "Content has not membership key")
@@ -106,9 +106,25 @@ class EventValidator(object):
             if event.content["membership"] not in Membership.LIST:
                 raise SynapseError(400, "Invalid membership key")
 
+            self._ensure_state_event(event)
+
+        elif event.type == EventTypes.Tombstone:
+            if "replacement_room" not in event.content:
+                raise SynapseError(400, "Content has no replacement_room key")
+
+            if event.content["replacement_room"] == event.room_id:
+                raise SynapseError(
+                    400, "Tombstone cannot reference the room it was sent in"
+                )
+
+            self._ensure_state_event(event)
+
     def _ensure_strings(self, d, keys):
         for s in keys:
             if s not in d:
                 raise SynapseError(400, "'%s' not in content" % (s,))
             if not isinstance(d[s], string_types):
                 raise SynapseError(400, "'%s' not a string type" % (s,))
+
+    def _ensure_state_event(self, event):
+        if not event.is_state():
+            raise SynapseError(400, "'%s' must be state events" % (event.type,))
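The new `m.room.tombstone` checks boil down to three rules: the content must name a `replacement_room`, the replacement must not be the room itself, and the event must be a state event. A standalone sketch of the same rules (the helper and its return-a-tuple convention are illustrative, not Synapse's `EventValidator` API, which raises `SynapseError` instead):

```python
def validate_tombstone(content, room_id, state_key):
    """Return (code, message) describing the rejection, or None if valid.

    Mirrors the three checks added to EventValidator above: a
    replacement_room must be present, must differ from the room the
    tombstone is sent in, and the event must be a state event
    (state_key is not None).
    """
    if "replacement_room" not in content:
        return (400, "Content has no replacement_room key")
    if content["replacement_room"] == room_id:
        return (400, "Tombstone cannot reference the room it was sent in")
    if state_key is None:
        return (400, "'m.room.tombstone' must be state events")
    return None

# A self-referential tombstone is rejected; a well-formed one passes.
assert validate_tombstone({"replacement_room": "!a:hs"}, "!a:hs", "")[0] == 400
assert validate_tombstone({"replacement_room": "!b:hs"}, "!a:hs", "") is None
```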


@@ -106,7 +106,7 @@ class FederationBase(object):
                     "Failed to find copy of %s with valid signature", pdu.event_id
                 )
 
-            defer.returnValue(res)
+            return res
 
         handle = preserve_fn(handle_check_result)
         deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)]
@@ -116,9 +116,9 @@ class FederationBase(object):
         ).addErrback(unwrapFirstError)
 
         if include_none:
-            defer.returnValue(valid_pdus)
+            return valid_pdus
         else:
-            defer.returnValue([p for p in valid_pdus if p])
+            return [p for p in valid_pdus if p]
 
     def _check_sigs_and_hash(self, room_version, pdu):
         return make_deferred_yieldable(


@@ -213,7 +213,7 @@ class FederationClient(FederationBase):
             ).addErrback(unwrapFirstError)
         )
 
-        defer.returnValue(pdus)
+        return pdus
 
     @defer.inlineCallbacks
     @log_function
@@ -245,7 +245,7 @@ class FederationClient(FederationBase):
         ev = self._get_pdu_cache.get(event_id)
         if ev:
-            defer.returnValue(ev)
+            return ev
 
         pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
 
@@ -307,7 +307,7 @@ class FederationClient(FederationBase):
         if signed_pdu:
             self._get_pdu_cache[event_id] = signed_pdu
 
-        defer.returnValue(signed_pdu)
+        return signed_pdu
 
     @defer.inlineCallbacks
     @log_function
@@ -355,7 +355,7 @@ class FederationClient(FederationBase):
 
             auth_chain.sort(key=lambda e: e.depth)
 
-            defer.returnValue((pdus, auth_chain))
+            return (pdus, auth_chain)
         except HttpResponseException as e:
             if e.code == 400 or e.code == 404:
                 logger.info("Failed to use get_room_state_ids API, falling back")
@@ -404,7 +404,7 @@ class FederationClient(FederationBase):
 
         signed_auth.sort(key=lambda e: e.depth)
 
-        defer.returnValue((signed_pdus, signed_auth))
+        return (signed_pdus, signed_auth)
 
     @defer.inlineCallbacks
     def get_events_from_store_or_dest(self, destination, room_id, event_ids):
@@ -429,7 +429,7 @@ class FederationClient(FederationBase):
                 missing_events.discard(k)
 
         if not missing_events:
-            defer.returnValue((signed_events, failed_to_fetch))
+            return (signed_events, failed_to_fetch)
 
         logger.debug(
             "Fetching unknown state/auth events %s for room %s",
@@ -465,7 +465,7 @@ class FederationClient(FederationBase):
         # We removed all events we successfully fetched from `batch`
         failed_to_fetch.update(batch)
 
-        defer.returnValue((signed_events, failed_to_fetch))
+        return (signed_events, failed_to_fetch)
 
     @defer.inlineCallbacks
     @log_function
@@ -485,7 +485,7 @@ class FederationClient(FederationBase):
 
         signed_auth.sort(key=lambda e: e.depth)
 
-        defer.returnValue(signed_auth)
+        return signed_auth
 
     @defer.inlineCallbacks
     def _try_destination_list(self, description, destinations, callback):
@@ -511,9 +511,8 @@ class FederationClient(FederationBase):
             The [Deferred] result of callback, if it succeeds
 
         Raises:
-            SynapseError if the chosen remote server returns a 300/400 code.
-
-            RuntimeError if no servers were reachable.
+            SynapseError if the chosen remote server returns a 300/400 code, or
+                no servers were reachable.
         """
         for destination in destinations:
             if destination == self.server_name:
@@ -521,7 +520,7 @@ class FederationClient(FederationBase):
 
             try:
                 res = yield callback(destination)
-                defer.returnValue(res)
+                return res
             except InvalidResponseError as e:
                 logger.warn("Failed to %s via %s: %s", description, destination, e)
             except HttpResponseException as e:
@@ -538,7 +537,7 @@ class FederationClient(FederationBase):
             except Exception:
                 logger.warn("Failed to %s via %s", description, destination, exc_info=1)
 
-        raise RuntimeError("Failed to %s via any server" % (description,))
+        raise SynapseError(502, "Failed to %s via any server" % (description,))
 
     def make_membership_event(
         self, destinations, room_id, user_id, membership, content, params
@@ -615,7 +614,7 @@ class FederationClient(FederationBase):
                 event_dict=pdu_dict,
             )
 
-            defer.returnValue((destination, ev, event_format))
+            return (destination, ev, event_format)
 
         return self._try_destination_list(
             "make_" + membership, destinations, send_request
@@ -728,13 +727,11 @@ class FederationClient(FederationBase):
 
             check_authchain_validity(signed_auth)
 
-            defer.returnValue(
-                {
+            return {
                 "state": signed_state,
                 "auth_chain": signed_auth,
                 "origin": destination,
             }
-            )
 
         return self._try_destination_list("send_join", destinations, send_request)
 
@@ -758,7 +755,7 @@ class FederationClient(FederationBase):
 
         # FIXME: We should handle signature failures more gracefully.
 
-        defer.returnValue(pdu)
+        return pdu
 
     @defer.inlineCallbacks
     def _do_send_invite(self, destination, pdu, room_version):
@@ -786,7 +783,7 @@ class FederationClient(FederationBase):
                     "invite_room_state": pdu.unsigned.get("invite_room_state", []),
                 },
             )
-            defer.returnValue(content)
+            return content
         except HttpResponseException as e:
             if e.code in [400, 404]:
                 err = e.to_synapse_error()
@@ -821,7 +818,7 @@ class FederationClient(FederationBase):
             event_id=pdu.event_id,
             content=pdu.get_pdu_json(time_now),
         )
-        defer.returnValue(content)
+        return content
 
     def send_leave(self, destinations, pdu):
         """Sends a leave event to one of a list of homeservers.
@@ -856,7 +853,7 @@ class FederationClient(FederationBase):
             )
 
             logger.debug("Got content: %s", content)
-            defer.returnValue(None)
+            return None
 
         return self._try_destination_list("send_leave", destinations, send_request)
 
@@ -917,7 +914,7 @@ class FederationClient(FederationBase):
             "missing": content.get("missing", []),
         }
 
-        defer.returnValue(ret)
+        return ret
 
     @defer.inlineCallbacks
     def get_missing_events(
@@ -974,7 +971,7 @@ class FederationClient(FederationBase):
             # get_missing_events
             signed_events = []
 
-        defer.returnValue(signed_events)
+        return signed_events
 
     @defer.inlineCallbacks
     def forward_third_party_invite(self, destinations, room_id, event_dict):
@@ -986,7 +983,7 @@ class FederationClient(FederationBase):
                 yield self.transport_layer.exchange_third_party_invite(
                     destination=destination, room_id=room_id, event_dict=event_dict
                 )
-                defer.returnValue(None)
+                return None
             except CodeMessageException:
                 raise
             except Exception as e:
@@ -995,3 +992,39 @@ class FederationClient(FederationBase):
                 )
 
         raise RuntimeError("Failed to send to any server.")
+
+    @defer.inlineCallbacks
+    def get_room_complexity(self, destination, room_id):
+        """
+        Fetch the complexity of a remote room from another server.
+
+        Args:
+            destination (str): The remote server
+            room_id (str): The room ID to ask about.
+
+        Returns:
+            Deferred[dict] or Deferred[None]: Dict contains the complexity
+            metric versions, while None means we could not fetch the complexity.
+        """
+        try:
+            complexity = yield self.transport_layer.get_room_complexity(
+                destination=destination, room_id=room_id
+            )
+            defer.returnValue(complexity)
+        except CodeMessageException as e:
+            # We didn't manage to get it -- probably a 404. We are okay if other
+            # servers don't give it to us.
+            logger.debug(
+                "Failed to fetch room complexity via %s for %s, got a %d",
+                destination,
+                room_id,
+                e.code,
+            )
+        except Exception:
+            logger.exception(
+                "Failed to fetch room complexity via %s for %s", destination, room_id
+            )
+
+        # If we don't manage to find it, return None. It's not an error if a
+        # server doesn't give it to us.
+        defer.returnValue(None)
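`get_room_complexity` backs the new complexity-based join gating from the changelog (#5783): `None` means the remote server would not tell us, which callers should treat as non-blocking. A hypothetical caller-side sketch, assuming the metric arrives as a number under a `"v1"` key in the returned dict (the helper name and threshold handling here are illustrative, not Synapse's API):

```python
def too_complex(complexity, limit):
    """Decide whether to refuse a remote join based on room complexity.

    complexity: the dict returned by get_room_complexity, or None when the
    remote server did not provide one (assumed shape: {"v1": <number>}).
    limit: the homeserver's configured complexity ceiling.
    """
    if complexity is None:
        # Not an error if a server doesn't give it to us: don't block.
        return False
    return complexity.get("v1", 0) > limit

# An unknown complexity never blocks; a reported one is compared to the limit.
assert too_complex(None, 100) is False
assert too_complex({"v1": 250.5}, 100) is True
```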


@ -99,7 +99,7 @@ class FederationServer(FederationBase):
res = self._transaction_from_pdus(pdus).get_dict() res = self._transaction_from_pdus(pdus).get_dict()
defer.returnValue((200, res)) return (200, res)
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
@ -126,7 +126,7 @@ class FederationServer(FederationBase):
origin, transaction, request_time origin, transaction, request_time
) )
defer.returnValue(result) return result
@defer.inlineCallbacks @defer.inlineCallbacks
def _handle_incoming_transaction(self, origin, transaction, request_time): def _handle_incoming_transaction(self, origin, transaction, request_time):
@ -147,8 +147,7 @@ class FederationServer(FederationBase):
"[%s] We've already responded to this request", "[%s] We've already responded to this request",
transaction.transaction_id, transaction.transaction_id,
) )
defer.returnValue(response) return response
return
logger.debug("[%s] Transaction is new", transaction.transaction_id) logger.debug("[%s] Transaction is new", transaction.transaction_id)
@ -163,7 +162,7 @@ class FederationServer(FederationBase):
yield self.transaction_actions.set_response( yield self.transaction_actions.set_response(
origin, transaction, 400, response origin, transaction, 400, response
) )
defer.returnValue((400, response)) return (400, response)
received_pdus_counter.inc(len(transaction.pdus)) received_pdus_counter.inc(len(transaction.pdus))
@ -265,7 +264,7 @@ class FederationServer(FederationBase):
logger.debug("Returning: %s", str(response)) logger.debug("Returning: %s", str(response))
yield self.transaction_actions.set_response(origin, transaction, 200, response) yield self.transaction_actions.set_response(origin, transaction, 200, response)
defer.returnValue((200, response)) return (200, response)
@defer.inlineCallbacks @defer.inlineCallbacks
def received_edu(self, origin, edu_type, content): def received_edu(self, origin, edu_type, content):
@ -298,7 +297,7 @@ class FederationServer(FederationBase):
event_id, event_id,
) )
defer.returnValue((200, resp)) return (200, resp)
@defer.inlineCallbacks @defer.inlineCallbacks
def on_state_ids_request(self, origin, room_id, event_id): def on_state_ids_request(self, origin, room_id, event_id):
@ -315,9 +314,7 @@ class FederationServer(FederationBase):
state_ids = yield self.handler.get_state_ids_for_pdu(room_id, event_id) state_ids = yield self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = yield self.store.get_auth_chain_ids(state_ids) auth_chain_ids = yield self.store.get_auth_chain_ids(state_ids)
defer.returnValue( return (200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids})
(200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids})
)
@defer.inlineCallbacks @defer.inlineCallbacks
def _on_context_state_request_compute(self, room_id, event_id): def _on_context_state_request_compute(self, room_id, event_id):
@ -336,12 +333,10 @@ class FederationServer(FederationBase):
) )
) )
defer.returnValue( return {
{
"pdus": [pdu.get_pdu_json() for pdu in pdus], "pdus": [pdu.get_pdu_json() for pdu in pdus],
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain], "auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
} }
)
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
@ -349,15 +344,15 @@ class FederationServer(FederationBase):
pdu = yield self.handler.get_persisted_pdu(origin, event_id) pdu = yield self.handler.get_persisted_pdu(origin, event_id)
if pdu: if pdu:
defer.returnValue((200, self._transaction_from_pdus([pdu]).get_dict())) return (200, self._transaction_from_pdus([pdu]).get_dict())
else: else:
defer.returnValue((404, "")) return (404, "")
@defer.inlineCallbacks @defer.inlineCallbacks
def on_query_request(self, query_type, args): def on_query_request(self, query_type, args):
received_queries_counter.labels(query_type).inc() received_queries_counter.labels(query_type).inc()
resp = yield self.registry.on_query(query_type, args) resp = yield self.registry.on_query(query_type, args)
defer.returnValue((200, resp)) return (200, resp)
@defer.inlineCallbacks @defer.inlineCallbacks
def on_make_join_request(self, origin, room_id, user_id, supported_versions): def on_make_join_request(self, origin, room_id, user_id, supported_versions):
@ -371,9 +366,7 @@ class FederationServer(FederationBase):
pdu = yield self.handler.on_make_join_request(origin, room_id, user_id) pdu = yield self.handler.on_make_join_request(origin, room_id, user_id)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
defer.returnValue( return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
{"event": pdu.get_pdu_json(time_now), "room_version": room_version}
)
@defer.inlineCallbacks @defer.inlineCallbacks
def on_invite_request(self, origin, content, room_version): def on_invite_request(self, origin, content, room_version):
@ -391,7 +384,7 @@ class FederationServer(FederationBase):
yield self.check_server_matches_acl(origin_host, pdu.room_id) yield self.check_server_matches_acl(origin_host, pdu.room_id)
ret_pdu = yield self.handler.on_invite_request(origin, pdu) ret_pdu = yield self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
defer.returnValue({"event": ret_pdu.get_pdu_json(time_now)}) return {"event": ret_pdu.get_pdu_json(time_now)}
@defer.inlineCallbacks @defer.inlineCallbacks
def on_send_join_request(self, origin, content, room_id): def on_send_join_request(self, origin, content, room_id):
@ -407,8 +400,7 @@ class FederationServer(FederationBase):
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures) logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
res_pdus = yield self.handler.on_send_join_request(origin, pdu) res_pdus = yield self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
defer.returnValue( return (
(
200, 200,
{ {
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]], "state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
@ -417,7 +409,6 @@ class FederationServer(FederationBase):
], ],
}, },
) )
)
@defer.inlineCallbacks @defer.inlineCallbacks
def on_make_leave_request(self, origin, room_id, user_id): def on_make_leave_request(self, origin, room_id, user_id):
@ -428,9 +419,7 @@ class FederationServer(FederationBase):
room_version = yield self.store.get_room_version(room_id) room_version = yield self.store.get_room_version(room_id)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
defer.returnValue( return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
         {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
     )

     @defer.inlineCallbacks
     def on_send_leave_request(self, origin, content, room_id):
@@ -445,7 +434,7 @@ class FederationServer(FederationBase):
         logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
         yield self.handler.on_send_leave_request(origin, pdu)
-        defer.returnValue((200, {}))
+        return (200, {})

     @defer.inlineCallbacks
     def on_event_auth(self, origin, room_id, event_id):
@@ -456,7 +445,7 @@ class FederationServer(FederationBase):
         time_now = self._clock.time_msec()
         auth_pdus = yield self.handler.on_event_auth(event_id)
         res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
-        defer.returnValue((200, res))
+        return (200, res)

     @defer.inlineCallbacks
     def on_query_auth_request(self, origin, content, room_id, event_id):
@@ -509,7 +498,7 @@ class FederationServer(FederationBase):
             "missing": ret.get("missing", []),
         }
-        defer.returnValue((200, send_content))
+        return (200, send_content)

     @log_function
     def on_query_client_keys(self, origin, content):
@@ -548,7 +537,7 @@ class FederationServer(FederationBase):
             ),
         )
-        defer.returnValue({"one_time_keys": json_result})
+        return {"one_time_keys": json_result}

     @defer.inlineCallbacks
     @log_function
@@ -580,9 +569,7 @@ class FederationServer(FederationBase):
         time_now = self._clock.time_msec()
-        defer.returnValue(
-            {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
-        )
+        return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}

     @log_function
     def on_openid_userinfo(self, token):
@@ -676,14 +663,14 @@ class FederationServer(FederationBase):
         ret = yield self.handler.exchange_third_party_invite(
             sender_user_id, target_user_id, room_id, signed
         )
-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
         ret = yield self.handler.on_exchange_third_party_invite_request(
             origin, room_id, event_dict
         )
-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def check_server_matches_acl(self, server_name, room_id):
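The mechanical change running through these hunks — `defer.returnValue(x)` becoming `return x` — works because on Python 3 a generator may return a value: `return x` raises `StopIteration(x)`, which is exactly what drivers like `@defer.inlineCallbacks` catch; `defer.returnValue` existed only because Python 2 forbade `return` with a value inside a generator. A minimal stdlib sketch (`fake_coroutine` and `drive` are illustrative stand-ins, not Synapse code):

```python
def fake_coroutine():
    yield "waiting"        # stands in for `yield some_deferred`
    return (200, {})       # Python 3 equivalent of defer.returnValue((200, {}))

def drive(gen):
    """Run a generator to completion and capture its return value,
    the way an inlineCallbacks-style driver does."""
    try:
        while True:
            next(gen)
    except StopIteration as e:
        return e.value     # the value carried by `return` in the generator

print(drive(fake_coroutine()))  # (200, {})
```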


@@ -374,7 +374,7 @@ class PerDestinationQueue(object):
         assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
-        defer.returnValue((edus, now_stream_id))
+        return (edus, now_stream_id)

     @defer.inlineCallbacks
     def _get_to_device_message_edus(self, limit):
@@ -393,4 +393,4 @@ class PerDestinationQueue(object):
             for content in contents
         ]
-        defer.returnValue((edus, stream_id))
+        return (edus, stream_id)


@@ -133,4 +133,4 @@ class TransactionManager(object):
             )
             success = False
-        defer.returnValue(success)
+        return success


@@ -21,7 +21,11 @@ from six.moves import urllib
 from twisted.internet import defer

 from synapse.api.constants import Membership
-from synapse.api.urls import FEDERATION_V1_PREFIX, FEDERATION_V2_PREFIX
+from synapse.api.urls import (
+    FEDERATION_UNSTABLE_PREFIX,
+    FEDERATION_V1_PREFIX,
+    FEDERATION_V2_PREFIX,
+)
 from synapse.logging.utils import log_function

 logger = logging.getLogger(__name__)
@@ -183,7 +187,7 @@ class TransportLayerClient(object):
             try_trailing_slash_on_400=True,
         )
-        defer.returnValue(response)
+        return response

     @defer.inlineCallbacks
     @log_function
@@ -201,7 +205,7 @@ class TransportLayerClient(object):
             ignore_backoff=ignore_backoff,
         )
-        defer.returnValue(content)
+        return content

     @defer.inlineCallbacks
     @log_function
@@ -259,7 +263,7 @@ class TransportLayerClient(object):
             ignore_backoff=ignore_backoff,
         )
-        defer.returnValue(content)
+        return content

     @defer.inlineCallbacks
     @log_function
@@ -270,7 +274,7 @@ class TransportLayerClient(object):
             destination=destination, path=path, data=content
         )
-        defer.returnValue(response)
+        return response

     @defer.inlineCallbacks
     @log_function
@@ -288,7 +292,7 @@ class TransportLayerClient(object):
             ignore_backoff=True,
         )
-        defer.returnValue(response)
+        return response

     @defer.inlineCallbacks
     @log_function
@@ -299,7 +303,7 @@ class TransportLayerClient(object):
             destination=destination, path=path, data=content, ignore_backoff=True
         )
-        defer.returnValue(response)
+        return response

     @defer.inlineCallbacks
     @log_function
@@ -310,7 +314,7 @@ class TransportLayerClient(object):
             destination=destination, path=path, data=content, ignore_backoff=True
         )
-        defer.returnValue(response)
+        return response

     @defer.inlineCallbacks
     @log_function
@@ -339,7 +343,7 @@ class TransportLayerClient(object):
             destination=remote_server, path=path, args=args, ignore_backoff=True
         )
-        defer.returnValue(response)
+        return response

     @defer.inlineCallbacks
     @log_function
@@ -350,7 +354,7 @@ class TransportLayerClient(object):
             destination=destination, path=path, data=event_dict
         )
-        defer.returnValue(response)
+        return response

     @defer.inlineCallbacks
     @log_function
@@ -359,7 +363,7 @@ class TransportLayerClient(object):
         content = yield self.client.get_json(destination=destination, path=path)
-        defer.returnValue(content)
+        return content

     @defer.inlineCallbacks
     @log_function
@@ -370,7 +374,7 @@ class TransportLayerClient(object):
             destination=destination, path=path, data=content
         )
-        defer.returnValue(content)
+        return content

     @defer.inlineCallbacks
     @log_function
@@ -402,7 +406,7 @@ class TransportLayerClient(object):
         content = yield self.client.post_json(
             destination=destination, path=path, data=query_content, timeout=timeout
         )
-        defer.returnValue(content)
+        return content

     @defer.inlineCallbacks
     @log_function
@@ -426,7 +430,7 @@ class TransportLayerClient(object):
         content = yield self.client.get_json(
             destination=destination, path=path, timeout=timeout
         )
-        defer.returnValue(content)
+        return content

     @defer.inlineCallbacks
     @log_function
@@ -460,7 +464,7 @@ class TransportLayerClient(object):
         content = yield self.client.post_json(
             destination=destination, path=path, data=query_content, timeout=timeout
         )
-        defer.returnValue(content)
+        return content

     @defer.inlineCallbacks
     @log_function
@@ -488,7 +492,7 @@ class TransportLayerClient(object):
             timeout=timeout,
         )
-        defer.returnValue(content)
+        return content

     @log_function
     def get_group_profile(self, destination, group_id, requester_user_id):
@@ -935,6 +939,23 @@ class TransportLayerClient(object):
             destination=destination, path=path, data=content, ignore_backoff=True
         )

+    def get_room_complexity(self, destination, room_id):
+        """
+        Args:
+            destination (str): The remote server
+            room_id (str): The room ID to ask about.
+        """
+        path = _create_path(FEDERATION_UNSTABLE_PREFIX, "/rooms/%s/complexity", room_id)
+
+        return self.client.get_json(destination=destination, path=path)
+

+def _create_path(federation_prefix, path, *args):
+    """
+    Ensures that all args are url encoded.
+    """
+    return federation_prefix + path % tuple(urllib.parse.quote(arg, "") for arg in args)
+

 def _create_v1_path(path, *args):
     """Creates a path against V1 federation API from the path template and
@@ -951,9 +972,7 @@ def _create_v1_path(path, *args):
     Returns:
         str
     """
-    return FEDERATION_V1_PREFIX + path % tuple(
-        urllib.parse.quote(arg, "") for arg in args
-    )
+    return _create_path(FEDERATION_V1_PREFIX, path, *args)

 def _create_v2_path(path, *args):
@@ -971,6 +990,4 @@ def _create_v2_path(path, *args):
     Returns:
         str
     """
-    return FEDERATION_V2_PREFIX + path % tuple(
-        urllib.parse.quote(arg, "") for arg in args
-    )
+    return _create_path(FEDERATION_V2_PREFIX, path, *args)


@@ -19,6 +19,8 @@ import functools
 import logging
 import re

+from twisted.internet.defer import maybeDeferred
+
 import synapse
 import synapse.logging.opentracing as opentracing
 from synapse.api.errors import Codes, FederationDeniedError, SynapseError
@@ -745,8 +747,12 @@ class PublicRoomList(BaseFederationServlet):
         else:
             network_tuple = ThirdPartyInstanceID(None, None)

-        data = await self.handler.get_local_public_room_list(
-            limit, since_token, network_tuple=network_tuple, from_federation=True
+        data = await maybeDeferred(
+            self.handler.get_local_public_room_list,
+            limit,
+            since_token,
+            network_tuple=network_tuple,
+            from_federation=True,
         )
         return 200, data
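This hunk is the `publicRooms` fix from the changelog (#5851): when the room list was cached, the handler returned a plain value rather than a Deferred, and `await`-ing a non-awaitable raised a 500. Wrapping the call in `maybeDeferred` normalises both paths into something awaitable. An asyncio analogue of the same idea (function names here are illustrative, not Synapse's):

```python
import asyncio
import inspect

async def call_maybe_async(f, *args, **kwargs):
    """asyncio analogue of Twisted's maybeDeferred: call f, then await the
    result only if it is awaitable, so callers can `await` both plain
    (e.g. cached) and asynchronous code paths."""
    result = f(*args, **kwargs)
    if inspect.isawaitable(result):
        result = await result
    return result

def cached_room_list():           # synchronous path: cache hit
    return {"chunk": [], "total_room_count_estimate": 0}

async def fresh_room_list():      # asynchronous path: cache miss
    return {"chunk": [{"room_id": "!a:example.org"}]}

print(asyncio.run(call_maybe_async(cached_room_list)))
print(asyncio.run(call_maybe_async(fresh_room_list)))
```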


@@ -157,7 +157,7 @@ class GroupAttestionRenewer(object):
             yield self.store.update_remote_attestion(group_id, user_id, attestation)

-        defer.returnValue({})
+        return {}

     def _start_renew_attestations(self):
         return run_as_background_process("renew_attestations", self._renew_attestations)


@@ -85,7 +85,7 @@ class GroupsServerHandler(object):
             if not is_admin:
                 raise SynapseError(403, "User is not admin in group")

-        defer.returnValue(group)
+        return group

     @defer.inlineCallbacks
     def get_group_summary(self, group_id, requester_user_id):
@@ -151,8 +151,7 @@ class GroupsServerHandler(object):
             group_id, requester_user_id
         )

-        defer.returnValue(
-            {
+        return {
             "profile": profile,
             "users_section": {
                 "users": users,
@@ -166,7 +165,6 @@ class GroupsServerHandler(object):
             },
             "user": membership_info,
         }
-        )

     @defer.inlineCallbacks
     def update_group_summary_room(
@@ -192,7 +190,7 @@ class GroupsServerHandler(object):
             is_public=is_public,
         )

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def delete_group_summary_room(
@@ -208,7 +206,7 @@ class GroupsServerHandler(object):
             group_id=group_id, room_id=room_id, category_id=category_id
         )

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def set_group_join_policy(self, group_id, requester_user_id, content):
@@ -228,7 +226,7 @@ class GroupsServerHandler(object):
         yield self.store.set_group_join_policy(group_id, join_policy=join_policy)

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def get_group_categories(self, group_id, requester_user_id):
@@ -237,7 +235,7 @@ class GroupsServerHandler(object):
         yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)

         categories = yield self.store.get_group_categories(group_id=group_id)
-        defer.returnValue({"categories": categories})
+        return {"categories": categories}

     @defer.inlineCallbacks
     def get_group_category(self, group_id, requester_user_id, category_id):
@@ -249,7 +247,7 @@ class GroupsServerHandler(object):
             group_id=group_id, category_id=category_id
         )

-        defer.returnValue(res)
+        return res

     @defer.inlineCallbacks
     def update_group_category(self, group_id, requester_user_id, category_id, content):
@@ -269,7 +267,7 @@ class GroupsServerHandler(object):
             profile=profile,
         )

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def delete_group_category(self, group_id, requester_user_id, category_id):
@@ -283,7 +281,7 @@ class GroupsServerHandler(object):
             group_id=group_id, category_id=category_id
         )

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def get_group_roles(self, group_id, requester_user_id):
@@ -292,7 +290,7 @@ class GroupsServerHandler(object):
         yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)

         roles = yield self.store.get_group_roles(group_id=group_id)
-        defer.returnValue({"roles": roles})
+        return {"roles": roles}

     @defer.inlineCallbacks
     def get_group_role(self, group_id, requester_user_id, role_id):
@@ -301,7 +299,7 @@ class GroupsServerHandler(object):
         yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)

         res = yield self.store.get_group_role(group_id=group_id, role_id=role_id)
-        defer.returnValue(res)
+        return res

     @defer.inlineCallbacks
     def update_group_role(self, group_id, requester_user_id, role_id, content):
@@ -319,7 +317,7 @@ class GroupsServerHandler(object):
             group_id=group_id, role_id=role_id, is_public=is_public, profile=profile
         )

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def delete_group_role(self, group_id, requester_user_id, role_id):
@@ -331,7 +329,7 @@ class GroupsServerHandler(object):
         yield self.store.remove_group_role(group_id=group_id, role_id=role_id)

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def update_group_summary_user(
@@ -355,7 +353,7 @@ class GroupsServerHandler(object):
             is_public=is_public,
         )

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id):
@@ -369,7 +367,7 @@ class GroupsServerHandler(object):
             group_id=group_id, user_id=user_id, role_id=role_id
         )

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def get_group_profile(self, group_id, requester_user_id):
@@ -391,7 +389,7 @@ class GroupsServerHandler(object):
             group_description = {key: group[key] for key in cols}
             group_description["is_openly_joinable"] = group["join_policy"] == "open"

-            defer.returnValue(group_description)
+            return group_description
         else:
             raise SynapseError(404, "Unknown group")
@@ -461,9 +459,7 @@ class GroupsServerHandler(object):
         # TODO: If admin add lists of users whose attestations have timed out

-        defer.returnValue(
-            {"chunk": chunk, "total_user_count_estimate": len(user_results)}
-        )
+        return {"chunk": chunk, "total_user_count_estimate": len(user_results)}

     @defer.inlineCallbacks
     def get_invited_users_in_group(self, group_id, requester_user_id):
@@ -494,9 +490,7 @@ class GroupsServerHandler(object):
                 logger.warn("Error getting profile for %s: %s", user_id, e)
             user_profiles.append(user_profile)

-        defer.returnValue(
-            {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}
-        )
+        return {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}

     @defer.inlineCallbacks
     def get_rooms_in_group(self, group_id, requester_user_id):
@@ -533,9 +527,7 @@ class GroupsServerHandler(object):
         chunk.sort(key=lambda e: -e["num_joined_members"])

-        defer.returnValue(
-            {"chunk": chunk, "total_room_count_estimate": len(room_results)}
-        )
+        return {"chunk": chunk, "total_room_count_estimate": len(room_results)}

     @defer.inlineCallbacks
     def add_room_to_group(self, group_id, requester_user_id, room_id, content):
@@ -551,7 +543,7 @@ class GroupsServerHandler(object):
         yield self.store.add_room_to_group(group_id, room_id, is_public=is_public)

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def update_room_in_group(
@@ -574,7 +566,7 @@ class GroupsServerHandler(object):
         else:
             raise SynapseError(400, "Uknown config option")

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def remove_room_from_group(self, group_id, requester_user_id, room_id):
@@ -586,7 +578,7 @@ class GroupsServerHandler(object):
         yield self.store.remove_room_from_group(group_id, room_id)

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def invite_to_group(self, group_id, user_id, requester_user_id, content):
@@ -644,9 +636,9 @@ class GroupsServerHandler(object):
             )
         elif res["state"] == "invite":
             yield self.store.add_group_invite(group_id, user_id)
-            defer.returnValue({"state": "invite"})
+            return {"state": "invite"}
         elif res["state"] == "reject":
-            defer.returnValue({"state": "reject"})
+            return {"state": "reject"}
         else:
             raise SynapseError(502, "Unknown state returned by HS")
@@ -679,7 +671,7 @@ class GroupsServerHandler(object):
             remote_attestation=remote_attestation,
         )

-        defer.returnValue(local_attestation)
+        return local_attestation

     @defer.inlineCallbacks
     def accept_invite(self, group_id, requester_user_id, content):
@@ -699,7 +691,7 @@ class GroupsServerHandler(object):
         local_attestation = yield self._add_user(group_id, requester_user_id, content)

-        defer.returnValue({"state": "join", "attestation": local_attestation})
+        return {"state": "join", "attestation": local_attestation}

     @defer.inlineCallbacks
     def join_group(self, group_id, requester_user_id, content):
@@ -716,7 +708,7 @@ class GroupsServerHandler(object):
         local_attestation = yield self._add_user(group_id, requester_user_id, content)

-        defer.returnValue({"state": "join", "attestation": local_attestation})
+        return {"state": "join", "attestation": local_attestation}

     @defer.inlineCallbacks
     def knock(self, group_id, requester_user_id, content):
@@ -769,7 +761,7 @@ class GroupsServerHandler(object):
         if not self.hs.is_mine_id(user_id):
             yield self.store.maybe_delete_remote_profile_cache(user_id)

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def create_group(self, group_id, requester_user_id, content):
@@ -845,7 +837,7 @@ class GroupsServerHandler(object):
             avatar_url=user_profile.get("avatar_url"),
         )

-        defer.returnValue({"group_id": group_id})
+        return {"group_id": group_id}

     @defer.inlineCallbacks
     def delete_group(self, group_id, requester_user_id):


@@ -51,8 +51,8 @@ class AccountDataEventSource(object):
                 {"type": account_data_type, "content": content, "room_id": room_id}
             )

-        defer.returnValue((results, current_stream_id))
+        return (results, current_stream_id)

     @defer.inlineCallbacks
     def get_pagination_rows(self, user, config, key):
-        defer.returnValue(([], config.to_id))
+        return ([], config.to_id)


@@ -193,7 +193,7 @@ class AccountValidityHandler(object):
             if threepid["medium"] == "email":
                 addresses.append(threepid["address"])

-        defer.returnValue(addresses)
+        return addresses

     @defer.inlineCallbacks
     def _get_renewal_token(self, user_id):
@@ -214,7 +214,7 @@ class AccountValidityHandler(object):
             try:
                 renewal_token = stringutils.random_string(32)
                 yield self.store.set_renewal_token_for_user(user_id, renewal_token)
-                defer.returnValue(renewal_token)
+                return renewal_token
             except StoreError:
                 attempts += 1
         raise StoreError(500, "Couldn't generate a unique string as refresh string.")
@@ -226,11 +226,19 @@ class AccountValidityHandler(object):
         Args:
             renewal_token (str): Token sent with the renewal request.
+
+        Returns:
+            bool: Whether the provided token is valid.
         """
-        user_id = yield self.store.get_user_from_renewal_token(renewal_token)
+        try:
+            user_id = yield self.store.get_user_from_renewal_token(renewal_token)
+        except StoreError:
+            defer.returnValue(False)
+
         logger.debug("Renewing an account for user %s", user_id)
         yield self.renew_account_for_user(user_id)

+        defer.returnValue(True)
+
     @defer.inlineCallbacks
     def renew_account_for_user(self, user_id, expiration_ts=None, email_sent=False):
         """Renews the account attached to a given user by pushing back the
@@ -254,4 +262,4 @@ class AccountValidityHandler(object):
             user_id=user_id, expiration_ts=expiration_ts, email_sent=email_sent
         )

-        defer.returnValue(expiration_ts)
+        return expiration_ts
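The `_get_renewal_token` hunk above shows a retry-on-collision loop: draw a random 32-character token, let the store's uniqueness constraint reject duplicates, and give up with a 500 after a few attempts. A stdlib sketch of the same pattern (`DictStore`, `save_token`, and `KeyError` are stand-ins for Synapse's store and `StoreError`, not real Synapse APIs):

```python
import secrets

class DictStore:
    """Stand-in store; save_token raises KeyError on a duplicate token."""
    def __init__(self):
        self.tokens = set()

    def save_token(self, token):
        if token in self.tokens:
            raise KeyError(token)
        self.tokens.add(token)

def get_renewal_token(store, attempts=5):
    # Draw a fresh random token each round; retry only when the store
    # reports a uniqueness conflict, and fail loudly after `attempts` tries.
    for _ in range(attempts):
        token = secrets.token_urlsafe(24)   # ~32 URL-safe characters
        try:
            store.save_token(token)
            return token
        except KeyError:
            continue
    raise RuntimeError("Couldn't generate a unique renewal token")

store = DictStore()
token = get_renewal_token(store)
```

Because the token space is vast, collisions are astronomically unlikely; the bounded loop just guarantees termination if the store misbehaves.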


@@ -100,4 +100,4 @@ class AcmeHandler(object):
             logger.exception("Failed saving!")
             raise

-        defer.returnValue(True)
+        return True


@@ -49,7 +49,7 @@ class AdminHandler(BaseHandler):
             "devices": {"": {"sessions": [{"connections": connections}]}},
         }

-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def get_users(self):
@@ -61,7 +61,7 @@ class AdminHandler(BaseHandler):
         """
         ret = yield self.store.get_users()

-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def get_users_paginate(self, order, start, limit):
@@ -78,7 +78,7 @@ class AdminHandler(BaseHandler):
         """
         ret = yield self.store.get_users_paginate(order, start, limit)

-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def search_users(self, term):
@@ -92,7 +92,7 @@ class AdminHandler(BaseHandler):
         """
         ret = yield self.store.search_users(term)

-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def export_user_data(self, user_id, writer):
@@ -225,7 +225,7 @@ class AdminHandler(BaseHandler):
             state = yield self.store.get_state_for_event(event_id)
             writer.write_state(room_id, event_id, state)

-        defer.returnValue(writer.finished())
+        return writer.finished()

 class ExfiltrationWriter(object):


@@ -167,8 +167,8 @@ class ApplicationServicesHandler(object):
         for user_service in user_query_services:
             is_known_user = yield self.appservice_api.query_user(user_service, user_id)
             if is_known_user:
-                defer.returnValue(True)
-        defer.returnValue(False)
+                return True
+        return False

     @defer.inlineCallbacks
     def query_room_alias_exists(self, room_alias):
@@ -192,7 +192,7 @@ class ApplicationServicesHandler(object):
             if is_known_alias:
                 # the alias exists now so don't query more ASes.
                 result = yield self.store.get_association_from_room_alias(room_alias)
-                defer.returnValue(result)
+                return result

     @defer.inlineCallbacks
     def query_3pe(self, kind, protocol, fields):
@@ -215,7 +215,7 @@ class ApplicationServicesHandler(object):
             if success:
                 ret.extend(result)

-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def get_3pe_protocols(self, only_protocol=None):
@@ -254,7 +254,7 @@ class ApplicationServicesHandler(object):
         for p in protocols.keys():
             protocols[p] = _merge_instances(protocols[p])

-        defer.returnValue(protocols)
+        return protocols

     @defer.inlineCallbacks
     def _get_services_for_event(self, event):
@@ -276,7 +276,7 @@ class ApplicationServicesHandler(object):
             if (yield s.is_interested(event, self.store)):
                 interested_list.append(s)

-        defer.returnValue(interested_list)
+        return interested_list

     def _get_services_for_user(self, user_id):
         services = self.store.get_app_services()
@@ -293,23 +293,23 @@ class ApplicationServicesHandler(object):
         if not self.is_mine_id(user_id):
             # we don't know if they are unknown or not since it isn't one of our
             # users. We can't poke ASes.
-            defer.returnValue(False)
+            return False
             return

         user_info = yield self.store.get_user_by_id(user_id)
         if user_info:
-            defer.returnValue(False)
+            return False
             return

         # user not found; could be the AS though, so check.
         services = self.store.get_app_services()
         service_list = [s for s in services if s.sender == user_id]
-        defer.returnValue(len(service_list) == 0)
+        return len(service_list) == 0

     @defer.inlineCallbacks
     def _check_user_exists(self, user_id):
         unknown_user = yield self._is_unknown_user(user_id)
         if unknown_user:
             exists = yield self.query_user_exists(user_id)
-            defer.returnValue(exists)
+            return exists
-        defer.returnValue(True)
+        return True


@@ -155,7 +155,7 @@ class AuthHandler(BaseHandler):
             if user_id != requester.user.to_string():
                 raise AuthError(403, "Invalid auth")

-        defer.returnValue(params)
+        return params

     @defer.inlineCallbacks
     def check_auth(self, flows, clientdict, clientip, password_servlet=False):
@@ -280,7 +280,7 @@ class AuthHandler(BaseHandler):
                 creds,
                 list(clientdict),
             )
-            defer.returnValue((creds, clientdict, session["id"]))
+            return (creds, clientdict, session["id"])

         ret = self._auth_dict_for_flows(flows, session)
         ret["completed"] = list(creds)
@@ -307,8 +307,8 @@ class AuthHandler(BaseHandler):
         if result:
             creds[stagetype] = result
             self._save_session(sess)
-            defer.returnValue(True)
-        defer.returnValue(False)
+            return True
+        return False

     def get_session_id(self, clientdict):
         """
@@ -379,7 +379,7 @@ class AuthHandler(BaseHandler):
             res = yield checker(
                 authdict, clientip=clientip, password_servlet=password_servlet
             )
-            defer.returnValue(res)
+            return res

         # build a v1-login-style dict out of the authdict and fall back to the
         # v1 code
@@ -389,7 +389,7 @@ class AuthHandler(BaseHandler):
             raise SynapseError(400, "", Codes.MISSING_PARAM)

         (canonical_id, callback) = yield self.validate_login(user_id, authdict)
-        defer.returnValue(canonical_id)
+        return canonical_id

     @defer.inlineCallbacks
     def _check_recaptcha(self, authdict, clientip, **kwargs):
@@ -433,7 +433,7 @@ class AuthHandler(BaseHandler):
                 resp_body.get("hostname"),
             )
             if resp_body["success"]:
-                defer.returnValue(True)
+                return True
         raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)

     def _check_email_identity(self, authdict, **kwargs):
@@ -502,7 +502,7 @@ class AuthHandler(BaseHandler):
             threepid["threepid_creds"] = authdict["threepid_creds"]

-        defer.returnValue(threepid)
+        return threepid

     def _get_params_recaptcha(self):
         return {"public_key": self.hs.config.recaptcha_public_key}
@@ -606,7 +606,7 @@ class AuthHandler(BaseHandler):
                 yield self.store.delete_access_token(access_token)
                 raise StoreError(400, "Login raced against device deletion")

-        defer.returnValue(access_token)
+        return access_token

     @defer.inlineCallbacks
     def check_user_exists(self, user_id):
@@ -629,8 +629,8 @@ class AuthHandler(BaseHandler):
         self.ratelimit_login_per_account(user_id)
         res = yield self._find_user_id_and_pwd_hash(user_id)
         if res is not None:
-            defer.returnValue(res[0])
-        defer.returnValue(None)
+            return res[0]
+        return None

     @defer.inlineCallbacks
     def _find_user_id_and_pwd_hash(self, user_id):
@@ -661,7 +661,7 @@ class AuthHandler(BaseHandler):
                 user_id,
                 user_infos.keys(),
             )
-        defer.returnValue(result)
+        return result

     def get_supported_login_types(self):
         """Get a the login types supported for the /login API
@ -722,7 +722,7 @@ class AuthHandler(BaseHandler):
known_login_type = True known_login_type = True
is_valid = yield provider.check_password(qualified_user_id, password) is_valid = yield provider.check_password(qualified_user_id, password)
if is_valid: if is_valid:
defer.returnValue((qualified_user_id, None)) return (qualified_user_id, None)
if not hasattr(provider, "get_supported_login_types") or not hasattr( if not hasattr(provider, "get_supported_login_types") or not hasattr(
provider, "check_auth" provider, "check_auth"
@ -756,7 +756,7 @@ class AuthHandler(BaseHandler):
if result: if result:
if isinstance(result, str): if isinstance(result, str):
result = (result, None) result = (result, None)
defer.returnValue(result) return result
if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled: if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
known_login_type = True known_login_type = True
@ -766,7 +766,7 @@ class AuthHandler(BaseHandler):
) )
if canonical_user_id: if canonical_user_id:
defer.returnValue((canonical_user_id, None)) return (canonical_user_id, None)
if not known_login_type: if not known_login_type:
raise SynapseError(400, "Unknown login type %s" % login_type) raise SynapseError(400, "Unknown login type %s" % login_type)
@ -814,9 +814,9 @@ class AuthHandler(BaseHandler):
if isinstance(result, str): if isinstance(result, str):
# If it's a str, set callback function to None # If it's a str, set callback function to None
result = (result, None) result = (result, None)
defer.returnValue(result) return result
defer.returnValue((None, None)) return (None, None)
@defer.inlineCallbacks @defer.inlineCallbacks
def _check_local_password(self, user_id, password): def _check_local_password(self, user_id, password):
@ -838,7 +838,7 @@ class AuthHandler(BaseHandler):
""" """
lookupres = yield self._find_user_id_and_pwd_hash(user_id) lookupres = yield self._find_user_id_and_pwd_hash(user_id)
if not lookupres: if not lookupres:
defer.returnValue(None) return None
(user_id, password_hash) = lookupres (user_id, password_hash) = lookupres
# If the password hash is None, the account has likely been deactivated # If the password hash is None, the account has likely been deactivated
@ -850,8 +850,8 @@ class AuthHandler(BaseHandler):
result = yield self.validate_hash(password, password_hash) result = yield self.validate_hash(password, password_hash)
if not result: if not result:
logger.warn("Failed password login for user %s", user_id) logger.warn("Failed password login for user %s", user_id)
defer.returnValue(None) return None
defer.returnValue(user_id) return user_id
@defer.inlineCallbacks @defer.inlineCallbacks
def validate_short_term_login_token_and_get_user_id(self, login_token): def validate_short_term_login_token_and_get_user_id(self, login_token):
@ -860,12 +860,12 @@ class AuthHandler(BaseHandler):
try: try:
macaroon = pymacaroons.Macaroon.deserialize(login_token) macaroon = pymacaroons.Macaroon.deserialize(login_token)
user_id = auth_api.get_user_id_from_macaroon(macaroon) user_id = auth_api.get_user_id_from_macaroon(macaroon)
auth_api.validate_macaroon(macaroon, "login", True, user_id) auth_api.validate_macaroon(macaroon, "login", user_id)
except Exception: except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN) raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
self.ratelimit_login_per_account(user_id) self.ratelimit_login_per_account(user_id)
yield self.auth.check_auth_blocking(user_id) yield self.auth.check_auth_blocking(user_id)
defer.returnValue(user_id) return user_id
@defer.inlineCallbacks @defer.inlineCallbacks
def delete_access_token(self, access_token): def delete_access_token(self, access_token):
@ -976,7 +976,7 @@ class AuthHandler(BaseHandler):
) )
yield self.store.user_delete_threepid(user_id, medium, address) yield self.store.user_delete_threepid(user_id, medium, address)
defer.returnValue(result) return result
def _save_session(self, session): def _save_session(self, session):
# TODO: Persistent storage # TODO: Persistent storage
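Nearly every hunk above swaps `defer.returnValue(x)` for a plain `return x` inside `@defer.inlineCallbacks` generators. This relies on Python 3 semantics: a generator's `return x` stores `x` on the `StopIteration` it raises, which is where Twisted's trampoline reads the result from. A minimal, Twisted-free sketch of that mechanism (the `drive` helper is a stand-in, not Twisted's actual implementation):

```python
def coroutine_like():
    # Under @defer.inlineCallbacks each `yield` would wait on a Deferred;
    # here we just yield a placeholder value.
    yield "step"
    return "done"  # Python 3 equivalent of defer.returnValue("done")


def drive(gen):
    # Stand-in for the inlineCallbacks trampoline: exhaust the generator
    # and pick the final value off StopIteration.value.
    try:
        while True:
            next(gen)
    except StopIteration as e:
        return e.value


assert drive(coroutine_like()) == "done"
```

Note that `defer.returnValue` still works on Python 3, so both spellings can coexist while a migration like this one is in flight.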

synapse/handlers/deactivate_account.py
--------------------------------------

@@ -125,7 +125,7 @@ class DeactivateAccountHandler(BaseHandler):
         # Mark the user as deactivated.
         yield self.store.set_user_deactivated_status(user_id, True)
 
-        defer.returnValue(identity_server_supports_unbinding)
+        return identity_server_supports_unbinding
 
     def _start_user_parting(self):
         """

synapse/handlers/device.py
--------------------------

@@ -64,7 +64,7 @@ class DeviceWorkerHandler(BaseHandler):
         for device in devices:
             _update_device_from_client_ips(device, ips)
 
-        defer.returnValue(devices)
+        return devices
 
     @defer.inlineCallbacks
     def get_device(self, user_id, device_id):
@@ -85,7 +85,7 @@ class DeviceWorkerHandler(BaseHandler):
             raise errors.NotFoundError
         ips = yield self.store.get_last_client_ip_by_device(user_id, device_id)
         _update_device_from_client_ips(device, ips)
-        defer.returnValue(device)
+        return device
 
     @measure_func("device.get_user_ids_changed")
     @defer.inlineCallbacks
@@ -200,9 +200,7 @@ class DeviceWorkerHandler(BaseHandler):
             possibly_joined = []
             possibly_left = []
 
-        defer.returnValue(
-            {"changed": list(possibly_joined), "left": list(possibly_left)}
-        )
+        return {"changed": list(possibly_joined), "left": list(possibly_left)}
 
 
 class DeviceHandler(DeviceWorkerHandler):
@@ -211,12 +209,12 @@ class DeviceHandler(DeviceWorkerHandler):
         self.federation_sender = hs.get_federation_sender()
 
-        self._edu_updater = DeviceListEduUpdater(hs, self)
+        self.device_list_updater = DeviceListUpdater(hs, self)
 
         federation_registry = hs.get_federation_registry()
 
         federation_registry.register_edu_handler(
-            "m.device_list_update", self._edu_updater.incoming_device_list_update
+            "m.device_list_update", self.device_list_updater.incoming_device_list_update
         )
         federation_registry.register_query_handler(
             "user_devices", self.on_federation_query_user_devices
@@ -250,7 +248,7 @@ class DeviceHandler(DeviceWorkerHandler):
             )
             if new_device:
                 yield self.notify_device_update(user_id, [device_id])
-            defer.returnValue(device_id)
+            return device_id
 
         # if the device id is not specified, we'll autogen one, but loop a few
         # times in case of a clash.
@@ -264,7 +262,7 @@ class DeviceHandler(DeviceWorkerHandler):
             )
             if new_device:
                 yield self.notify_device_update(user_id, [device_id])
-                defer.returnValue(device_id)
+                return device_id
             attempts += 1
 
         raise errors.StoreError(500, "Couldn't generate a device ID.")
@@ -411,9 +409,7 @@ class DeviceHandler(DeviceWorkerHandler):
     @defer.inlineCallbacks
     def on_federation_query_user_devices(self, user_id):
         stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
-        defer.returnValue(
-            {"user_id": user_id, "stream_id": stream_id, "devices": devices}
-        )
+        return {"user_id": user_id, "stream_id": stream_id, "devices": devices}
 
     @defer.inlineCallbacks
     def user_left_room(self, user, room_id):
@@ -430,7 +426,7 @@ def _update_device_from_client_ips(device, client_ips):
     device.update({"last_seen_ts": ip.get("last_seen"), "last_seen_ip": ip.get("ip")})
 
 
-class DeviceListEduUpdater(object):
+class DeviceListUpdater(object):
     "Handles incoming device list updates from federation and updates the DB"
 
     def __init__(self, hs, device_handler):
@@ -523,75 +519,7 @@ class DeviceListEduUpdater(object):
             logger.debug("Need to re-sync devices for %r? %r", user_id, resync)
 
             if resync:
-                # Fetch all devices for the user.
-                origin = get_domain_from_id(user_id)
-                try:
-                    result = yield self.federation.query_user_devices(origin, user_id)
-                except (
-                    NotRetryingDestination,
-                    RequestSendFailed,
-                    HttpResponseException,
-                ):
-                    # TODO: Remember that we are now out of sync and try again
-                    # later
-                    logger.warn("Failed to handle device list update for %s", user_id)
-                    # We abort on exceptions rather than accepting the update
-                    # as otherwise synapse will 'forget' that its device list
-                    # is out of date. If we bail then we will retry the resync
-                    # next time we get a device list update for this user_id.
-                    # This makes it more likely that the device lists will
-                    # eventually become consistent.
-                    return
-                except FederationDeniedError as e:
-                    logger.info(e)
-                    return
-                except Exception:
-                    # TODO: Remember that we are now out of sync and try again
-                    # later
-                    logger.exception(
-                        "Failed to handle device list update for %s", user_id
-                    )
-                    return
-
-                stream_id = result["stream_id"]
-                devices = result["devices"]
-
-                # If the remote server has more than ~1000 devices for this user
-                # we assume that something is going horribly wrong (e.g. a bot
-                # that logs in and creates a new device every time it tries to
-                # send a message). Maintaining lots of devices per user in the
-                # cache can cause serious performance issues as if this request
-                # takes more than 60s to complete, internal replication from the
-                # inbound federation worker to the synapse master may time out
-                # causing the inbound federation to fail and causing the remote
-                # server to retry, causing a DoS. So in this scenario we give
-                # up on storing the total list of devices and only handle the
-                # delta instead.
-                if len(devices) > 1000:
-                    logger.warn(
-                        "Ignoring device list snapshot for %s as it has >1K devs (%d)",
-                        user_id,
-                        len(devices),
-                    )
-                    devices = []
-
-                for device in devices:
-                    logger.debug(
-                        "Handling resync update %r/%r, ID: %r",
-                        user_id,
-                        device["device_id"],
-                        stream_id,
-                    )
-
-                yield self.store.update_remote_device_list_cache(
-                    user_id, devices, stream_id
-                )
-                device_ids = [device["device_id"] for device in devices]
-                yield self.device_handler.notify_device_update(user_id, device_ids)
-
-                # We clobber the seen updates since we've re-synced from a given
-                # point.
-                self._seen_updates[user_id] = set([stream_id])
+                yield self.user_device_resync(user_id)
             else:
                 # Simply update the single device, since we know that is the only
                 # change (because of the single prev_id matching the current cache)
@@ -623,7 +551,7 @@ class DeviceListEduUpdater(object):
         for _, stream_id, prev_ids, _ in updates:
             if not prev_ids:
                 # We always do a resync if there are no previous IDs
-                defer.returnValue(True)
+                return True
 
             for prev_id in prev_ids:
                 if prev_id == extremity:
@@ -633,8 +561,82 @@ class DeviceListEduUpdater(object):
                 elif prev_id in stream_id_in_updates:
                     continue
                 else:
-                    defer.returnValue(True)
+                    return True
 
             stream_id_in_updates.add(stream_id)
 
-        defer.returnValue(False)
+        return False
+
+    @defer.inlineCallbacks
+    def user_device_resync(self, user_id):
+        """Fetches all devices for a user and updates the device cache with them.
+
+        Args:
+            user_id (str): The user's id whose device_list will be updated.
+        Returns:
+            Deferred[dict]: a dict with device info as under the "devices" in the result of this
+            request:
+            https://matrix.org/docs/spec/server_server/r0.1.2#get-matrix-federation-v1-user-devices-userid
+        """
+        # Fetch all devices for the user.
+        origin = get_domain_from_id(user_id)
+        try:
+            result = yield self.federation.query_user_devices(origin, user_id)
+        except (NotRetryingDestination, RequestSendFailed, HttpResponseException):
+            # TODO: Remember that we are now out of sync and try again
+            # later
+            logger.warn("Failed to handle device list update for %s", user_id)
+            # We abort on exceptions rather than accepting the update
+            # as otherwise synapse will 'forget' that its device list
+            # is out of date. If we bail then we will retry the resync
+            # next time we get a device list update for this user_id.
+            # This makes it more likely that the device lists will
+            # eventually become consistent.
+            return
+        except FederationDeniedError as e:
+            logger.info(e)
+            return
+        except Exception:
+            # TODO: Remember that we are now out of sync and try again
+            # later
+            logger.exception("Failed to handle device list update for %s", user_id)
+            return
+
+        stream_id = result["stream_id"]
+        devices = result["devices"]
+
+        # If the remote server has more than ~1000 devices for this user
+        # we assume that something is going horribly wrong (e.g. a bot
+        # that logs in and creates a new device every time it tries to
+        # send a message). Maintaining lots of devices per user in the
+        # cache can cause serious performance issues as if this request
+        # takes more than 60s to complete, internal replication from the
+        # inbound federation worker to the synapse master may time out
+        # causing the inbound federation to fail and causing the remote
+        # server to retry, causing a DoS. So in this scenario we give
+        # up on storing the total list of devices and only handle the
+        # delta instead.
+        if len(devices) > 1000:
+            logger.warn(
+                "Ignoring device list snapshot for %s as it has >1K devs (%d)",
+                user_id,
+                len(devices),
+            )
+            devices = []
+
+        for device in devices:
+            logger.debug(
+                "Handling resync update %r/%r, ID: %r",
+                user_id,
+                device["device_id"],
+                stream_id,
+            )
+
+        yield self.store.update_remote_device_list_cache(user_id, devices, stream_id)
+        device_ids = [device["device_id"] for device in devices]
+        yield self.device_handler.notify_device_update(user_id, device_ids)
+
+        # We clobber the seen updates since we've re-synced from a given
+        # point.
+        self._seen_updates[user_id] = set([stream_id])
+
+        defer.returnValue(result)
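The `user_device_resync` method factored out above keeps the existing ~1000-device safety valve. As a rough, self-contained illustration (the `sanitize_snapshot` helper is hypothetical, not Synapse code), the guard amounts to:

```python
MAX_DEVICES = 1000  # the "~1000" threshold from the comment in the diff


def sanitize_snapshot(result, max_devices=MAX_DEVICES):
    """Return (stream_id, devices), dropping implausibly large device lists.

    Caching a huge per-user device list can stall replication and trigger
    federation retries, so past the threshold we keep only the stream ID
    and fall back to handling deltas.
    """
    stream_id = result["stream_id"]
    devices = result["devices"]
    if len(devices) > max_devices:
        devices = []
    return stream_id, devices


# Oversized snapshot: device list is discarded, stream ID survives.
assert sanitize_snapshot({"stream_id": 5, "devices": [{}] * 2000}) == (5, [])
# Normal snapshot: passed through unchanged.
assert sanitize_snapshot({"stream_id": 7, "devices": [{"device_id": "A"}]}) == (
    7,
    [{"device_id": "A"}],
)
```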

synapse/handlers/directory.py
-----------------------------

@@ -210,7 +210,7 @@ class DirectoryHandler(BaseHandler):
             except AuthError as e:
                 logger.info("Failed to update alias events: %s", e)
 
-        defer.returnValue(room_id)
+        return room_id
 
     @defer.inlineCallbacks
     def delete_appservice_association(self, service, room_alias):
@@ -229,7 +229,7 @@ class DirectoryHandler(BaseHandler):
         room_id = yield self.store.delete_room_alias(room_alias)
-        defer.returnValue(room_id)
+        return room_id
 
     @defer.inlineCallbacks
     def get_association(self, room_alias):
@@ -277,8 +277,7 @@ class DirectoryHandler(BaseHandler):
         else:
             servers = list(servers)
 
-        defer.returnValue({"room_id": room_id, "servers": servers})
-        return
+        return {"room_id": room_id, "servers": servers}
 
     @defer.inlineCallbacks
     def on_directory_query(self, args):
@@ -289,7 +288,7 @@ class DirectoryHandler(BaseHandler):
         result = yield self.get_association_from_room_alias(room_alias)
 
         if result is not None:
-            defer.returnValue({"room_id": result.room_id, "servers": result.servers})
+            return {"room_id": result.room_id, "servers": result.servers}
         else:
             raise SynapseError(
                 404,
@@ -342,7 +341,7 @@ class DirectoryHandler(BaseHandler):
         # Query AS to see if it exists
         as_handler = self.appservice_handler
         result = yield as_handler.query_room_alias_exists(room_alias)
-        defer.returnValue(result)
+        return result
 
     def can_modify_alias(self, alias, user_id=None):
         # Any application service "interested" in an alias they are regexing on
@@ -369,10 +368,10 @@ class DirectoryHandler(BaseHandler):
         creator = yield self.store.get_room_alias_creator(alias.to_string())
 
         if creator is not None and creator == user_id:
-            defer.returnValue(True)
+            return True
 
         is_admin = yield self.auth.is_server_admin(UserID.from_string(user_id))
-        defer.returnValue(is_admin)
+        return is_admin
 
     @defer.inlineCallbacks
     def edit_published_room_list(self, requester, room_id, visibility):

synapse/handlers/e2e_keys.py
----------------------------

@@ -25,6 +25,7 @@ from twisted.internet import defer
 from synapse.api.errors import CodeMessageException, SynapseError
 from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.types import UserID, get_domain_from_id
+from synapse.util import unwrapFirstError
 from synapse.util.retryutils import NotRetryingDestination
 
 logger = logging.getLogger(__name__)
@@ -65,6 +66,7 @@ class E2eKeysHandler(object):
             }
         }
         """
         device_keys_query = query_body.get("device_keys", {})
 
         # separate users by domain.
@@ -121,7 +123,56 @@ class E2eKeysHandler(object):
         # Now fetch any devices that we don't have in our cache
         @defer.inlineCallbacks
         def do_remote_query(destination):
+            """This is called when we are querying the device list of a user on
+            a remote homeserver and their device list is not in the device list
+            cache. If we share a room with this user and we're not querying for
+            specific user we will update the cache with their device list."""
+
             destination_query = remote_queries_not_in_cache[destination]
+
+            # We first consider whether we wish to update the device list cache with
+            # the users device list. We want to track a user's devices when the
+            # authenticated user shares a room with the queried user and the query
+            # has not specified a particular device.
+            # If we update the cache for the queried user we remove them from further
+            # queries. We use the more efficient batched query_client_keys for all
+            # remaining users
+            user_ids_updated = []
+            for (user_id, device_list) in destination_query.items():
+                if user_id in user_ids_updated:
+                    continue
+
+                if device_list:
+                    continue
+
+                room_ids = yield self.store.get_rooms_for_user(user_id)
+                if not room_ids:
+                    continue
+
+                # We've decided we're sharing a room with this user and should
+                # probably be tracking their device lists. However, we haven't
+                # done an initial sync on the device list so we do it now.
+                try:
+                    user_devices = yield self.device_handler.device_list_updater.user_device_resync(
+                        user_id
+                    )
+                    user_devices = user_devices["devices"]
+                    for device in user_devices:
+                        results[user_id] = {device["device_id"]: device["keys"]}
+                    user_ids_updated.append(user_id)
+                except Exception as e:
+                    failures[destination] = _exception_to_failure(e)
+
+            if len(destination_query) == len(user_ids_updated):
+                # We've updated all the users in the query and we do not need to
+                # make any further remote calls.
+                return
+
+            # Remove all the users from the query which we have updated
+            for user_id in user_ids_updated:
+                destination_query.pop(user_id)
+
             try:
                 remote_result = yield self.federation.query_client_keys(
                     destination, {"device_keys": destination_query}, timeout=timeout
@@ -132,7 +183,8 @@ class E2eKeysHandler(object):
                         results[user_id] = keys
             except Exception as e:
-                failures[destination] = _exception_to_failure(e)
+                failure = _exception_to_failure(e)
+                failures[destination] = failure
 
         yield make_deferred_yieldable(
             defer.gatherResults(
@@ -141,10 +193,10 @@ class E2eKeysHandler(object):
                     for destination in remote_queries_not_in_cache
                 ],
                 consumeErrors=True,
-            )
+            ).addErrback(unwrapFirstError)
         )
 
-        defer.returnValue({"device_keys": results, "failures": failures})
+        return {"device_keys": results, "failures": failures}
 
     @defer.inlineCallbacks
     def query_local_devices(self, query):
@@ -189,7 +241,7 @@ class E2eKeysHandler(object):
                 r["unsigned"]["device_display_name"] = display_name
                 result_dict[user_id][device_id] = r
 
-        defer.returnValue(result_dict)
+        return result_dict
 
     @defer.inlineCallbacks
     def on_federation_query_client_keys(self, query_body):
@@ -197,7 +249,7 @@ class E2eKeysHandler(object):
         """
         device_keys_query = query_body.get("device_keys", {})
         res = yield self.query_local_devices(device_keys_query)
-        defer.returnValue({"device_keys": res})
+        return {"device_keys": res}
 
     @defer.inlineCallbacks
     def claim_one_time_keys(self, query, timeout):
@@ -234,8 +286,10 @@ class E2eKeysHandler(object):
                 for user_id, keys in remote_result["one_time_keys"].items():
                     if user_id in device_keys:
                         json_result[user_id] = keys
             except Exception as e:
-                failures[destination] = _exception_to_failure(e)
+                failure = _exception_to_failure(e)
+                failures[destination] = failure
 
         yield make_deferred_yieldable(
             defer.gatherResults(
@@ -259,10 +313,11 @@ class E2eKeysHandler(object):
             ),
         )
 
-        defer.returnValue({"one_time_keys": json_result, "failures": failures})
+        return {"one_time_keys": json_result, "failures": failures}
 
     @defer.inlineCallbacks
     def upload_keys_for_user(self, user_id, device_id, keys):
         time_now = self.clock.time_msec()
 
         # TODO: Validate the JSON to make sure it has the right keys.
@@ -297,7 +352,7 @@ class E2eKeysHandler(object):
         result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
 
-        defer.returnValue({"one_time_key_counts": result})
+        return {"one_time_key_counts": result}
 
     @defer.inlineCallbacks
     def _upload_one_time_keys_for_user(
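The new `do_remote_query` logic above effectively partitions each destination's query: users we share a room with, and for whom no specific devices were requested, get a cache-populating `user_device_resync`; everyone else stays in the batched `query_client_keys` call. A simplified sketch of that split (the `split_query` helper is hypothetical and ignores the error handling and cache write-back):

```python
def split_query(destination_query, shared_room_users):
    """Partition a {user_id: device_list} query for one destination.

    Returns (resync_candidates, remaining_batch_query).
    """
    resync = []
    batch = {}
    for user_id, device_list in destination_query.items():
        # Only full-device-list queries for users we share a room with
        # are worth seeding the device list cache for.
        if not device_list and user_id in shared_room_users:
            resync.append(user_id)
        else:
            batch[user_id] = device_list
    return resync, batch


resync, batch = split_query(
    {"@a:remote": [], "@b:remote": ["DEV1"], "@c:remote": []},
    shared_room_users={"@a:remote"},
)
assert resync == ["@a:remote"]
assert batch == {"@b:remote": ["DEV1"], "@c:remote": []}
```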

synapse/handlers/e2e_room_keys.py
---------------------------------

@@ -84,7 +84,7 @@ class E2eRoomKeysHandler(object):
                 user_id, version, room_id, session_id
             )
 
-            defer.returnValue(results)
+            return results
 
     @defer.inlineCallbacks
     def delete_room_keys(self, user_id, version, room_id=None, session_id=None):
@@ -262,7 +262,7 @@ class E2eRoomKeysHandler(object):
             new_version = yield self.store.create_e2e_room_keys_version(
                 user_id, version_info
             )
-            defer.returnValue(new_version)
+            return new_version
 
     @defer.inlineCallbacks
     def get_version_info(self, user_id, version=None):
@@ -292,7 +292,7 @@ class E2eRoomKeysHandler(object):
                     raise NotFoundError("Unknown backup version")
                 else:
                     raise
-            defer.returnValue(res)
+            return res
 
     @defer.inlineCallbacks
     def delete_version(self, user_id, version=None):
@@ -350,4 +350,4 @@ class E2eRoomKeysHandler(object):
                 user_id, version, version_info
             )
-            defer.returnValue({})
+            return {}

synapse/handlers/events.py
--------------------------

@@ -143,7 +143,7 @@ class EventStreamHandler(BaseHandler):
                 "end": tokens[1].to_string(),
             }
 
-            defer.returnValue(chunk)
+            return chunk
 
 
 class EventHandler(BaseHandler):
@@ -166,7 +166,7 @@ class EventHandler(BaseHandler):
         event = yield self.store.get_event(event_id, check_room_id=room_id)
 
         if not event:
-            defer.returnValue(None)
+            return None
             return
 
         users = yield self.store.get_users_in_room(event.room_id)
@@ -179,4 +179,4 @@ class EventHandler(BaseHandler):
         if not filtered:
             raise AuthError(403, "You don't have permission to access that event.")
 
-        defer.returnValue(event)
+        return event

View File

@ -210,7 +210,7 @@ class FederationHandler(BaseHandler):
event_id, event_id,
origin, origin,
) )
defer.returnValue(None) return None
state = None state = None
auth_chain = [] auth_chain = []
@ -676,7 +676,7 @@ class FederationHandler(BaseHandler):
events = [e for e in events if e.event_id not in seen_events] events = [e for e in events if e.event_id not in seen_events]
if not events: if not events:
defer.returnValue([]) return []
event_map = {e.event_id: e for e in events} event_map = {e.event_id: e for e in events}
@ -838,7 +838,7 @@ class FederationHandler(BaseHandler):
# TODO: We can probably do something more clever here. # TODO: We can probably do something more clever here.
yield self._handle_new_event(dest, event, backfilled=True) yield self._handle_new_event(dest, event, backfilled=True)
defer.returnValue(events) return events
@defer.inlineCallbacks @defer.inlineCallbacks
def maybe_backfill(self, room_id, current_depth): def maybe_backfill(self, room_id, current_depth):
@ -894,7 +894,7 @@ class FederationHandler(BaseHandler):
) )
if not filtered_extremities: if not filtered_extremities:
defer.returnValue(False) return False
# Check if we reached a point where we should start backfilling. # Check if we reached a point where we should start backfilling.
sorted_extremeties_tuple = sorted(extremities.items(), key=lambda e: -int(e[1])) sorted_extremeties_tuple = sorted(extremities.items(), key=lambda e: -int(e[1]))
@ -965,7 +965,7 @@ class FederationHandler(BaseHandler):
# If this succeeded then we probably already have the # If this succeeded then we probably already have the
# appropriate stuff. # appropriate stuff.
# TODO: We can probably do something more intelligent here. # TODO: We can probably do something more intelligent here.
defer.returnValue(True) return True
except SynapseError as e: except SynapseError as e:
logger.info("Failed to backfill from %s because %s", dom, e) logger.info("Failed to backfill from %s because %s", dom, e)
continue continue
@ -978,6 +978,9 @@ class FederationHandler(BaseHandler):
except NotRetryingDestination as e: except NotRetryingDestination as e:
logger.info(str(e)) logger.info(str(e))
continue continue
except RequestSendFailed as e:
logger.info("Falied to get backfill from %s because %s", dom, e)
continue
except FederationDeniedError as e: except FederationDeniedError as e:
logger.info(e) logger.info(e)
continue continue
@ -985,11 +988,11 @@ class FederationHandler(BaseHandler):
logger.exception("Failed to backfill from %s because %s", dom, e) logger.exception("Failed to backfill from %s because %s", dom, e)
continue continue
defer.returnValue(False) return False
success = yield try_backfill(likely_domains) success = yield try_backfill(likely_domains)
if success: if success:
defer.returnValue(True) return True
# Huh, well *those* domains didn't work out. Lets try some domains # Huh, well *those* domains didn't work out. Lets try some domains
# from the time. # from the time.
@ -1031,11 +1034,11 @@ class FederationHandler(BaseHandler):
[dom for dom, _ in likely_domains if dom not in tried_domains] [dom for dom, _ in likely_domains if dom not in tried_domains]
) )
if success: if success:
defer.returnValue(True) return True
tried_domains.update(dom for dom, _ in likely_domains) tried_domains.update(dom for dom, _ in likely_domains)
defer.returnValue(False) return False
     def _sanity_check_event(self, ev):
         """
@@ -1082,7 +1085,7 @@ class FederationHandler(BaseHandler):
             pdu=event,
         )

-        defer.returnValue(pdu)
+        return pdu

     @defer.inlineCallbacks
     def on_event_auth(self, event_id):
@@ -1090,7 +1093,7 @@ class FederationHandler(BaseHandler):
         auth = yield self.store.get_auth_chain(
             [auth_id for auth_id in event.auth_event_ids()], include_given=True
         )
-        defer.returnValue([e for e in auth])
+        return [e for e in auth]

     @log_function
     @defer.inlineCallbacks
@@ -1177,7 +1180,7 @@ class FederationHandler(BaseHandler):
             run_in_background(self._handle_queued_pdus, room_queue)

-        defer.returnValue(True)
+        return True

     @defer.inlineCallbacks
     def _handle_queued_pdus(self, room_queue):
@@ -1264,7 +1267,7 @@ class FederationHandler(BaseHandler):
             room_version, event, context, do_sig_check=False
         )

-        defer.returnValue(event)
+        return event

     @defer.inlineCallbacks
     @log_function
@@ -1325,7 +1328,7 @@ class FederationHandler(BaseHandler):
         state = yield self.store.get_events(list(prev_state_ids.values()))

-        defer.returnValue({"state": list(state.values()), "auth_chain": auth_chain})
+        return {"state": list(state.values()), "auth_chain": auth_chain}

     @defer.inlineCallbacks
     def on_invite_request(self, origin, pdu):
@@ -1381,7 +1384,7 @@ class FederationHandler(BaseHandler):
         context = yield self.state_handler.compute_event_context(event)
         yield self.persist_events_and_notify([(event, context)])

-        defer.returnValue(event)
+        return event

     @defer.inlineCallbacks
     def do_remotely_reject_invite(self, target_hosts, room_id, user_id):
@@ -1406,7 +1409,7 @@ class FederationHandler(BaseHandler):
         context = yield self.state_handler.compute_event_context(event)
         yield self.persist_events_and_notify([(event, context)])

-        defer.returnValue(event)
+        return event

     @defer.inlineCallbacks
     def _make_and_verify_event(
@@ -1424,7 +1427,7 @@ class FederationHandler(BaseHandler):
         assert event.user_id == user_id
         assert event.state_key == user_id
         assert event.room_id == room_id
-        defer.returnValue((origin, event, format_ver))
+        return (origin, event, format_ver)

     @defer.inlineCallbacks
     @log_function
@@ -1484,7 +1487,7 @@ class FederationHandler(BaseHandler):
             logger.warn("Failed to create new leave %r because %s", event, e)
             raise e

-        defer.returnValue(event)
+        return event

     @defer.inlineCallbacks
     @log_function
@@ -1517,7 +1520,7 @@ class FederationHandler(BaseHandler):
             event.signatures,
         )

-        defer.returnValue(None)
+        return None

     @defer.inlineCallbacks
     def get_state_for_pdu(self, room_id, event_id):
@@ -1545,9 +1548,9 @@ class FederationHandler(BaseHandler):
                     del results[(event.type, event.state_key)]

             res = list(results.values())
-            defer.returnValue(res)
+            return res
         else:
-            defer.returnValue([])
+            return []

     @defer.inlineCallbacks
     def get_state_ids_for_pdu(self, room_id, event_id):
@@ -1572,9 +1575,9 @@ class FederationHandler(BaseHandler):
                 else:
                     results.pop((event.type, event.state_key), None)

-            defer.returnValue(list(results.values()))
+            return list(results.values())
         else:
-            defer.returnValue([])
+            return []

     @defer.inlineCallbacks
     @log_function
@@ -1587,7 +1590,7 @@ class FederationHandler(BaseHandler):
         events = yield filter_events_for_server(self.store, origin, events)

-        defer.returnValue(events)
+        return events

     @defer.inlineCallbacks
     @log_function
@@ -1617,9 +1620,9 @@ class FederationHandler(BaseHandler):
             events = yield filter_events_for_server(self.store, origin, [event])
             event = events[0]
-            defer.returnValue(event)
+            return event
         else:
-            defer.returnValue(None)
+            return None

     def get_min_depth_for_context(self, context):
         return self.store.get_min_depth(context)
@@ -1651,7 +1654,7 @@ class FederationHandler(BaseHandler):
                 self.store.remove_push_actions_from_staging, event.event_id
             )

-        defer.returnValue(context)
+        return context

     @defer.inlineCallbacks
     def _handle_new_events(self, origin, event_infos, backfilled=False):
@@ -1674,7 +1677,7 @@ class FederationHandler(BaseHandler):
                     auth_events=ev_info.get("auth_events"),
                     backfilled=backfilled,
                 )
-            defer.returnValue(res)
+            return res

         contexts = yield make_deferred_yieldable(
             defer.gatherResults(
@@ -1833,7 +1836,7 @@ class FederationHandler(BaseHandler):
         if event.type == EventTypes.GuestAccess and not context.rejected:
             yield self.maybe_kick_guest_users(event)

-        defer.returnValue(context)
+        return context

     @defer.inlineCallbacks
     def _check_for_soft_fail(self, event, state, backfilled):
@@ -1952,7 +1955,7 @@ class FederationHandler(BaseHandler):
         logger.debug("on_query_auth returning: %s", ret)

-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def on_get_missing_events(
@@ -1975,7 +1978,7 @@ class FederationHandler(BaseHandler):
             self.store, origin, missing_events
         )

-        defer.returnValue(missing_events)
+        return missing_events

     @defer.inlineCallbacks
     @log_function
@@ -2451,8 +2454,7 @@ class FederationHandler(BaseHandler):
         logger.debug("construct_auth_difference returning")

-        defer.returnValue(
-            {
+        return {
             "auth_chain": local_auth,
             "rejects": {
                 e.event_id: {"reason": reason_map[e.event_id], "proof": None}
@@ -2460,7 +2462,6 @@ class FederationHandler(BaseHandler):
             },
             "missing": [e.event_id for e in missing_locals],
         }
-        )
     @defer.inlineCallbacks
     @log_function
@@ -2608,7 +2609,7 @@ class FederationHandler(BaseHandler):
             builder=builder
         )
         EventValidator().validate_new(event)
-        defer.returnValue((event, context))
+        return (event, context)

     @defer.inlineCallbacks
     def _check_signature(self, event, context):
@@ -2798,3 +2799,28 @@ class FederationHandler(BaseHandler):
             )
         else:
             return user_joined_room(self.distributor, user, room_id)
+
+    @defer.inlineCallbacks
+    def get_room_complexity(self, remote_room_hosts, room_id):
+        """
+        Fetch the complexity of a remote room over federation.
+
+        Args:
+            remote_room_hosts (list[str]): The remote servers to ask.
+            room_id (str): The room ID to ask about.
+
+        Returns:
+            Deferred[dict] or Deferred[None]: Dict contains the complexity
+            metric versions, while None means we could not fetch the complexity.
+        """
+        for host in remote_room_hosts:
+            res = yield self.federation_client.get_room_complexity(host, room_id)
+
+            # We got a result, return it.
+            if res:
+                defer.returnValue(res)
+
+        # We fell off the bottom, couldn't get the complexity from anyone. Oh
+        # well.
+        defer.returnValue(None)
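Nearly every hunk in this diff swaps `defer.returnValue(x)` for a plain `return x` inside an `@defer.inlineCallbacks` generator. On Python 2, generators could not return values, so Twisted smuggled the result out by raising a special exception; on Python 3, `return value` inside a generator raises `StopIteration(value)`, which `inlineCallbacks` unwraps the same way. A minimal pure-Python sketch of the mechanism — the `run` driver and `return_value` helper below are illustrative stand-ins, not Twisted or Synapse code:

```python
class _Return(Exception):
    """Mimics how defer.returnValue smuggles a value out on Python 2."""
    def __init__(self, value):
        self.value = value

def return_value(value):
    # Old-style exit: raise the result out of the generator.
    raise _Return(value)

def run(gen):
    """Drive a generator to completion, loosely like defer.inlineCallbacks."""
    try:
        result = None
        while True:
            # Feed each yielded value straight back in, as if it were an
            # already-fired Deferred.
            result = gen.send(result)
    except _Return as e:        # old style: value raised out of the generator
        return e.value
    except StopIteration as e:  # new style: plain `return value` (Python 3)
        return e.value

def old_style():
    x = yield 21
    return_value(x * 2)

def new_style():
    x = yield 21
    return x * 2

print(run(old_style()))  # 42
print(run(new_style()))  # 42
```

Both styles produce the same result, which is why the migration is a mechanical line-for-line substitution.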


@@ -126,9 +126,12 @@ class GroupsLocalHandler(object):
                 group_id, requester_user_id
             )
         else:
-            res = yield self.transport_client.get_group_summary(
-                get_domain_from_id(group_id), group_id, requester_user_id
-            )
+            try:
+                res = yield self.transport_client.get_group_summary(
+                    get_domain_from_id(group_id), group_id, requester_user_id
+                )
+            except RequestSendFailed:
+                raise SynapseError(502, "Failed to contact group server")

         group_server_name = get_domain_from_id(group_id)
@@ -162,7 +165,7 @@ class GroupsLocalHandler(object):
         res.setdefault("user", {})["is_publicised"] = is_publicised

-        defer.returnValue(res)
+        return res

     @defer.inlineCallbacks
     def create_group(self, group_id, user_id, content):
@@ -183,9 +186,12 @@ class GroupsLocalHandler(object):
             content["user_profile"] = yield self.profile_handler.get_profile(user_id)

-            res = yield self.transport_client.create_group(
-                get_domain_from_id(group_id), group_id, user_id, content
-            )
+            try:
+                res = yield self.transport_client.create_group(
+                    get_domain_from_id(group_id), group_id, user_id, content
+                )
+            except RequestSendFailed:
+                raise SynapseError(502, "Failed to contact group server")

             remote_attestation = res["attestation"]
             yield self.attestations.verify_attestation(
@@ -207,7 +213,7 @@ class GroupsLocalHandler(object):
         )
         self.notifier.on_new_event("groups_key", token, users=[user_id])

-        defer.returnValue(res)
+        return res

     @defer.inlineCallbacks
     def get_users_in_group(self, group_id, requester_user_id):
@@ -217,13 +223,16 @@ class GroupsLocalHandler(object):
             res = yield self.groups_server_handler.get_users_in_group(
                 group_id, requester_user_id
             )
-            defer.returnValue(res)
+            return res

         group_server_name = get_domain_from_id(group_id)

-        res = yield self.transport_client.get_users_in_group(
-            get_domain_from_id(group_id), group_id, requester_user_id
-        )
+        try:
+            res = yield self.transport_client.get_users_in_group(
+                get_domain_from_id(group_id), group_id, requester_user_id
+            )
+        except RequestSendFailed:
+            raise SynapseError(502, "Failed to contact group server")

         chunk = res["chunk"]
         valid_entries = []
@@ -244,7 +253,7 @@ class GroupsLocalHandler(object):
         res["chunk"] = valid_entries

-        defer.returnValue(res)
+        return res

     @defer.inlineCallbacks
     def join_group(self, group_id, user_id, content):
@@ -258,9 +267,12 @@ class GroupsLocalHandler(object):
         local_attestation = self.attestations.create_attestation(group_id, user_id)
         content["attestation"] = local_attestation

-        res = yield self.transport_client.join_group(
-            get_domain_from_id(group_id), group_id, user_id, content
-        )
+        try:
+            res = yield self.transport_client.join_group(
+                get_domain_from_id(group_id), group_id, user_id, content
+            )
+        except RequestSendFailed:
+            raise SynapseError(502, "Failed to contact group server")

         remote_attestation = res["attestation"]
@@ -285,7 +297,7 @@ class GroupsLocalHandler(object):
         )
         self.notifier.on_new_event("groups_key", token, users=[user_id])

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def accept_invite(self, group_id, user_id, content):
@@ -299,9 +311,12 @@ class GroupsLocalHandler(object):
         local_attestation = self.attestations.create_attestation(group_id, user_id)
         content["attestation"] = local_attestation

-        res = yield self.transport_client.accept_group_invite(
-            get_domain_from_id(group_id), group_id, user_id, content
-        )
+        try:
+            res = yield self.transport_client.accept_group_invite(
+                get_domain_from_id(group_id), group_id, user_id, content
+            )
+        except RequestSendFailed:
+            raise SynapseError(502, "Failed to contact group server")

         remote_attestation = res["attestation"]
@@ -326,7 +341,7 @@ class GroupsLocalHandler(object):
         )
         self.notifier.on_new_event("groups_key", token, users=[user_id])

-        defer.returnValue({})
+        return {}

     @defer.inlineCallbacks
     def invite(self, group_id, user_id, requester_user_id, config):
@@ -338,6 +353,7 @@ class GroupsLocalHandler(object):
                 group_id, user_id, requester_user_id, content
             )
         else:
+            try:
                 res = yield self.transport_client.invite_to_group(
                     get_domain_from_id(group_id),
                     group_id,
@@ -345,8 +361,10 @@ class GroupsLocalHandler(object):
                     requester_user_id,
                     content,
                 )
+            except RequestSendFailed:
+                raise SynapseError(502, "Failed to contact group server")

-        defer.returnValue(res)
+        return res

     @defer.inlineCallbacks
     def on_invite(self, group_id, user_id, content):
@@ -377,7 +395,7 @@ class GroupsLocalHandler(object):
             logger.warn("No profile for user %s: %s", user_id, e)
             user_profile = {}

-        defer.returnValue({"state": "invite", "user_profile": user_profile})
+        return {"state": "invite", "user_profile": user_profile}

     @defer.inlineCallbacks
     def remove_user_from_group(self, group_id, user_id, requester_user_id, content):
@@ -398,6 +416,7 @@ class GroupsLocalHandler(object):
             )
         else:
             content["requester_user_id"] = requester_user_id
+            try:
                 res = yield self.transport_client.remove_user_from_group(
                     get_domain_from_id(group_id),
                     group_id,
@@ -405,8 +424,10 @@ class GroupsLocalHandler(object):
                     user_id,
                     content,
                 )
+            except RequestSendFailed:
+                raise SynapseError(502, "Failed to contact group server")

-        defer.returnValue(res)
+        return res

     @defer.inlineCallbacks
     def user_removed_from_group(self, group_id, user_id, content):
@@ -421,7 +442,7 @@ class GroupsLocalHandler(object):
     @defer.inlineCallbacks
     def get_joined_groups(self, user_id):
         group_ids = yield self.store.get_joined_groups(user_id)
-        defer.returnValue({"groups": group_ids})
+        return {"groups": group_ids}

     @defer.inlineCallbacks
     def get_publicised_groups_for_user(self, user_id):
@@ -433,14 +454,18 @@ class GroupsLocalHandler(object):
             for app_service in self.store.get_app_services():
                 result.extend(app_service.get_groups_for_user(user_id))

-            defer.returnValue({"groups": result})
+            return {"groups": result}
         else:
-            bulk_result = yield self.transport_client.bulk_get_publicised_groups(
-                get_domain_from_id(user_id), [user_id]
-            )
+            try:
+                bulk_result = yield self.transport_client.bulk_get_publicised_groups(
+                    get_domain_from_id(user_id), [user_id]
+                )
+            except RequestSendFailed:
+                raise SynapseError(502, "Failed to contact group server")

             result = bulk_result.get("users", {}).get(user_id)
             # TODO: Verify attestations
-            defer.returnValue({"groups": result})
+            return {"groups": result}

     @defer.inlineCallbacks
     def bulk_get_publicised_groups(self, user_ids, proxy=True):
@@ -475,4 +500,4 @@ class GroupsLocalHandler(object):
             for app_service in self.store.get_app_services():
                 results[uid].extend(app_service.get_groups_for_user(uid))

-        defer.returnValue({"users": results})
+        return {"users": results}
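The groups changes above all follow a single pattern: each `transport_client` call is wrapped in `try`/`except RequestSendFailed`, re-raising as a `SynapseError` with status 502, so an unreachable remote group server produces a clean "Bad Gateway" response instead of an unhandled 500. A minimal sketch of the pattern, using stand-in exception classes (not Synapse's real implementations):

```python
class RequestSendFailed(Exception):
    """Stand-in: a federation request could not be sent at all."""

class SynapseError(Exception):
    """Stand-in: an error carrying an HTTP status code."""
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code

def get_group_summary(transport_call):
    # Translate a transport-level failure into an HTTP-level one.
    try:
        return transport_call()
    except RequestSendFailed:
        raise SynapseError(502, "Failed to contact group server")

def unreachable_server():
    raise RequestSendFailed("connection refused")

try:
    get_group_summary(unreachable_server)
except SynapseError as e:
    print(e.code)  # 502
```

502 is the appropriate status here because the local homeserver is acting as a gateway to the remote group server, and the upstream, not the client's request, is at fault.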


@@ -82,7 +82,7 @@ class IdentityHandler(BaseHandler):
                 "%s is not a trusted ID server: rejecting 3pid " + "credentials",
                 id_server,
             )
-            defer.returnValue(None)
+            return None

         try:
             data = yield self.http_client.get_json(
@@ -95,8 +95,8 @@ class IdentityHandler(BaseHandler):
             raise e.to_synapse_error()

         if "medium" in data:
-            defer.returnValue(data)
-        defer.returnValue(None)
+            return data
+        return None

     @defer.inlineCallbacks
     def bind_threepid(self, creds, mxid):
@@ -133,7 +133,7 @@ class IdentityHandler(BaseHandler):
             )
         except CodeMessageException as e:
             data = json.loads(e.msg)  # XXX WAT?
-        defer.returnValue(data)
+        return data

     @defer.inlineCallbacks
     def try_unbind_threepid(self, mxid, threepid):
@@ -161,7 +161,7 @@ class IdentityHandler(BaseHandler):
         # We don't know where to unbind, so we don't have a choice but to return
         if not id_servers:
-            defer.returnValue(False)
+            return False

         changed = True
         for id_server in id_servers:
@@ -169,7 +169,7 @@ class IdentityHandler(BaseHandler):
                 mxid, threepid, id_server
             )

-        defer.returnValue(changed)
+        return changed

     @defer.inlineCallbacks
     def try_unbind_threepid_with_id_server(self, mxid, threepid, id_server):
@@ -224,7 +224,7 @@ class IdentityHandler(BaseHandler):
             id_server=id_server,
         )

-        defer.returnValue(changed)
+        return changed

     @defer.inlineCallbacks
     def requestEmailToken(
@@ -250,7 +250,7 @@ class IdentityHandler(BaseHandler):
                 % (id_server, "/_matrix/identity/api/v1/validate/email/requestToken"),
                 params,
             )
-            defer.returnValue(data)
+            return data
         except HttpResponseException as e:
             logger.info("Proxied requestToken failed: %r", e)
             raise e.to_synapse_error()
@@ -278,7 +278,7 @@ class IdentityHandler(BaseHandler):
                 % (id_server, "/_matrix/identity/api/v1/validate/msisdn/requestToken"),
                 params,
             )
-            defer.returnValue(data)
+            return data
         except HttpResponseException as e:
             logger.info("Proxied requestToken failed: %r", e)
             raise e.to_synapse_error()


@@ -250,7 +250,7 @@ class InitialSyncHandler(BaseHandler):
             "end": now_token.to_string(),
         }

-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def room_initial_sync(self, requester, room_id, pagin_config=None):
@@ -301,7 +301,7 @@ class InitialSyncHandler(BaseHandler):
         result["account_data"] = account_data_events

-        defer.returnValue(result)
+        return result

     @defer.inlineCallbacks
     def _room_initial_sync_parted(
@@ -330,15 +330,12 @@ class InitialSyncHandler(BaseHandler):
         time_now = self.clock.time_msec()

-        defer.returnValue(
-            {
+        return {
             "membership": membership,
             "room_id": room_id,
             "messages": {
                 "chunk": (
-                    yield self._event_serializer.serialize_events(
-                        messages, time_now
-                    )
+                    yield self._event_serializer.serialize_events(messages, time_now)
                 ),
                 "start": start_token.to_string(),
                 "end": end_token.to_string(),
@@ -351,7 +348,6 @@ class InitialSyncHandler(BaseHandler):
             "presence": [],
             "receipts": [],
         }
-        )

     @defer.inlineCallbacks
     def _room_initial_sync_joined(
@@ -384,13 +380,13 @@ class InitialSyncHandler(BaseHandler):
         def get_presence():
             # If presence is disabled, return an empty list
             if not self.hs.config.use_presence:
-                defer.returnValue([])
+                return []

             states = yield presence_handler.get_states(
                 [m.user_id for m in room_members], as_event=True
             )

-            defer.returnValue(states)
+            return states

         @defer.inlineCallbacks
         def get_receipts():
@@ -399,7 +395,7 @@ class InitialSyncHandler(BaseHandler):
             )
             if not receipts:
                 receipts = []
-            defer.returnValue(receipts)
+            return receipts

         presence, receipts, (messages, token) = yield make_deferred_yieldable(
             defer.gatherResults(
@@ -442,7 +438,7 @@ class InitialSyncHandler(BaseHandler):
         if not is_peeking:
             ret["membership"] = membership

-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def _check_in_room_or_world_readable(self, room_id, user_id):
@@ -453,7 +449,7 @@ class InitialSyncHandler(BaseHandler):
             # * The user is a guest user, and has joined the room
             # else it will throw.
             member_event = yield self.auth.check_user_was_in_room(room_id, user_id)
-            defer.returnValue((member_event.membership, member_event.event_id))
+            return (member_event.membership, member_event.event_id)
             return
         except AuthError:
             visibility = yield self.state_handler.get_current_state(
@@ -463,7 +459,7 @@ class InitialSyncHandler(BaseHandler):
                 visibility
                 and visibility.content["history_visibility"] == "world_readable"
             ):
-                defer.returnValue((Membership.JOIN, None))
+                return (Membership.JOIN, None)
                 return
             raise AuthError(
                 403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN


@@ -87,7 +87,7 @@ class MessageHandler(object):
             )
             data = room_state[membership_event_id].get(key)

-        defer.returnValue(data)
+        return data

     @defer.inlineCallbacks
     def get_state_events(
@@ -174,7 +174,7 @@ class MessageHandler(object):
             # events, as clients won't use them.
             bundle_aggregations=False,
         )
-        defer.returnValue(events)
+        return events

     @defer.inlineCallbacks
     def get_joined_members(self, requester, room_id):
@@ -213,15 +213,13 @@ class MessageHandler(object):
             # Loop fell through, AS has no interested users in room
             raise AuthError(403, "Appservice not in room")

-        defer.returnValue(
-            {
-                user_id: {
-                    "avatar_url": profile.avatar_url,
-                    "display_name": profile.display_name,
-                }
-                for user_id, profile in iteritems(users_with_profile)
-            }
-        )
+        return {
+            user_id: {
+                "avatar_url": profile.avatar_url,
+                "display_name": profile.display_name,
+            }
+            for user_id, profile in iteritems(users_with_profile)
+        }


 class EventCreationHandler(object):
@@ -380,7 +378,11 @@ class EventCreationHandler(object):
             # tolerate them in event_auth.check().
             prev_state_ids = yield context.get_prev_state_ids(self.store)
             prev_event_id = prev_state_ids.get((EventTypes.Member, event.sender))
-            prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
+            prev_event = (
+                yield self.store.get_event(prev_event_id, allow_none=True)
+                if prev_event_id
+                else None
+            )
             if not prev_event or prev_event.membership != Membership.JOIN:
                 logger.warning(
                     (
@@ -398,7 +400,7 @@ class EventCreationHandler(object):
         self.validator.validate_new(event)

-        defer.returnValue((event, context))
+        return (event, context)

     def _is_exempt_from_privacy_policy(self, builder, requester):
         """"Determine if an event to be sent is exempt from having to consent
@@ -425,9 +427,9 @@ class EventCreationHandler(object):
     @defer.inlineCallbacks
     def _is_server_notices_room(self, room_id):
         if self.config.server_notices_mxid is None:
-            defer.returnValue(False)
+            return False
         user_ids = yield self.store.get_users_in_room(room_id)
-        defer.returnValue(self.config.server_notices_mxid in user_ids)
+        return self.config.server_notices_mxid in user_ids

     @defer.inlineCallbacks
     def assert_accepted_privacy_policy(self, requester):
@@ -507,7 +509,7 @@ class EventCreationHandler(object):
                     event.event_id,
                     prev_state.event_id,
                 )
-                defer.returnValue(prev_state)
+                return prev_state

         yield self.handle_new_client_event(
             requester=requester, event=event, context=context, ratelimit=ratelimit
@@ -523,6 +525,8 @@ class EventCreationHandler(object):
         """
         prev_state_ids = yield context.get_prev_state_ids(self.store)
         prev_event_id = prev_state_ids.get((event.type, event.state_key))
+        if not prev_event_id:
+            return
         prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
         if not prev_event:
             return
@@ -531,7 +535,7 @@ class EventCreationHandler(object):
         prev_content = encode_canonical_json(prev_event.content)
         next_content = encode_canonical_json(event.content)
         if prev_content == next_content:
-            defer.returnValue(prev_event)
+            return prev_event
         return

     @defer.inlineCallbacks
@@ -563,7 +567,7 @@ class EventCreationHandler(object):
         yield self.send_nonmember_event(
             requester, event, context, ratelimit=ratelimit
         )
-        defer.returnValue(event)
+        return event

     @measure_func("create_new_client_event")
     @defer.inlineCallbacks
@@ -626,7 +630,7 @@ class EventCreationHandler(object):
         logger.debug("Created event %s", event.event_id)

-        defer.returnValue((event, context))
+        return (event, context)

     @measure_func("handle_new_client_event")
     @defer.inlineCallbacks
@@ -791,7 +795,6 @@ class EventCreationHandler(object):
                 get_prev_content=False,
                 allow_rejected=False,
                 allow_none=True,
-                check_room_id=event.room_id,
             )

             # we can make some additional checks now if we have the original event.
@@ -799,6 +802,9 @@ class EventCreationHandler(object):
                 if original_event.type == EventTypes.Create:
                     raise AuthError(403, "Redacting create events is not permitted")

+                if original_event.room_id != event.room_id:
+                    raise SynapseError(400, "Cannot redact event from a different room")
+
             prev_state_ids = yield context.get_prev_state_ids(self.store)
             auth_events_ids = yield self.auth.compute_auth_events(
                 event, prev_state_ids, for_verification=True
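One hunk above keeps the existing no-op detection for state events: the old and new event contents are both run through `encode_canonical_json` and compared byte-for-byte, so a resent state event with identical content is deduplicated rather than persisted again. A rough sketch of that comparison, using `json.dumps(..., sort_keys=True)` as a stand-in for the `canonicaljson` library Synapse actually uses:

```python
import json

def is_noop_update(prev_content, new_content):
    """Return True if a new state event would not change the stored content.

    json.dumps with sorted keys and tight separators stands in here for
    canonicaljson.encode_canonical_json: both produce a deterministic
    encoding, so key order in the input dicts does not matter.
    """
    def canon(content):
        return json.dumps(content, sort_keys=True, separators=(",", ":"))

    return canon(prev_content) == canon(new_content)

print(is_noop_update({"name": "Room", "topic": "x"},
                     {"topic": "x", "name": "Room"}))  # True
print(is_noop_update({"name": "Room"}, {"name": "Other"}))  # False
```

Comparing canonical encodings rather than the dicts themselves mirrors the original code's approach; for plain Python dicts the `==` operator would also work, but canonical JSON is what the surrounding code already has in hand.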


@@ -242,13 +242,11 @@ class PaginationHandler(object):
             )

             if not events:
-                defer.returnValue(
-                    {
-                        "chunk": [],
-                        "start": pagin_config.from_token.to_string(),
-                        "end": next_token.to_string(),
-                    }
-                )
+                return {
+                    "chunk": [],
+                    "start": pagin_config.from_token.to_string(),
+                    "end": next_token.to_string(),
+                }

         state = None
         if event_filter and event_filter.lazy_load_members() and len(events) > 0:
@@ -286,4 +284,4 @@ class PaginationHandler(object):
             )
         )

-        defer.returnValue(chunk)
+        return chunk


@@ -333,7 +333,7 @@ class PresenceHandler(object):
         """Checks the presence of users that have timed out and updates as
         appropriate.
         """
-        logger.info("Handling presence timeouts")
+        logger.debug("Handling presence timeouts")
         now = self.clock.time_msec()

         # Fetch the list of users that *may* have timed out. Things may have
@@ -461,7 +461,7 @@ class PresenceHandler(object):
         if affect_presence:
             run_in_background(_end)

-        defer.returnValue(_user_syncing())
+        return _user_syncing()

     def get_currently_syncing_users(self):
         """Get the set of user ids that are currently syncing on this HS.
@@ -556,7 +556,7 @@ class PresenceHandler(object):
         """Get the current presence state for a user.
         """
         res = yield self.current_state_for_users([user_id])
-        defer.returnValue(res[user_id])
+        return res[user_id]

     @defer.inlineCallbacks
     def current_state_for_users(self, user_ids):
@@ -585,7 +585,7 @@ class PresenceHandler(object):
             states.update(new)
             self.user_to_current_state.update(new)

-        defer.returnValue(states)
+        return states

     @defer.inlineCallbacks
     def _persist_and_notify(self, states):
@@ -681,7 +681,7 @@ class PresenceHandler(object):
     def get_state(self, target_user, as_event=False):
         results = yield self.get_states([target_user.to_string()], as_event=as_event)

-        defer.returnValue(results[0])
+        return results[0]

     @defer.inlineCallbacks
     def get_states(self, target_user_ids, as_event=False):
@@ -703,17 +703,15 @@ class PresenceHandler(object):
         now = self.clock.time_msec()

         if as_event:
-            defer.returnValue(
-                [
-                    {
-                        "type": "m.presence",
-                        "content": format_user_presence_state(state, now),
-                    }
-                    for state in updates
-                ]
-            )
+            return [
+                {
+                    "type": "m.presence",
+                    "content": format_user_presence_state(state, now),
+                }
+                for state in updates
+            ]
         else:
-            defer.returnValue(updates)
+            return updates

     @defer.inlineCallbacks
     def set_state(self, target_user, state, ignore_status_msg=False):
@@ -757,9 +755,9 @@ class PresenceHandler(object):
         )

         if observer_room_ids & observed_room_ids:
-            defer.returnValue(True)
+            return True

-        defer.returnValue(False)
+        return False

     @defer.inlineCallbacks
     def get_all_presence_updates(self, last_id, current_id):
@@ -778,7 +776,7 @@ class PresenceHandler(object):
         # TODO(markjh): replicate the unpersisted changes.
         # This could use the in-memory stores for recent changes.
         rows = yield self.store.get_all_presence_updates(last_id, current_id)
-        defer.returnValue(rows)
+        return rows

     def notify_new_event(self):
         """Called when new events have happened. Handles users and servers
@@ -1034,7 +1032,7 @@ class PresenceEventSource(object):
             #
             # Hence this guard where we just return nothing so that the sync
             # doesn't return. C.f. #5503.
-            defer.returnValue(([], max_token))
+            return ([], max_token)

         presence = self.get_presence_handler()
         stream_change_cache = self.store.presence_stream_cache
@@ -1068,18 +1066,12 @@ class PresenceEventSource(object):
             updates = yield presence.current_state_for_users(user_ids_changed)

         if include_offline:
-            defer.returnValue((list(updates.values()), max_token))
+            return (list(updates.values()), max_token)
         else:
-            defer.returnValue(
-                (
-                    [
-                        s
-                        for s in itervalues(updates)
-                        if s.state != PresenceState.OFFLINE
-                    ],
-                    max_token,
-                )
-            )
+            return (
+                [s for s in itervalues(updates) if s.state != PresenceState.OFFLINE],
+                max_token,
+            )

     def get_current_key(self):
         return self.store.get_current_presence_token()
@@ -1107,7 +1099,7 @@ class PresenceEventSource(object):
         )
         users_interested_in.update(user_ids)

-        defer.returnValue(users_interested_in)
+        return users_interested_in

 def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now):
@@ -1287,7 +1279,7 @@ def get_interested_parties(store, states):
         # Always notify self
         users_to_states.setdefault(state.user_id, []).append(state)

-    defer.returnValue((room_ids_to_states, users_to_states))
+    return (room_ids_to_states, users_to_states)

 @defer.inlineCallbacks
@@ -1321,4 +1313,4 @@ def get_interested_remotes(store, states, state_handler):
         host = get_domain_from_id(user_id)
         hosts_and_states.append(([host], states))

-    defer.returnValue(hosts_and_states)
+    return hosts_and_states


@@ -73,7 +73,7 @@ class BaseProfileHandler(BaseHandler):
                     raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
                 raise

-            defer.returnValue({"displayname": displayname, "avatar_url": avatar_url})
+            return {"displayname": displayname, "avatar_url": avatar_url}
         else:
             try:
                 result = yield self.federation.make_query(
@@ -82,7 +82,7 @@ class BaseProfileHandler(BaseHandler):
                     args={"user_id": user_id},
                     ignore_backoff=True,
                 )
-                defer.returnValue(result)
+                return result
             except RequestSendFailed as e:
                 raise_from(SynapseError(502, "Failed to fetch profile"), e)
             except HttpResponseException as e:
@@ -108,10 +108,10 @@ class BaseProfileHandler(BaseHandler):
                     raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
                 raise

-            defer.returnValue({"displayname": displayname, "avatar_url": avatar_url})
+            return {"displayname": displayname, "avatar_url": avatar_url}
         else:
             profile = yield self.store.get_from_remote_profile_cache(user_id)
-            defer.returnValue(profile or {})
+            return profile or {}

     @defer.inlineCallbacks
     def get_displayname(self, target_user):
@@ -125,7 +125,7 @@ class BaseProfileHandler(BaseHandler):
                     raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
                 raise

-            defer.returnValue(displayname)
+            return displayname
         else:
             try:
                 result = yield self.federation.make_query(
@@ -139,7 +139,7 @@ class BaseProfileHandler(BaseHandler):
             except HttpResponseException as e:
                 raise e.to_synapse_error()

-            defer.returnValue(result["displayname"])
+            return result["displayname"]

     @defer.inlineCallbacks
     def set_displayname(self, target_user, requester, new_displayname, by_admin=False):
@@ -186,7 +186,7 @@ class BaseProfileHandler(BaseHandler):
                 if e.code == 404:
                     raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
                 raise

-            defer.returnValue(avatar_url)
+            return avatar_url
         else:
             try:
                 result = yield self.federation.make_query(
@@ -200,7 +200,7 @@ class BaseProfileHandler(BaseHandler):
             except HttpResponseException as e:
                 raise e.to_synapse_error()

-            defer.returnValue(result["avatar_url"])
+            return result["avatar_url"]

     @defer.inlineCallbacks
     def set_avatar_url(self, target_user, requester, new_avatar_url, by_admin=False):
@@ -251,7 +251,7 @@ class BaseProfileHandler(BaseHandler):
                 raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
             raise

-        defer.returnValue(response)
+        return response

     @defer.inlineCallbacks
     def _update_join_states(self, requester, target_user):


@@ -93,7 +93,7 @@ class ReceiptsHandler(BaseHandler):
         if min_batch_id is None:
             # no new receipts
-            defer.returnValue(False)
+            return False

         affected_room_ids = list(set([r.room_id for r in receipts]))

@@ -103,7 +103,7 @@ class ReceiptsHandler(BaseHandler):
             min_batch_id, max_batch_id, affected_room_ids
         )

-        defer.returnValue(True)
+        return True

     @defer.inlineCallbacks
     def received_client_receipt(self, room_id, receipt_type, user_id, event_id):
@@ -133,9 +133,9 @@ class ReceiptsHandler(BaseHandler):
         )

         if not result:
-            defer.returnValue([])
+            return []

-        defer.returnValue(result)
+        return result

 class ReceiptEventSource(object):
@@ -148,13 +148,13 @@ class ReceiptEventSource(object):
         to_key = yield self.get_current_key()

         if from_key == to_key:
-            defer.returnValue(([], to_key))
+            return ([], to_key)

         events = yield self.store.get_linearized_receipts_for_rooms(
             room_ids, from_key=from_key, to_key=to_key
         )

-        defer.returnValue((events, to_key))
+        return (events, to_key)

     def get_current_key(self, direction="f"):
         return self.store.get_max_receipt_stream_id()
@@ -173,4 +173,4 @@ class ReceiptEventSource(object):
             room_ids, from_key=from_key, to_key=to_key
         )

-        defer.returnValue((events, to_key))
+        return (events, to_key)


@@ -265,7 +265,7 @@ class RegistrationHandler(BaseHandler):
             # Bind email to new account
             yield self._register_email_threepid(user_id, threepid_dict, None, False)

-        defer.returnValue(user_id)
+        return user_id

     @defer.inlineCallbacks
     def _auto_join_rooms(self, user_id):
@@ -360,7 +360,7 @@ class RegistrationHandler(BaseHandler):
             appservice_id=service_id,
             create_profile_with_displayname=user.localpart,
         )
-        defer.returnValue(user_id)
+        return user_id

     @defer.inlineCallbacks
     def check_recaptcha(self, ip, private_key, challenge, response):
@@ -461,7 +461,7 @@ class RegistrationHandler(BaseHandler):
             id = self._next_generated_user_id
             self._next_generated_user_id += 1
-        defer.returnValue(str(id))
+        return str(id)

     @defer.inlineCallbacks
     def _validate_captcha(self, ip_addr, private_key, challenge, response):
@@ -481,7 +481,7 @@ class RegistrationHandler(BaseHandler):
             "error_url": "http://www.recaptcha.net/recaptcha/api/challenge?"
             + "error=%s" % lines[1],
         }
-        defer.returnValue(json)
+        return json

     @defer.inlineCallbacks
     def _submit_captcha(self, ip_addr, private_key, challenge, response):
@@ -497,7 +497,7 @@ class RegistrationHandler(BaseHandler):
                 "response": response,
             },
         )
-        defer.returnValue(data)
+        return data

     @defer.inlineCallbacks
     def _join_user_to_room(self, requester, room_identifier):
@@ -622,7 +622,7 @@ class RegistrationHandler(BaseHandler):
                 initial_display_name=initial_display_name,
                 is_guest=is_guest,
             )
-            defer.returnValue((r["device_id"], r["access_token"]))
+            return (r["device_id"], r["access_token"])

         valid_until_ms = None
         if self.session_lifetime is not None:
@@ -645,7 +645,7 @@ class RegistrationHandler(BaseHandler):
             user_id, device_id=device_id, valid_until_ms=valid_until_ms
         )

-        defer.returnValue((device_id, access_token))
+        return (device_id, access_token)

     @defer.inlineCallbacks
     def post_registration_actions(
@@ -798,7 +798,7 @@ class RegistrationHandler(BaseHandler):
                 if ex.errcode == Codes.MISSING_PARAM:
                     # This will only happen if the ID server returns a malformed response
                     logger.info("Can't add incomplete 3pid")
-                    defer.returnValue(None)
+                    return None
                 raise

             yield self._auth_handler.add_threepid(


@@ -128,7 +128,7 @@ class RoomCreationHandler(BaseHandler):
             old_room_id,
             new_version,  # args for _upgrade_room
         )
-        defer.returnValue(ret)
+        return ret

     @defer.inlineCallbacks
     def _upgrade_room(self, requester, old_room_id, new_version):
@@ -193,7 +193,7 @@ class RoomCreationHandler(BaseHandler):
             requester, old_room_id, new_room_id, old_room_state
         )

-        defer.returnValue(new_room_id)
+        return new_room_id

     @defer.inlineCallbacks
     def _update_upgraded_room_pls(
@@ -671,7 +671,7 @@ class RoomCreationHandler(BaseHandler):
             result["room_alias"] = room_alias.to_string()
             yield directory_handler.send_room_alias_update_event(requester, room_id)

-        defer.returnValue(result)
+        return result

     @defer.inlineCallbacks
     def _send_events_for_new_room(
@@ -796,7 +796,7 @@ class RoomCreationHandler(BaseHandler):
                     room_creator_user_id=creator_id,
                     is_public=is_public,
                 )
-                defer.returnValue(gen_room_id)
+                return gen_room_id
             except StoreError:
                 attempts += 1
         raise StoreError(500, "Couldn't generate a room ID.")
@@ -839,7 +839,7 @@ class RoomContextHandler(object):
                 event_id, get_prev_content=True, allow_none=True
             )
             if not event:
-                defer.returnValue(None)
+                return None
                 return

         filtered = yield (filter_evts([event]))
@@ -890,7 +890,7 @@ class RoomContextHandler(object):
         results["end"] = token.copy_and_replace("room_key", results["end"]).to_string()

-        defer.returnValue(results)
+        return results

 class RoomEventSource(object):
@@ -941,7 +941,7 @@ class RoomEventSource(object):
         else:
             end_key = to_key

-        defer.returnValue((events, end_key))
+        return (events, end_key)

     def get_current_key(self):
         return self.store.get_room_events_max_id()
@@ -959,4 +959,4 @@ class RoomEventSource(object):
             limit=config.limit,
         )

-        defer.returnValue((events, next_key))
+        return (events, next_key)

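The `_generate_room_id` hunk above keeps its original shape: try a random ID, return it if the store accepts it, and raise a 500 `StoreError` once the attempts are exhausted. A self-contained sketch of that retry loop (the `StoreError` class, helper names, and `example.org` server name here are stand-ins for illustration, not Synapse's own implementation):

```python
import secrets

class StoreError(Exception):
    """Minimal stand-in for synapse.api.errors.StoreError."""
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code

def generate_room_id(try_store, max_attempts=5):
    # Mirrors the loop shown in the diff: retry on collision,
    # give up with a 500 after max_attempts failures.
    for _ in range(max_attempts):
        candidate = "!%s:example.org" % secrets.token_urlsafe(18)
        try:
            try_store(candidate)  # raises StoreError on collision
            return candidate
        except StoreError:
            continue
    raise StoreError(500, "Couldn't generate a room ID.")
```

Bounding the loop and re-raising as a 500 (rather than retrying forever) keeps a pathological store from wedging the request.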

@@ -325,7 +325,7 @@ class RoomListHandler(BaseHandler):
                 current_limit=since_token.current_limit - 1,
             ).to_token()

-        defer.returnValue(results)
+        return results

     @defer.inlineCallbacks
     def _append_room_entry_to_chunk(
@@ -420,7 +420,7 @@ class RoomListHandler(BaseHandler):
         if join_rules_event:
             join_rule = join_rules_event.content.get("join_rule", None)
             if not allow_private and join_rule and join_rule != JoinRules.PUBLIC:
-                defer.returnValue(None)
+                return None

         # Return whether this room is open to federation users or not
         create_event = current_state.get((EventTypes.Create, ""))
@@ -469,7 +469,7 @@ class RoomListHandler(BaseHandler):
         if avatar_url:
             result["avatar_url"] = avatar_url

-        defer.returnValue(result)
+        return result

     @defer.inlineCallbacks
     def get_remote_public_room_list(
@@ -482,7 +482,7 @@ class RoomListHandler(BaseHandler):
         third_party_instance_id=None,
     ):
         if not self.enable_room_list_search:
-            defer.returnValue({"chunk": [], "total_room_count_estimate": 0})
+            return {"chunk": [], "total_room_count_estimate": 0}

         if search_filter:
             # We currently don't support searching across federation, so we have
@@ -507,7 +507,7 @@ class RoomListHandler(BaseHandler):
             ]
         }

-        defer.returnValue(res)
+        return res

     def _get_remote_list_cached(
         self,

@ -26,8 +26,7 @@ from unpaddedbase64 import decode_base64
from twisted.internet import defer from twisted.internet import defer
import synapse.server from synapse import types
import synapse.types
from synapse.api.constants import EventTypes, Membership from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, HttpResponseException, SynapseError from synapse.api.errors import AuthError, Codes, HttpResponseException, SynapseError
from synapse.types import RoomID, UserID from synapse.types import RoomID, UserID
@ -191,7 +190,7 @@ class RoomMemberHandler(object):
) )
if duplicate is not None: if duplicate is not None:
# Discard the new event since this membership change is a no-op. # Discard the new event since this membership change is a no-op.
defer.returnValue(duplicate) return duplicate
yield self.event_creation_handler.handle_new_client_event( yield self.event_creation_handler.handle_new_client_event(
requester, event, context, extra_users=[target], ratelimit=ratelimit requester, event, context, extra_users=[target], ratelimit=ratelimit
@ -233,7 +232,7 @@ class RoomMemberHandler(object):
if prev_member_event.membership == Membership.JOIN: if prev_member_event.membership == Membership.JOIN:
yield self._user_left_room(target, room_id) yield self._user_left_room(target, room_id)
defer.returnValue(event) return event
@defer.inlineCallbacks @defer.inlineCallbacks
def copy_room_tags_and_direct_to_room(self, old_room_id, new_room_id, user_id): def copy_room_tags_and_direct_to_room(self, old_room_id, new_room_id, user_id):
@ -303,7 +302,7 @@ class RoomMemberHandler(object):
require_consent=require_consent, require_consent=require_consent,
) )
defer.returnValue(result) return result
@defer.inlineCallbacks @defer.inlineCallbacks
def _update_membership( def _update_membership(
@ -423,7 +422,7 @@ class RoomMemberHandler(object):
same_membership = old_membership == effective_membership_state same_membership = old_membership == effective_membership_state
same_sender = requester.user.to_string() == old_state.sender same_sender = requester.user.to_string() == old_state.sender
if same_sender and same_membership and same_content: if same_sender and same_membership and same_content:
defer.returnValue(old_state) return old_state
if old_membership in ["ban", "leave"] and action == "kick": if old_membership in ["ban", "leave"] and action == "kick":
raise AuthError(403, "The target user is not in the room") raise AuthError(403, "The target user is not in the room")
@ -473,7 +472,7 @@ class RoomMemberHandler(object):
ret = yield self._remote_join( ret = yield self._remote_join(
requester, remote_room_hosts, room_id, target, content requester, remote_room_hosts, room_id, target, content
) )
defer.returnValue(ret) return ret
elif effective_membership_state == Membership.LEAVE: elif effective_membership_state == Membership.LEAVE:
if not is_host_in_room: if not is_host_in_room:
@ -495,7 +494,7 @@ class RoomMemberHandler(object):
res = yield self._remote_reject_invite( res = yield self._remote_reject_invite(
requester, remote_room_hosts, room_id, target requester, remote_room_hosts, room_id, target
) )
defer.returnValue(res) return res
res = yield self._local_membership_update( res = yield self._local_membership_update(
requester=requester, requester=requester,
@ -508,7 +507,7 @@ class RoomMemberHandler(object):
content=content, content=content,
require_consent=require_consent, require_consent=require_consent,
) )
defer.returnValue(res) return res
@defer.inlineCallbacks @defer.inlineCallbacks
def send_membership_event( def send_membership_event(
@ -543,7 +542,7 @@ class RoomMemberHandler(object):
), "Sender (%s) must be same as requester (%s)" % (sender, requester.user) ), "Sender (%s) must be same as requester (%s)" % (sender, requester.user)
assert self.hs.is_mine(sender), "Sender must be our own: %s" % (sender,) assert self.hs.is_mine(sender), "Sender must be our own: %s" % (sender,)
else: else:
requester = synapse.types.create_requester(target_user) requester = types.create_requester(target_user)
prev_event = yield self.event_creation_handler.deduplicate_state_event( prev_event = yield self.event_creation_handler.deduplicate_state_event(
event, context event, context
@ -596,11 +595,11 @@ class RoomMemberHandler(object):
""" """
guest_access_id = current_state_ids.get((EventTypes.GuestAccess, ""), None) guest_access_id = current_state_ids.get((EventTypes.GuestAccess, ""), None)
if not guest_access_id: if not guest_access_id:
defer.returnValue(False) return False
guest_access = yield self.store.get_event(guest_access_id) guest_access = yield self.store.get_event(guest_access_id)
defer.returnValue( return (
guest_access guest_access
and guest_access.content and guest_access.content
and "guest_access" in guest_access.content and "guest_access" in guest_access.content
@ -635,7 +634,7 @@ class RoomMemberHandler(object):
servers.remove(room_alias.domain) servers.remove(room_alias.domain)
servers.insert(0, room_alias.domain) servers.insert(0, room_alias.domain)
defer.returnValue((RoomID.from_string(room_id), servers)) return (RoomID.from_string(room_id), servers)
@defer.inlineCallbacks @defer.inlineCallbacks
def _get_inviter(self, user_id, room_id): def _get_inviter(self, user_id, room_id):
@ -643,7 +642,7 @@ class RoomMemberHandler(object):
user_id=user_id, room_id=room_id user_id=user_id, room_id=room_id
) )
if invite: if invite:
defer.returnValue(UserID.from_string(invite.sender)) return UserID.from_string(invite.sender)
@defer.inlineCallbacks @defer.inlineCallbacks
def do_3pid_invite( def do_3pid_invite(
@ -708,11 +707,11 @@ class RoomMemberHandler(object):
if "signatures" not in data: if "signatures" not in data:
raise AuthError(401, "No signatures on 3pid binding") raise AuthError(401, "No signatures on 3pid binding")
yield self._verify_any_signature(data, id_server) yield self._verify_any_signature(data, id_server)
defer.returnValue(data["mxid"]) return data["mxid"]
except IOError as e: except IOError as e:
logger.warn("Error from identity server lookup: %s" % (e,)) logger.warn("Error from identity server lookup: %s" % (e,))
defer.returnValue(None) return None
@defer.inlineCallbacks @defer.inlineCallbacks
def _verify_any_signature(self, data, server_hostname): def _verify_any_signature(self, data, server_hostname):
@ -904,7 +903,7 @@ class RoomMemberHandler(object):
if not public_keys: if not public_keys:
public_keys.append(fallback_public_key) public_keys.append(fallback_public_key)
display_name = data["display_name"] display_name = data["display_name"]
defer.returnValue((token, public_keys, fallback_public_key, display_name)) return (token, public_keys, fallback_public_key, display_name)
@defer.inlineCallbacks @defer.inlineCallbacks
def _is_host_in_room(self, current_state_ids): def _is_host_in_room(self, current_state_ids):
@ -913,7 +912,7 @@ class RoomMemberHandler(object):
create_event_id = current_state_ids.get(("m.room.create", "")) create_event_id = current_state_ids.get(("m.room.create", ""))
if len(current_state_ids) == 1 and create_event_id: if len(current_state_ids) == 1 and create_event_id:
# We can only get here if we're in the process of creating the room # We can only get here if we're in the process of creating the room
defer.returnValue(True) return True
for etype, state_key in current_state_ids: for etype, state_key in current_state_ids:
if etype != EventTypes.Member or not self.hs.is_mine_id(state_key): if etype != EventTypes.Member or not self.hs.is_mine_id(state_key):
@ -925,16 +924,16 @@ class RoomMemberHandler(object):
continue continue
if event.membership == Membership.JOIN: if event.membership == Membership.JOIN:
defer.returnValue(True) return True
defer.returnValue(False) return False
@defer.inlineCallbacks @defer.inlineCallbacks
def _is_server_notice_room(self, room_id): def _is_server_notice_room(self, room_id):
if self._server_notices_mxid is None: if self._server_notices_mxid is None:
defer.returnValue(False) return False
user_ids = yield self.store.get_users_in_room(room_id) user_ids = yield self.store.get_users_in_room(room_id)
defer.returnValue(self._server_notices_mxid in user_ids) return self._server_notices_mxid in user_ids
class RoomMemberMasterHandler(RoomMemberHandler): class RoomMemberMasterHandler(RoomMemberHandler):
@ -945,6 +944,47 @@ class RoomMemberMasterHandler(RoomMemberHandler):
self.distributor.declare("user_joined_room") self.distributor.declare("user_joined_room")
self.distributor.declare("user_left_room") self.distributor.declare("user_left_room")
@defer.inlineCallbacks
def _is_remote_room_too_complex(self, room_id, remote_room_hosts):
"""
Check if complexity of a remote room is too great.
Args:
room_id (str)
remote_room_hosts (list[str])
Returns: bool of whether the complexity is too great, or None
if unable to be fetched
"""
max_complexity = self.hs.config.limit_remote_rooms.complexity
complexity = yield self.federation_handler.get_room_complexity(
remote_room_hosts, room_id
)
if complexity:
if complexity["v1"] > max_complexity:
return True
return False
return None
@defer.inlineCallbacks
def _is_local_room_too_complex(self, room_id):
"""
Check if the complexity of a local room is too great.
Args:
room_id (str)
Returns: bool
"""
max_complexity = self.hs.config.limit_remote_rooms.complexity
complexity = yield self.store.get_room_complexity(room_id)
if complexity["v1"] > max_complexity:
return True
return False
@defer.inlineCallbacks @defer.inlineCallbacks
def _remote_join(self, requester, remote_room_hosts, room_id, user, content): def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
"""Implements RoomMemberHandler._remote_join """Implements RoomMemberHandler._remote_join
@ -952,7 +992,6 @@ class RoomMemberMasterHandler(RoomMemberHandler):
# filter ourselves out of remote_room_hosts: do_invite_join ignores it # filter ourselves out of remote_room_hosts: do_invite_join ignores it
# and if it is the only entry we'd like to return a 404 rather than a # and if it is the only entry we'd like to return a 404 rather than a
# 500. # 500.
remote_room_hosts = [ remote_room_hosts = [
host for host in remote_room_hosts if host != self.hs.hostname host for host in remote_room_hosts if host != self.hs.hostname
] ]
@ -960,6 +999,18 @@ class RoomMemberMasterHandler(RoomMemberHandler):
if len(remote_room_hosts) == 0: if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers") raise SynapseError(404, "No known servers")
if self.hs.config.limit_remote_rooms.enabled:
# Fetch the room complexity
too_complex = yield self._is_remote_room_too_complex(
room_id, remote_room_hosts
)
if too_complex is True:
raise SynapseError(
code=400,
msg=self.hs.config.limit_remote_rooms.complexity_error,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
)
# We don't do an auth check if we are doing an invite # We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking # join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we # that we are allowed to join when we decide whether or not we
@ -969,6 +1020,31 @@ class RoomMemberMasterHandler(RoomMemberHandler):
) )
yield self._user_joined_room(user, room_id) yield self._user_joined_room(user, room_id)
# Check the room we just joined wasn't too large, if we didn't fetch the
# complexity of it before.
if self.hs.config.limit_remote_rooms.enabled:
if too_complex is False:
# We checked, and we're under the limit.
return
# Check again, but with the local state events
too_complex = yield self._is_local_room_too_complex(room_id)
if too_complex is False:
# We're under the limit.
return
# The room is too large. Leave.
requester = types.create_requester(user, None, False, None)
yield self.update_membership(
requester=requester, target=user, room_id=room_id, action="leave"
)
raise SynapseError(
code=400,
msg=self.hs.config.limit_remote_rooms.complexity_error,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
)
@defer.inlineCallbacks @defer.inlineCallbacks
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target): def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
"""Implements RoomMemberHandler._remote_reject_invite """Implements RoomMemberHandler._remote_reject_invite
@@ -978,7 +1054,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
             ret = yield fed_handler.do_remotely_reject_invite(
                 remote_room_hosts, room_id, target.to_string()
             )
-            defer.returnValue(ret)
+            return ret
         except Exception as e:
             # if we were unable to reject the exception, just mark
             # it as rejected on our end and plough ahead.
@@ -989,7 +1065,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
             logger.warn("Failed to reject invite: %s", e)
 
             yield self.store.locally_reject_invite(target.to_string(), room_id)
-            defer.returnValue({})
+            return {}
 
     def _user_joined_room(self, target, room_id):
         """Implements RoomMemberHandler._user_joined_room
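Most hunks in this diff apply one mechanical change: on Python 3 a generator may `return value`, with the value carried on `StopIteration.value`. That is what lets Twisted `@defer.inlineCallbacks` code write plain `return x` instead of `defer.returnValue(x)` (which existed because Python 2 forbade return-with-value inside a generator). A minimal stdlib-only illustration of the mechanism:

```python
def coroutine_like():
    # An inlineCallbacks-style generator: `yield` stands in for awaiting a
    # Deferred, and `return` produces the final result (Python 3 only).
    intermediate = yield "some deferred"
    return intermediate + 1

gen = coroutine_like()
next(gen)              # run up to the first yield
try:
    gen.send(41)       # resume with the "deferred's result"
except StopIteration as stop:
    result = stop.value  # the returned value rides on StopIteration
```

This is why the migration is safe and behavior-preserving: `defer.returnValue` itself worked by raising an exception carrying the value.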


@@ -53,7 +53,7 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
         yield self._user_joined_room(user, room_id)
 
-        defer.returnValue(ret)
+        return ret
 
     def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
         """Implements RoomMemberHandler._remote_reject_invite


@@ -69,7 +69,7 @@ class SearchHandler(BaseHandler):
             # Scan through the old room for further predecessors
             room_id = predecessor["room_id"]
 
-        defer.returnValue(historical_room_ids)
+        return historical_room_ids
 
     @defer.inlineCallbacks
     def search(self, user, content, batch=None):
@@ -186,13 +186,11 @@ class SearchHandler(BaseHandler):
                 room_ids.intersection_update({batch_group_key})
 
             if not room_ids:
-                defer.returnValue(
-                    {
-                        "search_categories": {
-                            "room_events": {"results": [], "count": 0, "highlights": []}
-                        }
-                    }
-                )
+                return {
+                    "search_categories": {
+                        "room_events": {"results": [], "count": 0, "highlights": []}
+                    }
+                }
 
         rank_map = {}  # event_id -> rank of event
         allowed_events = []
@@ -455,4 +453,4 @@ class SearchHandler(BaseHandler):
         if global_next_batch:
             rooms_cat_res["next_batch"] = global_next_batch
 
-        defer.returnValue({"search_categories": {"room_events": rooms_cat_res}})
+        return {"search_categories": {"room_events": rooms_cat_res}}


@@ -48,7 +48,7 @@ class StateDeltasHandler(object):
         if not event and not prev_event:
             logger.debug("Neither event exists: %r %r", prev_event_id, event_id)
-            defer.returnValue(None)
+            return None
 
         prev_value = None
         value = None
@@ -62,8 +62,8 @@ class StateDeltasHandler(object):
         logger.debug("prev_value: %r -> value: %r", prev_value, value)
 
         if value == public_value and prev_value != public_value:
-            defer.returnValue(True)
+            return True
         elif value != public_value and prev_value == public_value:
-            defer.returnValue(False)
+            return False
         else:
-            defer.returnValue(None)
+            return None


@@ -86,7 +86,7 @@ class StatsHandler(StateDeltasHandler):
         # If still None then the initial background update hasn't happened yet
         if self.pos is None:
-            defer.returnValue(None)
+            return None
 
         # Loop round handling deltas until we're up to date
         while True:
@@ -328,6 +328,6 @@ class StatsHandler(StateDeltasHandler):
                 == "world_readable"
             )
         ):
-            defer.returnValue(True)
+            return True
         else:
-            defer.returnValue(False)
+            return False


@@ -263,7 +263,7 @@ class SyncHandler(object):
                 timeout,
                 full_state,
             )
-            defer.returnValue(res)
+            return res
 
     @defer.inlineCallbacks
     def _wait_for_sync_for_user(self, sync_config, since_token, timeout, full_state):
@@ -303,7 +303,7 @@ class SyncHandler(object):
             lazy_loaded = "false"
         non_empty_sync_counter.labels(sync_type, lazy_loaded).inc()
 
-        defer.returnValue(result)
+        return result
 
     def current_sync_for_user(self, sync_config, since_token=None, full_state=False):
         """Get the sync for client needed to match what the server has now.
@@ -317,7 +317,7 @@ class SyncHandler(object):
         user_id = user.to_string()
         rules = yield self.store.get_push_rules_for_user(user_id)
         rules = format_push_rules_for_user(user, rules)
-        defer.returnValue(rules)
+        return rules
 
     @defer.inlineCallbacks
     def ephemeral_by_room(self, sync_result_builder, now_token, since_token=None):
@@ -378,7 +378,7 @@ class SyncHandler(object):
                 event_copy = {k: v for (k, v) in iteritems(event) if k != "room_id"}
                 ephemeral_by_room.setdefault(room_id, []).append(event_copy)
 
-        defer.returnValue((now_token, ephemeral_by_room))
+        return (now_token, ephemeral_by_room)
 
     @defer.inlineCallbacks
     def _load_filtered_recents(
@@ -426,8 +426,8 @@ class SyncHandler(object):
             recents = []
 
         if not limited or block_all_timeline:
-            defer.returnValue(
-                TimelineBatch(events=recents, prev_batch=now_token, limited=False)
-            )
+            return TimelineBatch(
+                events=recents, prev_batch=now_token, limited=False
+            )
         filtering_factor = 2
@@ -490,13 +490,11 @@ class SyncHandler(object):
             prev_batch_token = now_token.copy_and_replace("room_key", room_key)
 
-        defer.returnValue(
-            TimelineBatch(
-                events=recents,
-                prev_batch=prev_batch_token,
-                limited=limited or newly_joined_room,
-            )
-        )
+        return TimelineBatch(
+            events=recents,
+            prev_batch=prev_batch_token,
+            limited=limited or newly_joined_room,
+        )
 
     @defer.inlineCallbacks
     def get_state_after_event(self, event, state_filter=StateFilter.all()):
@@ -517,7 +515,7 @@ class SyncHandler(object):
         if event.is_state():
            state_ids = state_ids.copy()
            state_ids[(event.type, event.state_key)] = event.event_id
-        defer.returnValue(state_ids)
+        return state_ids
 
     @defer.inlineCallbacks
     def get_state_at(self, room_id, stream_position, state_filter=StateFilter.all()):
@@ -549,7 +547,7 @@ class SyncHandler(object):
         else:
             # no events in this room - so presumably no state
             state = {}
-        defer.returnValue(state)
+        return state
 
     @defer.inlineCallbacks
     def compute_summary(self, room_id, sync_config, batch, state, now_token):
@@ -579,7 +577,7 @@ class SyncHandler(object):
         )
 
         if not last_events:
-            defer.returnValue(None)
+            return None
             return
 
         last_event = last_events[-1]
@@ -611,14 +609,14 @@ class SyncHandler(object):
             if name_id:
                 name = yield self.store.get_event(name_id, allow_none=True)
                 if name and name.content.get("name"):
-                    defer.returnValue(summary)
+                    return summary
 
             if canonical_alias_id:
                 canonical_alias = yield self.store.get_event(
                     canonical_alias_id, allow_none=True
                 )
                 if canonical_alias and canonical_alias.content.get("alias"):
-                    defer.returnValue(summary)
+                    return summary
 
         me = sync_config.user.to_string()
@@ -652,7 +650,7 @@ class SyncHandler(object):
             summary["m.heroes"] = sorted([user_id for user_id in gone_user_ids])[0:5]
 
         if not sync_config.filter_collection.lazy_load_members():
-            defer.returnValue(summary)
+            return summary
 
         # ensure we send membership events for heroes if needed
         cache_key = (sync_config.user.to_string(), sync_config.device_id)
@@ -686,7 +684,7 @@ class SyncHandler(object):
                 cache.set(s.state_key, s.event_id)
             state[(EventTypes.Member, s.state_key)] = s
 
-        defer.returnValue(summary)
+        return summary
 
     def get_lazy_loaded_members_cache(self, cache_key):
         cache = self.lazy_loaded_members_cache.get(cache_key)
@@ -783,9 +781,17 @@ class SyncHandler(object):
                     lazy_load_members=lazy_load_members,
                 )
             elif batch.limited:
-                state_at_timeline_start = yield self.store.get_state_ids_for_event(
-                    batch.events[0].event_id, state_filter=state_filter
-                )
+                if batch:
+                    state_at_timeline_start = yield self.store.get_state_ids_for_event(
+                        batch.events[0].event_id, state_filter=state_filter
+                    )
+                else:
+                    # It's not clear how we get here, but empirically we do
+                    # (#5407). Logging has been added elsewhere to try and
+                    # figure out where this state comes from.
+                    state_at_timeline_start = yield self.get_state_at(
+                        room_id, stream_position=now_token, state_filter=state_filter
+                    )
 
                 # for now, we disable LL for gappy syncs - see
                 # https://github.com/vector-im/riot-web/issues/7211#issuecomment-419976346
@@ -805,9 +811,17 @@ class SyncHandler(object):
                     room_id, stream_position=since_token, state_filter=state_filter
                 )
 
-                current_state_ids = yield self.store.get_state_ids_for_event(
-                    batch.events[-1].event_id, state_filter=state_filter
-                )
+                if batch:
+                    current_state_ids = yield self.store.get_state_ids_for_event(
+                        batch.events[-1].event_id, state_filter=state_filter
+                    )
+                else:
+                    # It's not clear how we get here, but empirically we do
+                    # (#5407). Logging has been added elsewhere to try and
+                    # figure out where this state comes from.
+                    current_state_ids = yield self.get_state_at(
+                        room_id, stream_position=now_token, state_filter=state_filter
+                    )
 
             state_ids = _calculate_state(
                 timeline_contains=timeline_state,
@@ -871,14 +885,12 @@ class SyncHandler(object):
         if state_ids:
             state = yield self.store.get_events(list(state_ids.values()))
 
-        defer.returnValue(
-            {
-                (e.type, e.state_key): e
-                for e in sync_config.filter_collection.filter_room_state(
-                    list(state.values())
-                )
-            }
-        )
+        return {
+            (e.type, e.state_key): e
+            for e in sync_config.filter_collection.filter_room_state(
+                list(state.values())
+            )
+        }
 
     @defer.inlineCallbacks
     def unread_notifs_for_room_id(self, room_id, sync_config):
@@ -894,11 +906,11 @@ class SyncHandler(object):
             notifs = yield self.store.get_unread_event_push_actions_by_room_for_user(
                 room_id, sync_config.user.to_string(), last_unread_event_id
             )
-            defer.returnValue(notifs)
+            return notifs
 
         # There is no new information in this period, so your notification
         # count is whatever it was last time.
-        defer.returnValue(None)
+        return None
 
     @defer.inlineCallbacks
     def generate_sync_result(self, sync_config, since_token=None, full_state=False):
@@ -989,8 +1001,7 @@ class SyncHandler(object):
                 "Sync result for newly joined room %s: %r", room_id, joined_room
             )
 
-        defer.returnValue(
-            SyncResult(
+        return SyncResult(
             presence=sync_result_builder.presence,
             account_data=sync_result_builder.account_data,
             joined=sync_result_builder.joined,
@@ -1002,7 +1013,6 @@ class SyncHandler(object):
             device_one_time_keys_count=one_time_key_counts,
             next_batch=sync_result_builder.now_token,
         )
-        )
 
     @measure_func("_generate_sync_entry_for_groups")
     @defer.inlineCallbacks
@@ -1124,11 +1134,9 @@ class SyncHandler(object):
             # Remove any users that we still share a room with.
             newly_left_users -= users_who_share_room
 
-            defer.returnValue(
-                DeviceLists(changed=users_that_have_changed, left=newly_left_users)
-            )
+            return DeviceLists(changed=users_that_have_changed, left=newly_left_users)
         else:
-            defer.returnValue(DeviceLists(changed=[], left=[]))
+            return DeviceLists(changed=[], left=[])
 
     @defer.inlineCallbacks
     def _generate_sync_entry_for_to_device(self, sync_result_builder):
@@ -1225,7 +1233,7 @@ class SyncHandler(object):
         sync_result_builder.account_data = account_data_for_user
 
-        defer.returnValue(account_data_by_room)
+        return account_data_by_room
 
     @defer.inlineCallbacks
     def _generate_sync_entry_for_presence(
@@ -1325,7 +1333,7 @@ class SyncHandler(object):
         )
         if not tags_by_room:
             logger.debug("no-oping sync")
-            defer.returnValue(([], [], [], []))
+            return ([], [], [], [])
 
         ignored_account_data = yield self.store.get_global_account_data_by_type_for_user(
             "m.ignored_user_list", user_id=user_id
@@ -1388,14 +1396,12 @@ class SyncHandler(object):
         newly_left_users -= newly_joined_or_invited_users
 
-        defer.returnValue(
-            (
-                newly_joined_rooms,
-                newly_joined_or_invited_users,
-                newly_left_rooms,
-                newly_left_users,
-            )
-        )
+        return (
+            newly_joined_rooms,
+            newly_joined_or_invited_users,
+            newly_left_rooms,
+            newly_left_users,
+        )
 
     @defer.inlineCallbacks
     def _have_rooms_changed(self, sync_result_builder):
@@ -1414,13 +1420,13 @@ class SyncHandler(object):
         )
 
         if rooms_changed:
-            defer.returnValue(True)
+            return True
 
         stream_id = RoomStreamToken.parse_stream_token(since_token.room_key).stream
         for room_id in sync_result_builder.joined_room_ids:
             if self.store.has_room_changed_since(room_id, stream_id):
-                defer.returnValue(True)
+                return True
 
-        defer.returnValue(False)
+        return False
 
     @defer.inlineCallbacks
     def _get_rooms_changed(self, sync_result_builder, ignored_users):
@@ -1637,7 +1643,7 @@ class SyncHandler(object):
             )
             room_entries.append(entry)
 
-        defer.returnValue((room_entries, invited, newly_joined_rooms, newly_left_rooms))
+        return (room_entries, invited, newly_joined_rooms, newly_left_rooms)
 
     @defer.inlineCallbacks
     def _get_all_rooms(self, sync_result_builder, ignored_users):
@@ -1711,7 +1717,7 @@ class SyncHandler(object):
             )
         )
 
-        defer.returnValue((room_entries, invited, []))
+        return (room_entries, invited, [])
 
     @defer.inlineCallbacks
     def _generate_room_entry(
@@ -1765,6 +1771,21 @@ class SyncHandler(object):
             newly_joined_room=newly_joined,
         )
 
+        if not batch and batch.limited:
+            # This resulted in #5407, which is weird, so let's log! We do it
+            # here as we have the maximum amount of information.
+            user_id = sync_result_builder.sync_config.user.to_string()
+            logger.info(
+                "Issue #5407: Found limited batch with no events. user %s, room %s,"
+                " sync_config %s, newly_joined %s, events %s, batch %s.",
+                user_id,
+                room_id,
+                sync_config,
+                newly_joined,
+                events,
+                batch,
+            )
+
         if newly_joined:
             # debug for https://github.com/matrix-org/synapse/issues/4422
             issue4422_logger.debug(
@@ -1912,7 +1933,7 @@ class SyncHandler(object):
             joined_room_ids.add(room_id)
 
         joined_room_ids = frozenset(joined_room_ids)
-        defer.returnValue(joined_room_ids)
+        return joined_room_ids
 
 def _action_has_highlight(actions):
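The `if not batch and batch.limited:` guard in the #5407 logging above reads oddly unless you know that `TimelineBatch` defines its truthiness by whether it contains any events. A hedged sketch of that pattern (field names assumed from the diff; a plain dataclass stands in for Synapse's actual attrs-based class):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TimelineBatch:
    events: List[str] = field(default_factory=list)
    limited: bool = False

    def __bool__(self) -> bool:
        # The batch is truthy only if it actually holds events, so
        # `not batch and batch.limited` means "limited, yet empty":
        # exactly the anomaly the logging above is trying to catch.
        return bool(self.events)

empty_but_limited = TimelineBatch(events=[], limited=True)
anomalous = not empty_but_limited and empty_but_limited.limited
```

So `not batch` is not a null check; it is "the batch has no events", and combining it with `batch.limited` flags the state the issue tracks.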
def _action_has_highlight(actions): def _action_has_highlight(actions):


@@ -83,7 +83,7 @@ class TypingHandler(object):
         self._room_typing = {}
 
     def _handle_timeouts(self):
-        logger.info("Checking for typing timeouts")
+        logger.debug("Checking for typing timeouts")
 
         now = self.clock.time_msec()
@@ -140,7 +140,7 @@ class TypingHandler(object):
         if was_present:
             # No point sending another notification
-            defer.returnValue(None)
+            return None
 
         self._push_update(member=member, typing=True)
@@ -173,7 +173,7 @@ class TypingHandler(object):
     def _stopped_typing(self, member):
         if member.user_id not in self._room_typing.get(member.room_id, set()):
             # No point
-            defer.returnValue(None)
+            return None
 
         self._member_typing_until.pop(member, None)
         self._member_last_federation_poke.pop(member, None)


@@ -133,7 +133,7 @@ class UserDirectoryHandler(StateDeltasHandler):
         # If still None then the initial background update hasn't happened yet
         if self.pos is None:
-            defer.returnValue(None)
+            return None
 
         # Loop round handling deltas until we're up to date
         while True:


@@ -294,7 +294,7 @@ class SimpleHttpClient(object):
             logger.info(
                 "Received response to %s %s: %s", method, redact_uri(uri), response.code
             )
-            defer.returnValue(response)
+            return response
         except Exception as e:
             incoming_responses_counter.labels(method, "ERR").inc()
             logger.info(
@@ -345,7 +345,7 @@ class SimpleHttpClient(object):
         body = yield make_deferred_yieldable(readBody(response))
 
         if 200 <= response.code < 300:
-            defer.returnValue(json.loads(body))
+            return json.loads(body)
         else:
             raise HttpResponseException(response.code, response.phrase, body)
@@ -385,7 +385,7 @@ class SimpleHttpClient(object):
         body = yield make_deferred_yieldable(readBody(response))
 
         if 200 <= response.code < 300:
-            defer.returnValue(json.loads(body))
+            return json.loads(body)
         else:
             raise HttpResponseException(response.code, response.phrase, body)
@@ -410,7 +410,7 @@ class SimpleHttpClient(object):
             ValueError: if the response was not JSON
         """
         body = yield self.get_raw(uri, args, headers=headers)
-        defer.returnValue(json.loads(body))
+        return json.loads(body)
 
     @defer.inlineCallbacks
     def put_json(self, uri, json_body, args={}, headers=None):
@@ -453,7 +453,7 @@ class SimpleHttpClient(object):
         body = yield make_deferred_yieldable(readBody(response))
 
         if 200 <= response.code < 300:
-            defer.returnValue(json.loads(body))
+            return json.loads(body)
         else:
             raise HttpResponseException(response.code, response.phrase, body)
@@ -488,7 +488,7 @@ class SimpleHttpClient(object):
         body = yield make_deferred_yieldable(readBody(response))
 
         if 200 <= response.code < 300:
-            defer.returnValue(body)
+            return body
         else:
             raise HttpResponseException(response.code, response.phrase, body)
@@ -545,14 +545,12 @@ class SimpleHttpClient(object):
         except Exception as e:
             raise_from(SynapseError(502, ("Failed to download remote body: %s" % e)), e)
 
-        defer.returnValue(
-            (
-                length,
-                resp_headers,
-                response.request.absoluteURI.decode("ascii"),
-                response.code,
-            )
-        )
+        return (
+            length,
+            resp_headers,
+            response.request.absoluteURI.decode("ascii"),
+            response.code,
+        )
 
 # XXX: FIXME: This is horribly copy-pasted from matrixfederationclient.
@@ -627,10 +625,10 @@ class CaptchaServerHttpClient(SimpleHttpClient):
         try:
             body = yield make_deferred_yieldable(readBody(response))
-            defer.returnValue(body)
+            return body
         except PartialDownloadError as e:
             # twisted dislikes google's response, no content length.
-            defer.returnValue(e.response)
+            return e.response
 
 def encode_urlencode_args(args):


@@ -12,10 +12,8 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import json
 import logging
-import random
-import time
 
 import attr
 from netaddr import IPAddress
@@ -24,31 +22,16 @@ from zope.interface import implementer
 from twisted.internet import defer
 from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
 from twisted.internet.interfaces import IStreamClientEndpoint
-from twisted.web.client import URI, Agent, HTTPConnectionPool, RedirectAgent, readBody
-from twisted.web.http import stringToDatetime
+from twisted.web.client import URI, Agent, HTTPConnectionPool
 from twisted.web.http_headers import Headers
 from twisted.web.iweb import IAgent
 
 from synapse.http.federation.srv_resolver import SrvResolver, pick_server_from_list
+from synapse.http.federation.well_known_resolver import WellKnownResolver
 from synapse.logging.context import make_deferred_yieldable
 from synapse.util import Clock
-from synapse.util.caches.ttlcache import TTLCache
-from synapse.util.metrics import Measure
-
-# period to cache .well-known results for by default
-WELL_KNOWN_DEFAULT_CACHE_PERIOD = 24 * 3600
-
-# jitter to add to the .well-known default cache ttl
-WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER = 10 * 60
-
-# period to cache failure to fetch .well-known for
-WELL_KNOWN_INVALID_CACHE_PERIOD = 1 * 3600
-
-# cap for .well-known cache period
-WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600
 
 logger = logging.getLogger(__name__)
-
-well_known_cache = TTLCache("well-known")
 @implementer(IAgent)
@@ -64,10 +47,6 @@ class MatrixFederationAgent(object):
         tls_client_options_factory (ClientTLSOptionsFactory|None):
             factory to use for fetching client tls options, or none to disable TLS.
 
-        _well_known_tls_policy (IPolicyForHTTPS|None):
-            TLS policy to use for fetching .well-known files. None to use a default
-            (browser-like) implementation.
-
         _srv_resolver (SrvResolver|None):
             SRVResolver impl to use for looking up SRV records. None to use a default
             implementation.
@@ -81,9 +60,8 @@ class MatrixFederationAgent(object):
         self,
         reactor,
         tls_client_options_factory,
-        _well_known_tls_policy=None,
         _srv_resolver=None,
-        _well_known_cache=well_known_cache,
+        _well_known_cache=None,
     ):
         self._reactor = reactor
         self._clock = Clock(reactor)
@@ -98,21 +76,15 @@ class MatrixFederationAgent(object):
         self._pool.maxPersistentPerHost = 5
         self._pool.cachedConnectionTimeout = 2 * 60
 
-        agent_args = {}
-        if _well_known_tls_policy is not None:
-            # the param is called 'contextFactory', but actually passing a
-            # contextfactory is deprecated, and it expects an IPolicyForHTTPS.
-            agent_args["contextFactory"] = _well_known_tls_policy
-        _well_known_agent = RedirectAgent(
-            Agent(self._reactor, pool=self._pool, **agent_args)
-        )
-        self._well_known_agent = _well_known_agent
-
-        # our cache of .well-known lookup results, mapping from server name
-        # to delegated name. The values can be:
-        # `bytes`: a valid server-name
-        # `None`: there is no (valid) .well-known here
-        self._well_known_cache = _well_known_cache
+        self._well_known_resolver = WellKnownResolver(
+            self._reactor,
+            agent=Agent(
+                self._reactor,
+                pool=self._pool,
+                contextFactory=tls_client_options_factory,
+            ),
+            well_known_cache=_well_known_cache,
+        )
     @defer.inlineCallbacks
     def request(self, method, uri, headers=None, bodyProducer=None):
@@ -177,7 +149,7 @@ class MatrixFederationAgent(object):
         res = yield make_deferred_yieldable(
             agent.request(method, uri, headers, bodyProducer)
         )
-        defer.returnValue(res)
+        return res
 
     @defer.inlineCallbacks
     def _route_matrix_uri(self, parsed_uri, lookup_well_known=True):
@@ -205,29 +177,28 @@ class MatrixFederationAgent(object):
             port = parsed_uri.port
             if port == -1:
                 port = 8448
-            defer.returnValue(
-                _RoutingResult(
-                    host_header=parsed_uri.netloc,
-                    tls_server_name=parsed_uri.host,
-                    target_host=parsed_uri.host,
-                    target_port=port,
-                )
-            )
+            return _RoutingResult(
+                host_header=parsed_uri.netloc,
+                tls_server_name=parsed_uri.host,
+                target_host=parsed_uri.host,
+                target_port=port,
+            )
 
         if parsed_uri.port != -1:
             # there is an explicit port
-            defer.returnValue(
-                _RoutingResult(
-                    host_header=parsed_uri.netloc,
-                    tls_server_name=parsed_uri.host,
-                    target_host=parsed_uri.host,
-                    target_port=parsed_uri.port,
-                )
-            )
+            return _RoutingResult(
+                host_header=parsed_uri.netloc,
+                tls_server_name=parsed_uri.host,
+                target_host=parsed_uri.host,
+                target_port=parsed_uri.port,
+            )
 
         if lookup_well_known:
             # try a .well-known lookup
-            well_known_server = yield self._get_well_known(parsed_uri.host)
+            well_known_result = yield self._well_known_resolver.get_well_known(
+                parsed_uri.host
+            )
+            well_known_server = well_known_result.delegated_server
 
             if well_known_server:
                 # if we found a .well-known, start again, but don't do another
@@ -259,7 +230,7 @@ class MatrixFederationAgent(object):
                 )
                 res = yield self._route_matrix_uri(new_uri, lookup_well_known=False)
-                defer.returnValue(res)
+                return res
 
         # try a SRV lookup
         service_name = b"_matrix._tcp.%s" % (parsed_uri.host,)
@@ -283,93 +254,12 @@ class MatrixFederationAgent(object):
                 parsed_uri.host.decode("ascii"),
             )
 
-        defer.returnValue(
-            _RoutingResult(
-                host_header=parsed_uri.netloc,
-                tls_server_name=parsed_uri.host,
-                target_host=target_host,
-                target_port=port,
-            )
-        )
+        return _RoutingResult(
+            host_header=parsed_uri.netloc,
+            tls_server_name=parsed_uri.host,
+            target_host=target_host,
+            target_port=port,
+        )
-    @defer.inlineCallbacks
-    def _get_well_known(self, server_name):
-        """Attempt to fetch and parse a .well-known file for the given server
-
-        Args:
-            server_name (bytes): name of the server, from the requested url
-
-        Returns:
-            Deferred[bytes|None]: either the new server name, from the .well-known, or
-                None if there was no .well-known file.
-        """
-        try:
-            result = self._well_known_cache[server_name]
-        except KeyError:
-            # TODO: should we linearise so that we don't end up doing two .well-known
-            # requests for the same server in parallel?
-            with Measure(self._clock, "get_well_known"):
-                result, cache_period = yield self._do_get_well_known(server_name)
-
-            if cache_period > 0:
-                self._well_known_cache.set(server_name, result, cache_period)
-
-        defer.returnValue(result)
-
-    @defer.inlineCallbacks
-    def _do_get_well_known(self, server_name):
-        """Actually fetch and parse a .well-known, without checking the cache
-
-        Args:
-            server_name (bytes): name of the server, from the requested url
-
-        Returns:
-            Deferred[Tuple[bytes|None|object],int]:
-                result, cache period, where result is one of:
-                 - the new server name from the .well-known (as a `bytes`)
-                 - None if there was no .well-known file.
-                 - INVALID_WELL_KNOWN if the .well-known was invalid
-        """
-        uri = b"https://%s/.well-known/matrix/server" % (server_name,)
-        uri_str = uri.decode("ascii")
-        logger.info("Fetching %s", uri_str)
-        try:
-            response = yield make_deferred_yieldable(
-                self._well_known_agent.request(b"GET", uri)
-            )
-            body = yield make_deferred_yieldable(readBody(response))
-            if response.code != 200:
-                raise Exception("Non-200 response %s" % (response.code,))
-
-            parsed_body = json.loads(body.decode("utf-8"))
-            logger.info("Response from .well-known: %s", parsed_body)
-            if not isinstance(parsed_body, dict):
-                raise Exception("not a dict")
-            if "m.server" not in parsed_body:
-                raise Exception("Missing key 'm.server'")
-        except Exception as e:
-            logger.info("Error fetching %s: %s", uri_str, e)
-
-            # add some randomness to the TTL to avoid a stampeding herd every hour
-            # after startup
-            cache_period = WELL_KNOWN_INVALID_CACHE_PERIOD
-            cache_period += random.uniform(0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER)
-            defer.returnValue((None, cache_period))
-
-        result = parsed_body["m.server"].encode("ascii")
-
-        cache_period = _cache_period_from_headers(
-            response.headers, time_now=self._reactor.seconds
-        )
-        if cache_period is None:
-            cache_period = WELL_KNOWN_DEFAULT_CACHE_PERIOD
-            # add some randomness to the TTL to avoid a stampeding herd every 24 hours
-            # after startup
-            cache_period += random.uniform(0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER)
-        else:
-            cache_period = min(cache_period, WELL_KNOWN_MAX_CACHE_PERIOD)
-
-        defer.returnValue((result, cache_period))
@implementer(IStreamClientEndpoint)
@@ -386,44 +276,6 @@ class LoggingHostnameEndpoint(object):
        return self.ep.connect(protocol_factory)
def _cache_period_from_headers(headers, time_now=time.time):
cache_controls = _parse_cache_control(headers)
if b"no-store" in cache_controls:
return 0
if b"max-age" in cache_controls:
try:
max_age = int(cache_controls[b"max-age"])
return max_age
except ValueError:
pass
expires = headers.getRawHeaders(b"expires")
if expires is not None:
try:
expires_date = stringToDatetime(expires[-1])
return expires_date - time_now()
except ValueError:
# RFC7234 says 'A cache recipient MUST interpret invalid date formats,
# especially the value "0", as representing a time in the past (i.e.,
# "already expired").
return 0
return None
def _parse_cache_control(headers):
cache_controls = {}
for hdr in headers.getRawHeaders(b"cache-control", []):
for directive in hdr.split(b","):
splits = [x.strip() for x in directive.split(b"=", 1)]
k = splits[0].lower()
v = splits[1] if len(splits) > 1 else None
cache_controls[k] = v
return cache_controls
@attr.s
class _RoutingResult(object):
    """The result returned by `_route_matrix_uri`.


@@ -120,7 +120,7 @@ class SrvResolver(object):
         if cache_entry:
             if all(s.expires > now for s in cache_entry):
                 servers = list(cache_entry)
-                defer.returnValue(servers)
+                return servers

         try:
             answers, _, _ = yield make_deferred_yieldable(
@@ -129,7 +129,7 @@ class SrvResolver(object):
         except DNSNameError:
             # TODO: cache this. We can get the SOA out of the exception, and use
             # the negative-TTL value.
-            defer.returnValue([])
+            return []
         except DomainError as e:
             # We failed to resolve the name (other than a NameError)
             # Try something in the cache, else reraise
@@ -138,7 +138,7 @@ class SrvResolver(object):
             logger.warn(
                 "Failed to resolve %r, falling back to cache. %r", service_name, e
             )
-            defer.returnValue(list(cache_entry))
+            return list(cache_entry)
         else:
             raise e
@@ -169,4 +169,4 @@ class SrvResolver(object):
         )
         self._cache[service_name] = list(servers)
-        defer.returnValue(servers)
+        return servers
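The mechanical `defer.returnValue(x)` → `return x` change running through these hunks relies on a Python 3 feature: a generator's `return` statement attaches its value to the `StopIteration` it raises, which is exactly what an `inlineCallbacks`-style driver reads back. A minimal sketch of that mechanism, with no Twisted dependency (`lookup` and `drive` are invented names for illustration):

```python
def lookup():
    """Toy inlineCallbacks-style generator: `yield` stands in for awaiting a
    Deferred; the final `return` carries the result.  On Python 2 generators
    could not return a value, hence the old defer.returnValue() idiom."""
    yield "pretend this is a Deferred"
    return ["server1", "server2"]


def drive(gen):
    """Tiny driver: advance the generator and read the value that its
    `return` statement attached to the StopIteration exception."""
    try:
        while True:
            next(gen)
    except StopIteration as e:
        return e.value


print(drive(lookup()))  # ['server1', 'server2']
```

`defer.returnValue` worked by raising a private exception carrying the value; with Python 3-only support, a plain `return` does the same job with less machinery.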


@@ -0,0 +1,187 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import logging
import random
import time
import attr
from twisted.internet import defer
from twisted.web.client import RedirectAgent, readBody
from twisted.web.http import stringToDatetime
from synapse.logging.context import make_deferred_yieldable
from synapse.util import Clock
from synapse.util.caches.ttlcache import TTLCache
from synapse.util.metrics import Measure
# period to cache .well-known results for by default
WELL_KNOWN_DEFAULT_CACHE_PERIOD = 24 * 3600
# jitter to add to the .well-known default cache ttl
WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER = 10 * 60
# period to cache failure to fetch .well-known for
WELL_KNOWN_INVALID_CACHE_PERIOD = 1 * 3600
# cap for .well-known cache period
WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600
# lower bound for .well-known cache period
WELL_KNOWN_MIN_CACHE_PERIOD = 5 * 60
logger = logging.getLogger(__name__)
_well_known_cache = TTLCache("well-known")
@attr.s(slots=True, frozen=True)
class WellKnownLookupResult(object):
delegated_server = attr.ib()
class WellKnownResolver(object):
"""Handles well-known lookups for matrix servers.
"""
def __init__(self, reactor, agent, well_known_cache=None):
self._reactor = reactor
self._clock = Clock(reactor)
if well_known_cache is None:
well_known_cache = _well_known_cache
self._well_known_cache = well_known_cache
self._well_known_agent = RedirectAgent(agent)
@defer.inlineCallbacks
def get_well_known(self, server_name):
"""Attempt to fetch and parse a .well-known file for the given server
Args:
server_name (bytes): name of the server, from the requested url
Returns:
Deferred[WellKnownLookupResult]: The result of the lookup
"""
try:
result = self._well_known_cache[server_name]
except KeyError:
# TODO: should we linearise so that we don't end up doing two .well-known
# requests for the same server in parallel?
with Measure(self._clock, "get_well_known"):
result, cache_period = yield self._do_get_well_known(server_name)
if cache_period > 0:
self._well_known_cache.set(server_name, result, cache_period)
return WellKnownLookupResult(delegated_server=result)
@defer.inlineCallbacks
def _do_get_well_known(self, server_name):
"""Actually fetch and parse a .well-known, without checking the cache
Args:
server_name (bytes): name of the server, from the requested url
Returns:
Deferred[Tuple[bytes|None|object, int]]:
result, cache period, where result is one of:
- the new server name from the .well-known (as a `bytes`)
- None if there was no .well-known file.
- INVALID_WELL_KNOWN if the .well-known was invalid
"""
uri = b"https://%s/.well-known/matrix/server" % (server_name,)
uri_str = uri.decode("ascii")
logger.info("Fetching %s", uri_str)
try:
response = yield make_deferred_yieldable(
self._well_known_agent.request(b"GET", uri)
)
body = yield make_deferred_yieldable(readBody(response))
if response.code != 200:
raise Exception("Non-200 response %s" % (response.code,))
parsed_body = json.loads(body.decode("utf-8"))
logger.info("Response from .well-known: %s", parsed_body)
if not isinstance(parsed_body, dict):
raise Exception("not a dict")
if "m.server" not in parsed_body:
raise Exception("Missing key 'm.server'")
except Exception as e:
logger.info("Error fetching %s: %s", uri_str, e)
# add some randomness to the TTL to avoid a stampeding herd every hour
# after startup
cache_period = WELL_KNOWN_INVALID_CACHE_PERIOD
cache_period += random.uniform(0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER)
return (None, cache_period)
result = parsed_body["m.server"].encode("ascii")
cache_period = _cache_period_from_headers(
response.headers, time_now=self._reactor.seconds
)
if cache_period is None:
cache_period = WELL_KNOWN_DEFAULT_CACHE_PERIOD
# add some randomness to the TTL to avoid a stampeding herd every 24 hours
# after startup
cache_period += random.uniform(0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER)
else:
cache_period = min(cache_period, WELL_KNOWN_MAX_CACHE_PERIOD)
cache_period = max(cache_period, WELL_KNOWN_MIN_CACHE_PERIOD)
return (result, cache_period)
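The cache-period handling above (a jittered default when the response headers give nothing, otherwise a value clamped between the MIN and MAX constants) can be condensed into one hypothetical helper; `effective_cache_period` is an invented name, and the constants mirror the ones defined at the top of this file:

```python
import random

WELL_KNOWN_DEFAULT_CACHE_PERIOD = 24 * 3600
WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER = 10 * 60
WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600
WELL_KNOWN_MIN_CACHE_PERIOD = 5 * 60


def effective_cache_period(header_period):
    """Sketch of the clamping logic: None (no cache headers) becomes the
    jittered default; otherwise the value is clamped into [MIN, MAX]."""
    if header_period is None:
        # jitter avoids a stampeding herd of re-fetches after startup
        return WELL_KNOWN_DEFAULT_CACHE_PERIOD + random.uniform(
            0, WELL_KNOWN_DEFAULT_CACHE_PERIOD_JITTER
        )
    period = min(header_period, WELL_KNOWN_MAX_CACHE_PERIOD)
    return max(period, WELL_KNOWN_MIN_CACHE_PERIOD)


print(effective_cache_period(10))       # clamped up to 300 (5 minutes)
print(effective_cache_period(10 ** 6))  # clamped down to 172800 (48 hours)
```

The lower bound is the new addition in this file relative to the old agent code, which only applied the upper cap.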
def _cache_period_from_headers(headers, time_now=time.time):
cache_controls = _parse_cache_control(headers)
if b"no-store" in cache_controls:
return 0
if b"max-age" in cache_controls:
try:
max_age = int(cache_controls[b"max-age"])
return max_age
except ValueError:
pass
expires = headers.getRawHeaders(b"expires")
if expires is not None:
try:
expires_date = stringToDatetime(expires[-1])
return expires_date - time_now()
except ValueError:
# RFC7234 says 'A cache recipient MUST interpret invalid date formats,
# especially the value "0", as representing a time in the past (i.e.,
# "already expired").
return 0
return None
def _parse_cache_control(headers):
cache_controls = {}
for hdr in headers.getRawHeaders(b"cache-control", []):
for directive in hdr.split(b","):
splits = [x.strip() for x in directive.split(b"=", 1)]
k = splits[0].lower()
v = splits[1] if len(splits) > 1 else None
cache_controls[k] = v
return cache_controls
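The directive parsing in `_parse_cache_control` splits each comma-separated directive on the first `=`, lower-cases the key, and stores `None` for valueless directives. A standalone sketch of the same logic over a raw header value (plain bytes instead of a twisted `Headers` object; `parse_cache_control` is an invented name):

```python
def parse_cache_control(header_value: bytes) -> dict:
    """Parse a raw Cache-Control header value into {directive: value-or-None}."""
    directives = {}
    for directive in header_value.split(b","):
        # split on the first '=' only, so values containing '=' survive
        splits = [x.strip() for x in directive.split(b"=", 1)]
        k = splits[0].lower()
        directives[k] = splits[1] if len(splits) > 1 else None
    return directives


print(parse_cache_control(b"Max-Age=3600, no-store"))
# {b'max-age': b'3600', b'no-store': None}
```

`_cache_period_from_headers` then checks `no-store` first (cache for 0 seconds), then `max-age`, and only falls back to the `Expires` header when neither is usable.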


@@ -158,7 +158,7 @@ def _handle_json_response(reactor, timeout_sec, request, response):
             response.code,
             response.phrase.decode("ascii", errors="replace"),
         )
-        defer.returnValue(body)
+        return body


 class MatrixFederationHttpClient(object):
@@ -256,7 +256,7 @@ class MatrixFederationHttpClient(object):

         response = yield self._send_request(request, **send_request_args)

-        defer.returnValue(response)
+        return response

     @defer.inlineCallbacks
     def _send_request(
@@ -520,7 +520,7 @@ class MatrixFederationHttpClient(object):
                         _flatten_response_never_received(e),
                     )
                     raise

-        defer.returnValue(response)
+        return response

     def build_auth_headers(
         self, destination, method, url_bytes, content=None, destination_is=None
@@ -644,7 +644,7 @@ class MatrixFederationHttpClient(object):
             self.reactor, self.default_timeout, request, response
         )

-        defer.returnValue(body)
+        return body

     @defer.inlineCallbacks
     def post_json(
@@ -713,7 +713,7 @@ class MatrixFederationHttpClient(object):
             body = yield _handle_json_response(
                 self.reactor, _sec_timeout, request, response
             )

-        defer.returnValue(body)
+        return body

     @defer.inlineCallbacks
     def get_json(
@@ -778,7 +778,7 @@ class MatrixFederationHttpClient(object):
             self.reactor, self.default_timeout, request, response
         )

-        defer.returnValue(body)
+        return body

     @defer.inlineCallbacks
     def delete_json(
@@ -836,7 +836,7 @@ class MatrixFederationHttpClient(object):
             body = yield _handle_json_response(
                 self.reactor, self.default_timeout, request, response
             )

-        defer.returnValue(body)
+        return body

     @defer.inlineCallbacks
     def get_file(
@@ -902,7 +902,7 @@ class MatrixFederationHttpClient(object):
             response.phrase.decode("ascii", errors="replace"),
             length,
         )

-        defer.returnValue((length, headers))
+        return (length, headers)


 class _ReadBodyToFileProtocol(protocol.Protocol):


@@ -166,7 +166,12 @@ def parse_string_from_args(
         value = args[name][0]

         if encoding:
-            value = value.decode(encoding)
+            try:
+                value = value.decode(encoding)
+            except ValueError:
+                raise SynapseError(
+                    400, "Query parameter %r must be %s" % (name, encoding)
+                )

         if allowed_values is not None and value not in allowed_values:
             message = "Query parameter %r must be one of [%s]" % (
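The new error handling above works because `UnicodeDecodeError` is a subclass of `ValueError`, so undecodable query-parameter bytes are converted into a client-visible 400 instead of an unhandled exception. A hypothetical standalone sketch (`decode_param` is an invented name, and a plain `ValueError` stands in for Synapse's `SynapseError`):

```python
def decode_param(name, value, encoding):
    """Decode a raw query-parameter value, turning decode failures into a
    400-style error with a readable message."""
    try:
        return value.decode(encoding)
    except ValueError:  # UnicodeDecodeError is a subclass of ValueError
        raise ValueError("Query parameter %r must be %s" % (name, encoding))


print(decode_param("room", b"lobby", "ascii"))  # 'lobby'

try:
    decode_param("room", b"\xff", "ascii")
except ValueError as e:
    print(e)  # Query parameter 'room' must be ascii
```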


@@ -11,7 +11,7 @@
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
-# limitations under the License.import opentracing
+# limitations under the License.

 # NOTE
@@ -89,7 +89,7 @@ the function becomes the operation name for the span.
         # We start
         yield we_wait
         # we finish
-        defer.returnValue(something_usual_and_useful)
+        return something_usual_and_useful

 Operation names can be explicitly set for functions by using
 ``trace_using_operation_name`` and
@@ -113,7 +113,7 @@ Operation names can be explicitly set for functions by using
         # We start
         yield we_wait
         # we finish
-        defer.returnValue(something_usual_and_useful)
+        return something_usual_and_useful

 Contexts and carriers
 ---------------------
@@ -150,10 +150,13 @@ Gotchas
 """

 import contextlib
+import inspect
 import logging
 import re
 from functools import wraps

+from canonicaljson import json
+
 from twisted.internet import defer

 from synapse.config import ConfigError
@@ -173,36 +176,12 @@ except ImportError:

 logger = logging.getLogger(__name__)

-class _DumTagNames(object):
-    """wrapper of opentracings tags. We need to have them if we
-    want to reference them without opentracing around. Clearly they
-    should never actually show up in a trace. `set_tags` overwrites
-    these with the correct ones."""
-
-    INVALID_TAG = "invalid-tag"
-    COMPONENT = INVALID_TAG
-    DATABASE_INSTANCE = INVALID_TAG
-    DATABASE_STATEMENT = INVALID_TAG
-    DATABASE_TYPE = INVALID_TAG
-    DATABASE_USER = INVALID_TAG
-    ERROR = INVALID_TAG
-    HTTP_METHOD = INVALID_TAG
-    HTTP_STATUS_CODE = INVALID_TAG
-    HTTP_URL = INVALID_TAG
-    MESSAGE_BUS_DESTINATION = INVALID_TAG
-    PEER_ADDRESS = INVALID_TAG
-    PEER_HOSTNAME = INVALID_TAG
-    PEER_HOST_IPV4 = INVALID_TAG
-    PEER_HOST_IPV6 = INVALID_TAG
-    PEER_PORT = INVALID_TAG
-    PEER_SERVICE = INVALID_TAG
-    SAMPLING_PRIORITY = INVALID_TAG
-    SERVICE = INVALID_TAG
-    SPAN_KIND = INVALID_TAG
-    SPAN_KIND_CONSUMER = INVALID_TAG
-    SPAN_KIND_PRODUCER = INVALID_TAG
-    SPAN_KIND_RPC_CLIENT = INVALID_TAG
-    SPAN_KIND_RPC_SERVER = INVALID_TAG
+# Block everything by default
+# A regex which matches the server_names to expose traces for.
+# None means 'block everything'.
+_homeserver_whitelist = None
+
+# Util methods


 def only_if_tracing(func):
@@ -219,11 +198,13 @@ def only_if_tracing(func):
     return _only_if_tracing_inner

-# A regex which matches the server_names to expose traces for.
-# None means 'block everything'.
-_homeserver_whitelist = None
-
-tags = _DumTagNames
+@contextlib.contextmanager
+def _noop_context_manager(*args, **kwargs):
+    """Does exactly what it says on the tin"""
+    yield
+
+# Setup


 def init_tracer(config):
@@ -247,26 +228,55 @@ def init_tracer(config):
     # Include the worker name
     name = config.worker_name if config.worker_name else "master"

+    # Pull out the jaeger config if it was given. Otherwise set it to something sensible.
+    # See https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/config.py
+
     set_homeserver_whitelist(config.opentracer_whitelist)

-    jaeger_config = JaegerConfig(
-        config={"sampler": {"type": "const", "param": 1}, "logging": True},
+    JaegerConfig(
+        config=config.jaeger_config,
         service_name="{} {}".format(config.server_name, name),
         scope_manager=LogContextScopeManager(config),
-    )
-    jaeger_config.initialize_tracer()
+    ).initialize_tracer()

     # Set up tags to be opentracing's tags
     global tags
     tags = opentracing.tags

-@contextlib.contextmanager
-def _noop_context_manager(*args, **kwargs):
-    """Does absolutely nothing really well. Can be entered and exited arbitrarily.
-    Good substitute for an opentracing scope."""
-    yield
+# Whitelisting


+@only_if_tracing
+def set_homeserver_whitelist(homeserver_whitelist):
+    """Sets the homeserver whitelist
+
+    Args:
+        homeserver_whitelist (Iterable[str]): regex of whitelisted homeservers
+    """
+    global _homeserver_whitelist
+    if homeserver_whitelist:
+        # Makes a single regex which accepts all passed in regexes in the list
+        _homeserver_whitelist = re.compile(
+            "({})".format(")|(".join(homeserver_whitelist))
+        )
+
+
+@only_if_tracing
+def whitelisted_homeserver(destination):
+    """Checks if a destination matches the whitelist
+
+    Args:
+        destination (str)
+    """
+    if _homeserver_whitelist:
+        return _homeserver_whitelist.match(destination)
+    return False
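The whitelist machinery above joins the configured patterns into a single alternation regex and matches destinations against it. A standalone sketch of that construction (`compile_whitelist` is an invented name):

```python
import re


def compile_whitelist(patterns):
    """Combine a list of regex strings into one alternation regex, as the
    set_homeserver_whitelist code does; empty input means 'block everything'."""
    if not patterns:
        return None
    # ["a", "b"] -> "(a)|(b)"
    return re.compile("({})".format(")|(".join(patterns)))


wl = compile_whitelist([r".*\.example\.com", r"synapse\.local"])
print(bool(wl.match("matrix.example.com")))  # True
print(bool(wl.match("evil.org")))            # False
```

Note that `re.match` only anchors at the start of the string, mirroring the original code: patterns that should match the whole destination need an explicit `$`.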
# Start spans and scopes
# Could use kwargs but I want these to be explicit
def start_active_span(
    operation_name,
@@ -285,8 +295,10 @@ def start_active_span(
    Returns:
        scope (Scope) or noop_context_manager
    """

    if opentracing is None:
        return _noop_context_manager()

    else:
        # We need to enter the scope here for the logcontext to become active
        return opentracing.tracer.start_active_span(
@@ -300,63 +312,13 @@ def start_active_span(
    )
-@only_if_tracing
-def close_active_span():
-    """Closes the active span. This will close it's logcontext if the context
-    was made for the span"""
-    opentracing.tracer.scope_manager.active.__exit__(None, None, None)
-
-
-@only_if_tracing
-def set_tag(key, value):
-    """Set's a tag on the active span"""
-    opentracing.tracer.active_span.set_tag(key, value)
-
-
-@only_if_tracing
-def log_kv(key_values, timestamp=None):
-    """Log to the active span"""
-    opentracing.tracer.active_span.log_kv(key_values, timestamp)
-
-
-# Note: we don't have a get baggage items because we're trying to hide all
-# scope and span state from synapse. I think this method may also be useless
-# as a result
-@only_if_tracing
-def set_baggage_item(key, value):
-    """Attach baggage to the active span"""
-    opentracing.tracer.active_span.set_baggage_item(key, value)
-
-
-@only_if_tracing
-def set_operation_name(operation_name):
-    """Sets the operation name of the active span"""
-    opentracing.tracer.active_span.set_operation_name(operation_name)
-
-
-@only_if_tracing
-def set_homeserver_whitelist(homeserver_whitelist):
-    """Sets the whitelist
-
-    Args:
-        homeserver_whitelist (iterable of strings): regex of whitelisted homeservers
-    """
-    global _homeserver_whitelist
-    if homeserver_whitelist:
-        # Makes a single regex which accepts all passed in regexes in the list
-        _homeserver_whitelist = re.compile(
-            "({})".format(")|(".join(homeserver_whitelist))
-        )
-
-
-@only_if_tracing
-def whitelisted_homeserver(destination):
-    """Checks if a destination matches the whitelist
-
-    Args:
-        destination (String)"""
-    if _homeserver_whitelist:
-        return _homeserver_whitelist.match(destination)
-    return False
+def start_active_span_follows_from(operation_name, contexts):
+    if opentracing is None:
+        return _noop_context_manager()
+    else:
+        references = [opentracing.follows_from(context) for context in contexts]
+        scope = start_active_span(operation_name, references=references)
+        return scope
def start_active_span_from_context(
@@ -372,12 +334,16 @@ def start_active_span_from_context(
    Extracts a span context from Twisted Headers.
    args:
        headers (twisted.web.http_headers.Headers)

        For the other args see opentracing.tracer

    returns:
        span_context (opentracing.span.SpanContext)
    """
    # Twisted encodes the values as lists whereas opentracing doesn't.
    # So, we take the first item in the list.
    # Also, twisted uses byte arrays while opentracing expects strings.

    if opentracing is None:
        return _noop_context_manager()
@@ -395,17 +361,90 @@ def start_active_span_from_context(
    )
def start_active_span_from_edu(
edu_content,
operation_name,
references=[],
tags=None,
start_time=None,
ignore_active_span=False,
finish_on_close=True,
):
"""
Extracts a span context from an edu and uses it to start a new active span
Args:
edu_content (dict): an edu_content with a `context` field whose value is
canonical json for a dict which contains opentracing information.
For the other args see opentracing.tracer
"""
if opentracing is None:
return _noop_context_manager()
carrier = json.loads(edu_content.get("context", "{}")).get("opentracing", {})
context = opentracing.tracer.extract(opentracing.Format.TEXT_MAP, carrier)
_references = [
opentracing.child_of(span_context_from_string(x))
for x in carrier.get("references", [])
]
# For some reason jaeger decided not to support the visualization of multiple parent
# spans or explicitly show references. I include the span context as a tag here as
# an aid to people debugging but it's really not an ideal solution.
references += _references
scope = opentracing.tracer.start_active_span(
operation_name,
child_of=context,
references=references,
tags=tags,
start_time=start_time,
ignore_active_span=ignore_active_span,
finish_on_close=finish_on_close,
)
scope.span.set_tag("references", carrier.get("references", []))
return scope
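The carrier shape assumed by `start_active_span_from_edu` can be sketched standalone: the EDU carries a JSON-encoded `context` field holding an `opentracing` sub-dict (the `uber-trace-id` key below is illustrative of a jaeger-style text-map carrier, not taken from this diff), and a missing `context` degrades to an empty carrier rather than raising:

```python
import json

edu_content = {
    "context": json.dumps({"opentracing": {"uber-trace-id": "abc:def:0:1"}})
}

# The same extraction expression as in the function above:
carrier = json.loads(edu_content.get("context", "{}")).get("opentracing", {})
print(carrier)  # {'uber-trace-id': 'abc:def:0:1'}

# An EDU with no context field yields an empty dict, not an exception:
empty = json.loads({}.get("context", "{}")).get("opentracing", {})
print(empty)    # {}
```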
# Opentracing setters for tags, logs, etc
@only_if_tracing
def set_tag(key, value):
"""Sets a tag on the active span"""
opentracing.tracer.active_span.set_tag(key, value)
@only_if_tracing
def log_kv(key_values, timestamp=None):
"""Log to the active span"""
opentracing.tracer.active_span.log_kv(key_values, timestamp)
@only_if_tracing
def set_operation_name(operation_name):
"""Sets the operation name of the active span"""
opentracing.tracer.active_span.set_operation_name(operation_name)
# Injection and extraction
@only_if_tracing
def inject_active_span_twisted_headers(headers, destination):
    """
    Injects a span context into twisted headers in-place

    Args:
        headers (twisted.web.http_headers.Headers)
        span (opentracing.Span)

    Returns:
        In-place modification of headers

    Note:
        The headers set by the tracer are custom to the tracer implementation which
@@ -437,7 +476,7 @@ def inject_active_span_byte_dict(headers, destination):
        span (opentracing.Span)

    Returns:
        In-place modification of headers

    Note:
        The headers set by the tracer are custom to the tracer implementation which
@@ -458,15 +497,195 @@ def inject_active_span_byte_dict(headers, destination):
            headers[key.encode()] = [value.encode()]
@only_if_tracing
def inject_active_span_text_map(carrier, destination=None):
"""
Injects a span context into a dict
Args:
carrier (dict)
destination (str): the name of the remote server. The span context
will only be injected if the destination matches the homeserver_whitelist
or destination is None.
Returns:
In-place modification of carrier
Note:
The headers set by the tracer are custom to the tracer implementation which
should be unique enough that they don't interfere with any headers set by
synapse or twisted. If we're still using jaeger these headers would be those
here:
https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
"""
if destination and not whitelisted_homeserver(destination):
return
opentracing.tracer.inject(
opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier
)
def active_span_context_as_string():
"""
Returns:
The active span context encoded as a string.
"""
carrier = {}
if opentracing:
opentracing.tracer.inject(
opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier
)
return json.dumps(carrier)
@only_if_tracing
def span_context_from_string(carrier):
"""
Returns:
The active span context decoded from a string.
"""
carrier = json.loads(carrier)
return opentracing.tracer.extract(opentracing.Format.TEXT_MAP, carrier)
@only_if_tracing
def extract_text_map(carrier):
"""
Wrapper method for opentracing's tracer.extract for TEXT_MAP.
Args:
carrier (dict): a dict possibly containing a span context.
Returns:
The active span context extracted from carrier.
"""
return opentracing.tracer.extract(opentracing.Format.TEXT_MAP, carrier)
# Tracing decorators
def trace(func):
"""
Decorator to trace a function.
Sets the operation name to that of the function's.
"""
if opentracing is None:
return func
@wraps(func)
def _trace_inner(self, *args, **kwargs):
if opentracing is None:
return func(self, *args, **kwargs)
scope = start_active_span(func.__name__)
scope.__enter__()
try:
result = func(self, *args, **kwargs)
if isinstance(result, defer.Deferred):
def call_back(result):
scope.__exit__(None, None, None)
return result
def err_back(result):
scope.span.set_tag(tags.ERROR, True)
scope.__exit__(None, None, None)
return result
result.addCallbacks(call_back, err_back)
else:
scope.__exit__(None, None, None)
return result
except Exception as e:
scope.__exit__(type(e), None, e.__traceback__)
raise
return _trace_inner
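The subtle part of the `trace` decorator above is that, for a function returning a Deferred, the tracing scope must stay open until the Deferred fires, so the exit happens inside a callback rather than when the function returns. A minimal sketch of that pattern with no Twisted dependency (`FakeDeferred`, `traced`, and `async_op` are invented names; an `events` list stands in for entering and exiting a scope):

```python
class FakeDeferred:
    """Stand-in for twisted's Deferred: collects callbacks, fires later."""

    def __init__(self):
        self._callbacks = []
        self.fired = False

    def addCallbacks(self, callback, errback):
        self._callbacks.append((callback, errback))

    def fire(self, value):
        self.fired = True
        for callback, _ in self._callbacks:
            value = callback(value)
        return value


events = []


def traced(func):
    def _inner(*args, **kwargs):
        events.append("enter")          # stands in for scope.__enter__()
        result = func(*args, **kwargs)
        if isinstance(result, FakeDeferred):
            def close_scope(value):
                events.append("exit")   # scope closed from the callback
                return value
            result.addCallbacks(close_scope, lambda f: f)
        else:
            events.append("exit")       # synchronous result: close now
        return result
    return _inner


@traced
def async_op():
    return FakeDeferred()


d = async_op()
print(events)   # ['enter']  -- the scope stays open until the deferred fires
d.fire("done")
print(events)   # ['enter', 'exit']
```

The errback branch in the real decorator additionally tags the span with `tags.ERROR` before closing it.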
def trace_using_operation_name(operation_name):
"""Decorator to trace a function. Explicitly sets the operation_name."""
def trace(func):
"""
Decorator to trace a function.
Sets the operation name to that of the function's.
"""
if opentracing is None:
return func
@wraps(func)
def _trace_inner(self, *args, **kwargs):
if opentracing is None:
return func(self, *args, **kwargs)
scope = start_active_span(operation_name)
scope.__enter__()
try:
result = func(self, *args, **kwargs)
if isinstance(result, defer.Deferred):
def call_back(result):
scope.__exit__(None, None, None)
return result
def err_back(result):
scope.span.set_tag(tags.ERROR, True)
scope.__exit__(None, None, None)
return result
result.addCallbacks(call_back, err_back)
else:
scope.__exit__(None, None, None)
return result
except Exception as e:
scope.__exit__(type(e), None, e.__traceback__)
raise
return _trace_inner
return trace
def tag_args(func):
"""
Tags all of the args to the active span.
"""
if not opentracing:
return func
@wraps(func)
def _tag_args_inner(self, *args, **kwargs):
argspec = inspect.getargspec(func)
for i, arg in enumerate(argspec.args[1:]):
set_tag("ARG_" + arg, args[i])
set_tag("args", args[len(argspec.args) :])
set_tag("kwargs", kwargs)
return func(self, *args, **kwargs)
return _tag_args_inner
def trace_servlet(servlet_name, func):
    """Decorator which traces a servlet. It starts a span with some servlet specific
    tags such as the servlet_name and request information"""
+    if not opentracing:
+        return func
+
     @wraps(func)
     @defer.inlineCallbacks
     def _trace_servlet_inner(request, *args, **kwargs):
-        with start_active_span_from_context(
-            request.requestHeaders,
+        with start_active_span(
             "incoming-client-request",
             tags={
                 "request_id": request.get_request_id(),
@@ -478,6 +697,44 @@ def trace_servlet(servlet_name, func):
             },
         ):
             result = yield defer.maybeDeferred(func, request, *args, **kwargs)
-            defer.returnValue(result)
+            return result

     return _trace_servlet_inner
# Helper class
class _DummyTagNames(object):
"""wrapper of opentracings tags. We need to have them if we
want to reference them without opentracing around. Clearly they
should never actually show up in a trace. `set_tags` overwrites
these with the correct ones."""
INVALID_TAG = "invalid-tag"
COMPONENT = INVALID_TAG
DATABASE_INSTANCE = INVALID_TAG
DATABASE_STATEMENT = INVALID_TAG
DATABASE_TYPE = INVALID_TAG
DATABASE_USER = INVALID_TAG
ERROR = INVALID_TAG
HTTP_METHOD = INVALID_TAG
HTTP_STATUS_CODE = INVALID_TAG
HTTP_URL = INVALID_TAG
MESSAGE_BUS_DESTINATION = INVALID_TAG
PEER_ADDRESS = INVALID_TAG
PEER_HOSTNAME = INVALID_TAG
PEER_HOST_IPV4 = INVALID_TAG
PEER_HOST_IPV6 = INVALID_TAG
PEER_PORT = INVALID_TAG
PEER_SERVICE = INVALID_TAG
SAMPLING_PRIORITY = INVALID_TAG
SERVICE = INVALID_TAG
SPAN_KIND = INVALID_TAG
SPAN_KIND_CONSUMER = INVALID_TAG
SPAN_KIND_PRODUCER = INVALID_TAG
SPAN_KIND_RPC_CLIENT = INVALID_TAG
SPAN_KIND_RPC_SERVER = INVALID_TAG
tags = _DummyTagNames
