Merge tag 'v1.7.0'

Synapse 1.7.0 (2019-12-13)
==========================

This release changes the default settings so that only local authenticated users can query the server's room directory. See the [upgrade notes](UPGRADE.rst#upgrading-to-v170) for details.
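
To restore the previous open behaviour, both options can be set explicitly in `homeserver.yaml`; a minimal sketch (both settings now default to `false`):

```
# Allow anyone, authenticated or not, to query the room directory, and
# allow other homeservers to fetch the directory over federation.
allow_public_rooms_without_auth: true
allow_public_rooms_over_federation: true
```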

Support for SQLite versions before 3.11 is now deprecated. A future release will refuse to start if used with an SQLite version before 3.11.

Administrators are reminded that SQLite should not be used for production instances. Instructions for migrating to Postgres are available [here](docs/postgres.md). A future release of synapse will, by default, disable federation for servers using SQLite.

No significant changes since 1.7.0rc2.

Synapse 1.7.0rc2 (2019-12-11)
=============================

Bugfixes
--------

- Fix an incorrect error message for invalid requests when setting a user's avatar URL. ([\#6497](https://github.com/matrix-org/synapse/issues/6497))
- Fix support for SQLite 3.7. ([\#6499](https://github.com/matrix-org/synapse/issues/6499))
- Fix regression where sending email push would not work when using a pusher worker. ([\#6507](https://github.com/matrix-org/synapse/issues/6507), [\#6509](https://github.com/matrix-org/synapse/issues/6509))

Synapse 1.7.0rc1 (2019-12-09)
=============================

Features
--------

- Implement per-room message retention policies; an example retention event is sketched after this list. ([\#5815](https://github.com/matrix-org/synapse/issues/5815), [\#6436](https://github.com/matrix-org/synapse/issues/6436))
- Add `etag` and `count` fields to key backup endpoints to help clients guess if there are new keys. ([\#5858](https://github.com/matrix-org/synapse/issues/5858))
- Add `/admin/v2/users` endpoint with pagination. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Require User-Interactive Authentication for `/account/3pid/add`, meaning the user's password will be required to add a third-party ID to their account. ([\#6119](https://github.com/matrix-org/synapse/issues/6119))
- Implement the `/_matrix/federation/unstable/net.atleastfornow/state/<context>` API as drafted in MSC2314. ([\#6176](https://github.com/matrix-org/synapse/issues/6176))
- Configure privacy-preserving settings by default for the room directory. ([\#6355](https://github.com/matrix-org/synapse/issues/6355))
- Add ephemeral messages support by partially implementing [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). ([\#6409](https://github.com/matrix-org/synapse/issues/6409))
- Add support for [MSC 2367](https://github.com/matrix-org/matrix-doc/pull/2367), which allows specifying a reason on all membership events. ([\#6434](https://github.com/matrix-org/synapse/issues/6434))
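
As an illustration of the retention feature, a room admin could set a policy by sending an `m.room.retention` state event. A hypothetical sketch, assuming the lifetime is given in milliseconds (28 days here):

```
PUT /_matrix/client/r0/rooms/<room_id>/state/m.room.retention

{
    "max_lifetime": 2419200000
}
```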

Bugfixes
--------

- Transfer non-standard power levels on room upgrade. ([\#6237](https://github.com/matrix-org/synapse/issues/6237))
- Fix error from the Pillow library when uploading RGBA images. ([\#6241](https://github.com/matrix-org/synapse/issues/6241))
- Correctly apply the event filter to the `state`, `events_before` and `events_after` fields in the response to `/context` requests. ([\#6329](https://github.com/matrix-org/synapse/issues/6329))
- Fix caching devices for remote users when using workers, so that we don't attempt to refetch (and potentially fail) each time a user requests devices. ([\#6332](https://github.com/matrix-org/synapse/issues/6332))
- Prevent account data syncs getting lost across TCP replication. ([\#6333](https://github.com/matrix-org/synapse/issues/6333))
- Fix a `TypeError` in `register_user()` when using the LDAP auth module. ([\#6406](https://github.com/matrix-org/synapse/issues/6406))
- Fix an intermittent exception when handling read-receipts. ([\#6408](https://github.com/matrix-org/synapse/issues/6408))
- Fix broken guest registration when there are existing blocks of numeric user IDs. ([\#6420](https://github.com/matrix-org/synapse/issues/6420))
- Fix a startup error when an HTTP proxy is defined. ([\#6421](https://github.com/matrix-org/synapse/issues/6421))
- Fix an error when using `synapse_port_db` on a vanilla Synapse database. ([\#6449](https://github.com/matrix-org/synapse/issues/6449))
- Fix uploading multiple cross-signing signatures for the same user. ([\#6451](https://github.com/matrix-org/synapse/issues/6451))
- Fix a bug which led to exceptions being thrown in a loop when a cross-signed device is deleted. ([\#6462](https://github.com/matrix-org/synapse/issues/6462))
- Fix `synapse_port_db` not exiting with a 0 code if something went wrong during the port process. ([\#6470](https://github.com/matrix-org/synapse/issues/6470))
- Improve sanity-checking when receiving events over federation. ([\#6472](https://github.com/matrix-org/synapse/issues/6472))
- Fix inaccurate per-block Prometheus metrics. ([\#6491](https://github.com/matrix-org/synapse/issues/6491))
- Fix small performance regression for sending invites. ([\#6493](https://github.com/matrix-org/synapse/issues/6493))
- Back out cross-signing code added in Synapse 1.5.0, which caused a performance regression. ([\#6494](https://github.com/matrix-org/synapse/issues/6494))

Improved Documentation
----------------------

- Update documentation and variables in the user-contributed systemd reference file. ([\#6369](https://github.com/matrix-org/synapse/issues/6369), [\#6490](https://github.com/matrix-org/synapse/issues/6490))
- Fix link in the user directory documentation. ([\#6388](https://github.com/matrix-org/synapse/issues/6388))
- Add build instructions to the docker readme. ([\#6390](https://github.com/matrix-org/synapse/issues/6390))
- Switch Ubuntu package install recommendation to use python3 packages in INSTALL.md. ([\#6443](https://github.com/matrix-org/synapse/issues/6443))
- Write some docs for the `quarantine_media` API. ([\#6458](https://github.com/matrix-org/synapse/issues/6458))
- Convert CONTRIBUTING.rst to markdown (among other small fixes). ([\#6461](https://github.com/matrix-org/synapse/issues/6461))

Deprecations and Removals
-------------------------

- Remove the `admin/v1/users_paginate` endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Remove the fallback for federation with old servers which lack the `/federation/v1/state_ids` API. ([\#6488](https://github.com/matrix-org/synapse/issues/6488))

Internal Changes
----------------

- Add benchmarks for structured logging and improve output performance. ([\#6266](https://github.com/matrix-org/synapse/issues/6266))
- Improve the performance of outputting structured logging. ([\#6322](https://github.com/matrix-org/synapse/issues/6322))
- Refactor some code in the event authentication path for clarity. ([\#6343](https://github.com/matrix-org/synapse/issues/6343), [\#6468](https://github.com/matrix-org/synapse/issues/6468), [\#6480](https://github.com/matrix-org/synapse/issues/6480))
- Clean up some unnecessary quotation marks around the codebase. ([\#6362](https://github.com/matrix-org/synapse/issues/6362))
- Complain on startup instead of 500'ing during runtime when `public_baseurl` isn't set when necessary. ([\#6379](https://github.com/matrix-org/synapse/issues/6379))
- Add a test scenario to make sure room history purges don't break `/messages` in the future. ([\#6392](https://github.com/matrix-org/synapse/issues/6392))
- Clarify the email configuration settings. ([\#6423](https://github.com/matrix-org/synapse/issues/6423))
- Add more tests to the blacklist when running in worker mode. ([\#6429](https://github.com/matrix-org/synapse/issues/6429))
- Refactor data store layer to support multiple databases in the future. ([\#6454](https://github.com/matrix-org/synapse/issues/6454), [\#6464](https://github.com/matrix-org/synapse/issues/6464), [\#6469](https://github.com/matrix-org/synapse/issues/6469), [\#6487](https://github.com/matrix-org/synapse/issues/6487))
- Port `synapse.rest.client.v1` to async/await. ([\#6482](https://github.com/matrix-org/synapse/issues/6482))
- Port `synapse.rest.client.v2_alpha` to async/await. ([\#6483](https://github.com/matrix-org/synapse/issues/6483))
- Port `SyncHandler` to async/await; a minimal before/after sketch of the pattern follows this list. ([\#6484](https://github.com/matrix-org/synapse/issues/6484))
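
For illustration, the async/await ports replace Twisted's `inlineCallbacks`-style generators with native coroutines. A minimal sketch of the pattern, using a hypothetical `get_profile` store method:

```
from twisted.internet import defer

# Before: a Twisted generator-based coroutine; callers yield its Deferred.
@defer.inlineCallbacks
def get_display_name(store, user_id):
    profile = yield store.get_profile(user_id)
    return profile["displayname"]

# After: a native coroutine; callers await it instead.
async def get_display_name(store, user_id):
    profile = await store.get_profile(user_id)
    return profile["displayname"]
```
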
Commit bee1982d17 by Erik Johnston, 2019-12-13 10:55:33 +00:00
228 changed files with 7893 additions and 4804 deletions


@@ -1,6 +1,6 @@
#!/usr/bin/env bash
set -ex
set -e
if [[ "$BUILDKITE_BRANCH" =~ ^(develop|master|dinsic|shhs|release-.*)$ ]]; then
echo "Not merging forward, as this is a release branch"
@@ -18,6 +18,8 @@ else
GITBASE=$BUILDKITE_PULL_REQUEST_BASE_BRANCH
fi
echo "--- merge_base_branch $GITBASE"
# Show what we are before
git --no-pager show -s


@@ -1,7 +1,7 @@
# Configuration file used for testing the 'synapse_port_db' script.
# Tells the script to connect to the postgresql database that will be available in the
# CI's Docker setup at the point where this file is considered.
server_name: "test"
server_name: "localhost:8800"
signing_key_path: "/src/.buildkite/test.signing.key"


@@ -1,7 +1,7 @@
# Configuration file used for testing the 'synapse_port_db' script.
# Tells the 'update_database' script to connect to the test SQLite database to upgrade its
# schema and run background updates on it.
server_name: "test"
server_name: "localhost:8800"
signing_key_path: "/src/.buildkite/test.signing.key"


@@ -28,3 +28,39 @@ User sees updates to presence from other users in the incremental sync.
Gapped incremental syncs include all state changes
Old members are included in gappy incr LL sync if they start speaking
# new failures as of https://github.com/matrix-org/sytest/pull/732
Device list doesn't change if remote server is down
Remote servers cannot set power levels in rooms without existing powerlevels
Remote servers should reject attempts by non-creators to set the power levels
# new failures as of https://github.com/matrix-org/sytest/pull/753
GET /rooms/:room_id/messages returns a message
GET /rooms/:room_id/messages lazy loads members correctly
Read receipts are sent as events
Only original members of the room can see messages from erased users
Device deletion propagates over federation
If user leaves room, remote user changes device and rejoins we see update in /sync and /keys/changes
Changing user-signing key notifies local users
Newly updated tags appear in an incremental v2 /sync
Server correctly handles incoming m.device_list_update
Local device key changes get to remote servers with correct prev_id
AS-ghosted users can use rooms via AS
Ghost user must register before joining room
Test that a message is pushed
Invites are pushed
Rooms with aliases are correctly named in pushed
Rooms with names are correctly named in pushed
Rooms with canonical alias are correctly named in pushed
Rooms with many users are correctly pushed
Don't get pushed for rooms you've muted
Rejected events are not pushed
Test that rejected pushers are removed.
Events come down the correct room
# https://buildkite.com/matrix-dot-org/sytest/builds/326#cca62404-a88a-4fcb-ad41-175fd3377603
Presence changes to UNAVAILABLE are reported to remote room members
If remote user leaves room, changes device and rejoins we see update in sync
uploading self-signing key notifies over federation
Inbound federation can receive redacted events
Outbound federation can request missing events


@@ -1,8 +1,8 @@
### Pull Request Checklist
<!-- Please read CONTRIBUTING.rst before submitting your pull request -->
<!-- Please read CONTRIBUTING.md before submitting your pull request -->
* [ ] Pull request is based on the develop branch
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#changelog)
* [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#sign-off)
* [ ] Code style is correct (run the [linters](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#code-style))
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#changelog)
* [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#sign-off)
* [ ] Code style is correct (run the [linters](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#code-style))


@@ -1,3 +1,98 @@
Synapse 1.6.1 (2019-11-28)
==========================

CONTRIBUTING.md (new file, 210 lines)

@@ -0,0 +1,210 @@
# Contributing code to Matrix
Everyone is welcome to contribute code to Matrix
(https://github.com/matrix-org), provided that they are willing to license
their contributions under the same license as the project itself. We follow a
simple 'inbound=outbound' model for contributions: the act of submitting an
'inbound' contribution means that the contributor agrees to license the code
under the same terms as the project's overall 'outbound' license - in our
case, this is almost always Apache Software License v2 (see [LICENSE](LICENSE)).
## How to contribute
The preferred and easiest way to contribute changes to Matrix is to fork the
relevant project on github, and then [create a pull request](
https://help.github.com/articles/using-pull-requests/) to ask us to pull
your changes into our repo.
**The single biggest thing you need to know is: please base your changes on
the develop branch - *not* master.**
We use the master branch to track the most recent release, so that folks who
blindly clone the repo and automatically check out master get something that
works. Develop is the unstable branch where all the development actually
happens: the workflow is that contributors should fork the develop branch to
make a 'feature' branch for a particular contribution, and then make a pull
request to merge this back into the matrix.org 'official' develop branch. We
use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use [Buildkite](https://buildkite.com/matrix-dot-org/synapse) for continuous
integration. If your change breaks the build, this will be shown in GitHub, so
please keep an eye on the pull request for feedback.
To run unit tests in a local development environment, you can use:
- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
(requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
(requires Docker). Entirely self-contained, recommended if you don't want to
set up PostgreSQL yourself.
Docker images are available for running the integration tests (SyTest) locally,
see the [documentation in the SyTest repo](
https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
information.
## Code style
All Matrix projects have a well-defined code-style - and sometimes we've even
got as far as documenting it... For instance, synapse's code style doc lives
[here](docs/code_style.md).
To facilitate meeting these criteria you can run `scripts-dev/lint.sh`
locally. Since this runs the tools listed in the above document, you'll need
python 3.6 and to install each tool:
```
# Install the dependencies
pip install -U black flake8 isort
# Run the linter script
./scripts-dev/lint.sh
```
**Note that the script does not just test/check, but also reformats code, so you
may wish to ensure any new code is committed first**. By default this script
checks all files and can take some time; if you alter only certain files, you
might wish to specify paths as arguments to reduce the run-time:
```
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
```
Before pushing new changes, ensure they don't produce linting errors. Commit any
files that were corrected.
Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
## Changelog
All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by [Towncrier](https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the `changelog.d` directory named
in the format of `PRnumber.type`. The type can be one of the following:
* `feature`
* `bugfix`
* `docker` (for updates to the Docker image)
* `doc` (for updates to the documentation)
* `removal` (also used for deprecations)
* `misc` (for internal-only changes)
The content of the file is your changelog entry, which should be a short
description of your change in the same style as the rest of our [changelog](
https://github.com/matrix-org/synapse/blob/master/CHANGES.md). The file can
contain Markdown formatting, and should end with a full stop ('.') for
consistency.
Adding credits to the changelog is encouraged; we value your
contributions and would like to have you shouted out in the release notes!
For example, a fix in PR #1234 would have its changelog entry in
`changelog.d/1234.bugfix`, and contain content like "The security levels of
Florbs are now validated when received over federation. Contributed by Jane
Matrix.".
## Debian changelog
Changes which affect the debian packaging files (in `debian`) are an
exception.
In this case, you will need to add an entry to the debian changelog for the
next release. For this, run the following command:
```
dch
```
This will make up a new version number (if there isn't already an unreleased
version in flight), and open an editor where you can add a new changelog entry.
(Our release process will ensure that the version number and maintainer name is
corrected for the release.)
If your change affects both the debian packaging *and* files outside the debian
directory, you will need both a regular newsfragment *and* an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)
## Sign off
In order to have a concrete record that your contribution is intentional
and you agree to license it under the same terms as the project's license, we've adopted the
same lightweight approach that the Linux Kernel
[submitting patches process](
https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin),
[Docker](https://github.com/docker/docker/blob/master/CONTRIBUTING.md), and many other
projects use: the DCO (Developer Certificate of Origin:
http://developercertificate.org/). This is a simple declaration that you wrote
the contribution or otherwise have the right to contribute it to Matrix:
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment:
```
Signed-off-by: Your Name <your@email.example.org>
```
We accept contributions under a legally identifiable name, such as
your name on government documentation or common-law names (names
claimed by legitimate usage or repute). Unfortunately, we cannot
accept anonymous contributions at this time.
Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
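
For example, once `user.name` and `user.email` are configured, the following produces a commit with the sign-off line appended automatically:

```
# Adds "Signed-off-by: Your Name <your@email.example.org>" to the message.
git commit -s -m "Validate Florb security levels over federation"
```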
## Conclusion
That's it! Matrix is a very open and collaborative project as you might expect
given our obsession with open communication. If we're going to successfully
matrix together all the fragmented communication technologies out there we are
reliant on contributions and collaboration from the community to do so. So
please get involved - and we hope you have as much fun hacking on Matrix as we
do!


@@ -1,206 +0,0 @@
Contributing code to Matrix
===========================
Everyone is welcome to contribute code to Matrix
(https://github.com/matrix-org), provided that they are willing to license
their contributions under the same license as the project itself. We follow a
simple 'inbound=outbound' model for contributions: the act of submitting an
'inbound' contribution means that the contributor agrees to license the code
under the same terms as the project's overall 'outbound' license - in our
case, this is almost always Apache Software License v2 (see LICENSE).
How to contribute
~~~~~~~~~~~~~~~~~
The preferred and easiest way to contribute changes to Matrix is to fork the
relevant project on github, and then create a pull request to ask us to pull
your changes into our repo
(https://help.github.com/articles/using-pull-requests/)
**The single biggest thing you need to know is: please base your changes on
the develop branch - /not/ master.**
We use the master branch to track the most recent release, so that folks who
blindly clone the repo and automatically check out master get something that
works. Develop is the unstable branch where all the development actually
happens: the workflow is that contributors should fork the develop branch to
make a 'feature' branch for a particular contribution, and then make a pull
request to merge this back into the matrix.org 'official' develop branch. We
use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `Buildkite <https://buildkite.com/matrix-dot-org/synapse>`_ for
continuous integration. Buildkite builds need to be authorised by a
maintainer. If your change breaks the build, this will be shown in GitHub, so
please keep an eye on the pull request for feedback.
To run unit tests in a local development environment, you can use:
- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
(requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
(requires Docker). Entirely self-contained, recommended if you don't want to
set up PostgreSQL yourself.
Docker images are available for running the integration tests (SyTest) locally,
see the `documentation in the SyTest repo
<https://github.com/matrix-org/sytest/blob/develop/docker/README.md>`_ for more
information.
Code style
~~~~~~~~~~
All Matrix projects have a well-defined code-style - and sometimes we've even
got as far as documenting it... For instance, synapse's code style doc lives
at https://github.com/matrix-org/synapse/tree/master/docs/code_style.md.
To facilitate meeting these criteria you can run ``scripts-dev/lint.sh``
locally. Since this runs the tools listed in the above document, you'll need
python 3.6 and to install each tool. **Note that the script does not just
test/check, but also reformats code, so you may wish to ensure any new code is
committed first**. By default this script checks all files and can take some
time; if you alter only certain files, you might wish to specify paths as
arguments to reduce the run-time.
Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
Before doing a commit, ensure the changes you've made don't produce
linting errors. You can do this by running the linters as follows. Ensure to
commit any files that were corrected.
::
# Install the dependencies
pip install -U black flake8 isort
# Run the linter script
./scripts-dev/lint.sh
Changelog
~~~~~~~~~
All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d`` file named
in the format of ``PRnumber.type``. The type can be one of the following:
* ``feature``.
* ``bugfix``.
* ``docker`` (for updates to the Docker image).
* ``doc`` (for updates to the documentation).
* ``removal`` (also used for deprecations).
* ``misc`` (for internal-only changes).
The content of the file is your changelog entry, which should be a short
description of your change in the same style as the rest of our `changelog
<https://github.com/matrix-org/synapse/blob/master/CHANGES.md>`_. The file can
contain Markdown formatting, and should end with a full stop ('.') for
consistency.
Adding credits to the changelog is encouraged, we value your
contributions and would like to have you shouted out in the release notes!
For example, a fix in PR #1234 would have its changelog entry in
``changelog.d/1234.bugfix``, and contain content like "The security levels of
Florbs are now validated when recieved over federation. Contributed by Jane
Matrix.".
Debian changelog
----------------
Changes which affect the debian packaging files (in ``debian``) are an
exception.
In this case, you will need to add an entry to the debian changelog for the
next release. For this, run the following command::
dch
This will make up a new version number (if there isn't already an unreleased
version in flight), and open an editor where you can add a new changelog entry.
(Our release process will ensure that the version number and maintainer name is
corrected for the release.)
If your change affects both the debian packaging *and* files outside the debian
directory, you will need both a regular newsfragment *and* an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)
Sign off
~~~~~~~~
In order to have a concrete record that your contribution is intentional
and you agree to license it under the same terms as the project's license, we've adopted the
same lightweight approach that the Linux Kernel
`submitting patches process <https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin>`_, Docker
(https://github.com/docker/docker/blob/master/CONTRIBUTING.md), and many other
projects use: the DCO (Developer Certificate of Origin:
http://developercertificate.org/). This is a simple declaration that you wrote
the contribution or otherwise have the right to contribute it to Matrix::
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment::
Signed-off-by: Your Name <your@email.example.org>
We accept contributions under a legally identifiable name, such as
your name on government documentation or common-law names (names
claimed by legitimate usage or repute). Unfortunately, we cannot
accept anonymous contributions at this time.
Git allows you to add this signoff automatically when using the ``-s``
flag to ``git commit``, which uses the name and email set in your
``user.name`` and ``user.email`` git configs.
Conclusion
~~~~~~~~~~
That's it! Matrix is a very open and collaborative project as you might expect
given our obsession with open communication. If we're going to successfully
matrix together all the fragmented communication technologies out there we are
reliant on contributions and collaboration from the community to do so. So
please get involved - and we hope you have as much fun hacking on Matrix as we
do!


@@ -109,8 +109,8 @@ Installing prerequisites on Ubuntu or Debian:
```
sudo apt-get install build-essential python3-dev libffi-dev \
python-pip python-setuptools sqlite3 \
libssl-dev python-virtualenv libjpeg-dev libxslt1-dev
python3-pip python3-setuptools sqlite3 \
libssl-dev python3-virtualenv libjpeg-dev libxslt1-dev
```
#### ArchLinux
@@ -133,9 +133,9 @@ sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
sudo yum groupinstall "Development Tools"
```
#### Mac OS X
#### macOS
Installing prerequisites on Mac OS X:
Installing prerequisites on macOS:
```
xcode-select --install
@@ -144,6 +144,14 @@ sudo pip install virtualenv
brew install pkg-config libffi
```
On macOS Catalina (10.15) you may need to explicitly install OpenSSL
via brew and inform `pip` about it so that `psycopg2` builds:
```
brew install openssl@1.1
export LDFLAGS=-L/usr/local/Cellar/openssl\@1.1/1.1.1d/lib/
```
#### OpenSUSE
Installing prerequisites on openSUSE:


@@ -75,6 +75,23 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.7.0
===================
In an attempt to configure Synapse in a privacy preserving way, the default
behaviours of ``allow_public_rooms_without_auth`` and
``allow_public_rooms_over_federation`` have been inverted. This means that by
default, only authenticated users querying the Client/Server API will be able
to query the room directory, and relatedly that the server will not share
room directory information with other servers over federation.
If your installation does not explicitly set these settings one way or the other
and you want either setting to be ``true`` then it will be necessary to update
your homeserver configuration file accordingly.
For more details on the surrounding context see our `explainer
<https://matrix.org/blog/2019/11/09/avoiding-unwelcome-visitors-on-private-matrix-servers>`_.
Upgrading to v1.5.0
===================


@@ -25,7 +25,7 @@ Restart=on-abort
User=synapse
Group=nogroup
WorkingDirectory=/opt/synapse
WorkingDirectory=/home/synapse/synapse
ExecStart=/home/synapse/synapse/env/bin/python -m synapse.app.homeserver --config-path=/home/synapse/synapse/homeserver.yaml
SyslogIdentifier=matrix-synapse

debian/changelog

@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.7.0) stable; urgency=medium
* New synapse release 1.7.0.
-- Synapse Packaging team <packages@matrix.org> Fri, 13 Dec 2019 10:19:38 +0000
matrix-synapse-py3 (1.6.1) stable; urgency=medium
* New synapse release 1.6.1.


@@ -130,3 +130,15 @@ docker run -it --rm \
This will generate the same configuration file as the legacy mode used, but
will store it in `/data/homeserver.yaml` instead of a temporary location. You
can then use it as shown above at [Running synapse](#running-synapse).
## Building the image
If you need to build the image from a Synapse checkout, use the following `docker
build` command from the repo's root:
```
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
You can choose to build a different docker image by changing the value of the `-f` flag to
point to another Dockerfile.


@@ -21,3 +21,20 @@ It returns a JSON body like the following:
]
}
```
# Quarantine media in a room
This API 'quarantines' all the media in a room.
The API is:
```
POST /_synapse/admin/v1/quarantine_media/<room_id>
{}
```
Quarantining media means that it is marked as inaccessible by users. It applies
to any local media, and any locally-cached copies of remote media.
The media file itself (and any thumbnails) is not deleted from the server.
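
A sketch of an invocation with `curl`, assuming Synapse listens on `localhost:8008` and `<admin_access_token>` belongs to a server admin:

```
curl -X POST -H "Authorization: Bearer <admin_access_token>" -d '{}' \
    "http://localhost:8008/_synapse/admin/v1/quarantine_media/<room_id>"
```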


@@ -1,3 +1,48 @@
List Accounts
=============
This API returns all local user accounts.
The API is::
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
including an ``access_token`` of a server admin.
The parameters ``from`` and ``limit`` are required only for pagination.
By default, a ``limit`` of 100 is used.
The parameter ``user_id`` can be used to select only users with user ids that
contain this value.
The parameter ``guests=false`` can be used to exclude guest users;
the default is to include guest users.
The parameter ``deactivated=true`` can be used to include deactivated users;
the default is to exclude deactivated users.
If the endpoint does not return a ``next_token`` then there are no more users left.
It returns a JSON body like the following:
.. code:: json
{
"users": [
{
"name": "<user_id1>",
"password_hash": "<password_hash1>",
"is_guest": 0,
"admin": 0,
"user_type": null,
"deactivated": 0
}, {
"name": "<user_id2>",
"password_hash": "<password_hash2>",
"is_guest": 0,
"admin": 1,
"user_type": null,
"deactivated": 0
}
],
"next_token": "100"
}
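
For example, the first page could be fetched with ``curl`` (assuming Synapse listens on ``localhost:8008``)::

    curl 'http://localhost:8008/_synapse/admin/v2/users?from=0&limit=10&access_token=<admin_access_token>'

If the response contains a ``next_token``, pass its value as ``from`` on the
next request to fetch the following page.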
Query Account
=============


@@ -54,15 +54,16 @@ pid_file: DATADIR/homeserver.pid
#
#require_auth_for_profile_requests: true
# If set to 'false', requires authentication to access the server's public rooms
# directory through the client API. Defaults to 'true'.
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
#
#allow_public_rooms_without_auth: false
#allow_public_rooms_without_auth: true
# If set to 'false', forbids any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'true'.
# If set to 'true', allows any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'false'.
#
#allow_public_rooms_over_federation: false
#allow_public_rooms_over_federation: true
# The default room version for newly created rooms.
#
@@ -328,6 +329,69 @@ listeners:
#
#user_ips_max_age: 14d
# Message retention policy at the server level.
#
# Room admins and mods can define a retention period for their rooms using the
# 'm.room.retention' state event, and server admins can cap this period by setting
# the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
#
# If this feature is enabled, Synapse will regularly look for and purge events
# which are older than the room's maximum retention period. Synapse will also
# filter events received over federation so that events that should have been
# purged are ignored and not stored again.
#
retention:
# The message retention policies feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# Default retention policy. If set, Synapse will apply it to rooms that lack the
# 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
# matter much because Synapse doesn't take it into account yet.
#
#default_policy:
# min_lifetime: 1d
# max_lifetime: 1y
# Retention policy limits. If set, a user won't be able to send a
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
# that's not within this range. This is especially useful in closed federations,
# in which server admins can make sure every federating server applies the same
# rules.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
# Server admins can define the settings of the background jobs purging the
# events which lifetime has expired under the 'purge_jobs' section.
#
# If no configuration is provided, a single job will be set up to delete expired
# events in every room daily.
#
# Each job's configuration defines which range of message lifetimes the job
# takes care of. For example, if 'shortest_max_lifetime' is '2d' and
# 'longest_max_lifetime' is '3d', the job will handle purging expired events in
# rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
# lower than or equal to 3 days. Both the minimum and the maximum value of a
# range are optional, e.g. a job with no 'shortest_max_lifetime' and a
# 'longest_max_lifetime' of '3d' will handle every room with a retention policy
# which 'max_lifetime' is lower than or equal to three days.
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
# of outdated messages on a very frequent basis (e.g. every 5min), but not want
# that purge to be performed by a job that's iterating over every room it knows,
# which would be quite heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 5m
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 24h
## TLS ##
@@ -1270,8 +1334,23 @@ password_config:
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
#
# # notif_from defines the "From" address to use when sending emails.
# # It must be set if email sending is enabled.
# #
# # The placeholder '%(app)s' will be replaced by the application name,
# # which is normally 'app_name' (below), but may be overridden by the
# # Matrix client application.
# #
# # Note that the placeholder must be written '%(app)s', including the
# # trailing 's'.
# #
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name: Matrix
#
# # app_name defines the default value for '%(app)s' in notif_from. It
# # defaults to 'Matrix'.
# #
# #app_name: my_branded_matrix_server
#
# # Enable email notifications by default
# #
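
Because notif_from is interpolated with Python's %-style named placeholders, the note about the trailing 's' can be verified directly; a quick sketch:

```python
# '%(app)s' is a Python %-style named placeholder: '(app)' names the key
# and the trailing 's' is the string conversion type, hence the warning.
notif_from = "Your Friendly %(app)s homeserver <noreply@example.com>"
print(notif_from % {"app": "Matrix"})
# -> Your Friendly Matrix homeserver <noreply@example.com>
```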

View File

@ -7,7 +7,6 @@ who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to execute the SQL here
https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/delta/53/user_dir_populate.sql
solution to fix it is to execute the SQL [here](../synapse/storage/data_stores/main/schema/delta/53/user_dir_populate.sql)
and then restart synapse. This should then start a background task to
flush the current tables and regenerate the directory.

View File

@ -27,7 +27,7 @@ class Store(object):
"_store_pdu_reference_hash_txn"
]
_store_prev_pdu_hash_txn = SignatureStore.__dict__["_store_prev_pdu_hash_txn"]
_simple_insert_txn = SQLBaseStore.__dict__["_simple_insert_txn"]
simple_insert_txn = SQLBaseStore.__dict__["simple_insert_txn"]
store = Store()

View File

@ -58,10 +58,10 @@ if __name__ == "__main__":
" on it."
)
)
parser.add_argument("-v", action='store_true')
parser.add_argument("-v", action="store_true")
parser.add_argument(
"--database-config",
type=argparse.FileType('r'),
type=argparse.FileType("r"),
required=True,
help="A database config file for either a SQLite3 database or a PostgreSQL one.",
)
@ -101,10 +101,7 @@ if __name__ == "__main__":
# Instantiate and initialise the homeserver object.
hs = MockHomeserver(
config,
database_engine,
db_conn,
db_config=config.database_config,
config, database_engine, db_conn, db_config=config.database_config,
)
# setup instantiates the store within the homeserver object.
hs.setup()
@ -112,13 +109,13 @@ if __name__ == "__main__":
@defer.inlineCallbacks
def run_background_updates():
yield store.run_background_updates(sleep=False)
yield store.db.updates.run_background_updates(sleep=False)
# Stop the reactor to exit the script once every background update is run.
reactor.stop()
# Apply all background updates on the database.
reactor.callWhenRunning(lambda: run_as_background_process(
"background_updates", run_background_updates
))
reactor.callWhenRunning(
lambda: run_as_background_process("background_updates", run_background_updates)
)
reactor.run()

View File

@ -47,6 +47,7 @@ from synapse.storage.data_stores.main.media_repository import (
from synapse.storage.data_stores.main.registration import (
RegistrationBackgroundUpdateStore,
)
from synapse.storage.data_stores.main.room import RoomBackgroundUpdateStore
from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.data_stores.main.search import SearchBackgroundUpdateStore
from synapse.storage.data_stores.main.state import StateBackgroundUpdateStore
@ -54,6 +55,7 @@ from synapse.storage.data_stores.main.stats import StatsStore
from synapse.storage.data_stores.main.user_directory import (
UserDirectoryBackgroundUpdateStore,
)
from synapse.storage.database import Database
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util import Clock
@ -131,54 +133,22 @@ class Store(
EventsBackgroundUpdatesStore,
MediaRepositoryBackgroundUpdateStore,
RegistrationBackgroundUpdateStore,
RoomBackgroundUpdateStore,
RoomMemberBackgroundUpdateStore,
SearchBackgroundUpdateStore,
StateBackgroundUpdateStore,
UserDirectoryBackgroundUpdateStore,
StatsStore,
):
def __init__(self, db_conn, hs):
super().__init__(db_conn, hs)
self.db_pool = hs.get_db_pool()
@defer.inlineCallbacks
def runInteraction(self, desc, func, *args, **kwargs):
def r(conn):
try:
i = 0
N = 5
while True:
try:
txn = conn.cursor()
return func(
LoggingTransaction(txn, desc, self.database_engine, [], []),
*args,
**kwargs
)
except self.database_engine.module.DatabaseError as e:
if self.database_engine.is_deadlock(e):
logger.warning("[TXN DEADLOCK] {%s} %d/%d", desc, i, N)
if i < N:
i += 1
conn.rollback()
continue
raise
except Exception as e:
logger.debug("[TXN FAIL] {%s} %s", desc, e)
raise
with PreserveLoggingContext():
return (yield self.db_pool.runWithConnection(r))
def execute(self, f, *args, **kwargs):
return self.runInteraction(f.__name__, f, *args, **kwargs)
return self.db.runInteraction(f.__name__, f, *args, **kwargs)
def execute_sql(self, sql, *args):
def r(txn):
txn.execute(sql, args)
return txn.fetchall()
return self.runInteraction("execute_sql", r)
return self.db.runInteraction("execute_sql", r)
def insert_many_txn(self, txn, table, headers, rows):
sql = "INSERT INTO %s (%s) VALUES (%s)" % (
@ -221,7 +191,7 @@ class Porter(object):
def setup_table(self, table):
if table in APPEND_ONLY_TABLES:
# It's safe to just carry on inserting.
row = yield self.postgres_store._simple_select_one(
row = yield self.postgres_store.db.simple_select_one(
table="port_from_sqlite3",
keyvalues={"table_name": table},
retcols=("forward_rowid", "backward_rowid"),
@ -231,12 +201,14 @@ class Porter(object):
total_to_port = None
if row is None:
if table == "sent_transactions":
forward_chunk, already_ported, total_to_port = (
yield self._setup_sent_transactions()
)
(
forward_chunk,
already_ported,
total_to_port,
) = yield self._setup_sent_transactions()
backward_chunk = 0
else:
yield self.postgres_store._simple_insert(
yield self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": table,
@ -266,7 +238,7 @@ class Porter(object):
yield self.postgres_store.execute(delete_all)
yield self.postgres_store._simple_insert(
yield self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={"table_name": table, "forward_rowid": 1, "backward_rowid": 0},
)
@ -320,7 +292,7 @@ class Porter(object):
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null)`, as that is
# what synapse expects to be there.
yield self.postgres_store._simple_insert(
yield self.postgres_store.db.simple_insert(
table=table, values={"stream_id": None}
)
self.progress.update(table, table_size) # Mark table as done
@ -361,7 +333,9 @@ class Porter(object):
return headers, forward_rows, backward_rows
headers, frows, brows = yield self.sqlite_store.runInteraction("select", r)
headers, frows, brows = yield self.sqlite_store.db.runInteraction(
"select", r
)
if frows or brows:
if frows:
@ -375,7 +349,7 @@ class Porter(object):
def insert(txn):
self.postgres_store.insert_many_txn(txn, table, headers[1:], rows)
self.postgres_store._simple_update_one_txn(
self.postgres_store.db.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": table},
@ -414,7 +388,7 @@ class Porter(object):
return headers, rows
headers, rows = yield self.sqlite_store.runInteraction("select", r)
headers, rows = yield self.sqlite_store.db.runInteraction("select", r)
if rows:
forward_chunk = rows[-1][0] + 1
@ -431,8 +405,8 @@ class Porter(object):
rows_dict = []
for row in rows:
d = dict(zip(headers, row))
if "\0" in d['value']:
logger.warning('dropping search row %s', d)
if "\0" in d["value"]:
logger.warning("dropping search row %s", d)
else:
rows_dict.append(d)
@ -452,7 +426,7 @@ class Porter(object):
],
)
self.postgres_store._simple_update_one_txn(
self.postgres_store.db.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": "event_search"},
@ -502,17 +476,14 @@ class Porter(object):
self.progress.set_state("Preparing %s" % config["name"])
conn = self.setup_db(config, engine)
db_pool = adbapi.ConnectionPool(
config["name"], **config["args"]
)
db_pool = adbapi.ConnectionPool(config["name"], **config["args"])
hs = MockHomeserver(self.hs_config, engine, conn, db_pool)
store = Store(conn, hs)
store = Store(Database(hs), conn, hs)
yield store.runInteraction(
"%s_engine.check_database" % config["name"],
engine.check_database,
yield store.db.runInteraction(
"%s_engine.check_database" % config["name"], engine.check_database,
)
return store
@ -520,7 +491,9 @@ class Porter(object):
@defer.inlineCallbacks
def run_background_updates_on_postgres(self):
# Manually apply all background updates on the PostgreSQL database.
postgres_ready = yield self.postgres_store.has_completed_background_updates()
postgres_ready = (
yield self.postgres_store.db.updates.has_completed_background_updates()
)
if not postgres_ready:
# Only say that we're running background updates when there are background
@ -528,9 +501,9 @@ class Porter(object):
self.progress.set_state("Running background updates on PostgreSQL")
while not postgres_ready:
yield self.postgres_store.do_next_background_update(100)
yield self.postgres_store.db.updates.do_next_background_update(100)
postgres_ready = yield (
self.postgres_store.has_completed_background_updates()
self.postgres_store.db.updates.has_completed_background_updates()
)
@defer.inlineCallbacks
@ -539,7 +512,9 @@ class Porter(object):
self.sqlite_store = yield self.build_db_store(self.sqlite_config)
# Check if all background updates are done, abort if not.
updates_complete = yield self.sqlite_store.has_completed_background_updates()
updates_complete = (
yield self.sqlite_store.db.updates.has_completed_background_updates()
)
if not updates_complete:
sys.stderr.write(
"Pending background updates exist in the SQLite3 database."
@ -580,22 +555,22 @@ class Porter(object):
)
try:
yield self.postgres_store.runInteraction("alter_table", alter_table)
yield self.postgres_store.db.runInteraction("alter_table", alter_table)
except Exception:
# On Error Resume Next
pass
yield self.postgres_store.runInteraction(
yield self.postgres_store.db.runInteraction(
"create_port_table", create_port_table
)
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store._simple_select_onecol(
sqlite_tables = yield self.sqlite_store.db.simple_select_onecol(
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
)
postgres_tables = yield self.postgres_store._simple_select_onecol(
postgres_tables = yield self.postgres_store.db.simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
@ -685,11 +660,11 @@ class Porter(object):
rows = txn.fetchall()
headers = [column[0] for column in txn.description]
ts_ind = headers.index('ts')
ts_ind = headers.index("ts")
return headers, [r for r in rows if r[ts_ind] < yesterday]
headers, rows = yield self.sqlite_store.runInteraction("select", r)
headers, rows = yield self.sqlite_store.db.runInteraction("select", r)
rows = self._convert_rows("sent_transactions", headers, rows)
@ -722,7 +697,7 @@ class Porter(object):
next_chunk = yield self.sqlite_store.execute(get_start_id)
next_chunk = max(max_inserted_rowid + 1, next_chunk)
yield self.postgres_store._simple_insert(
yield self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": "sent_transactions",
@ -735,7 +710,7 @@ class Porter(object):
txn.execute(
"SELECT count(*) FROM sent_transactions" " WHERE ts >= ?", (yesterday,)
)
size, = txn.fetchone()
(size,) = txn.fetchone()
return int(size)
remaining_count = yield self.sqlite_store.execute(get_sent_table_size)
@ -782,10 +757,13 @@ class Porter(object):
def _setup_state_group_id_seq(self):
def r(txn):
txn.execute("SELECT MAX(id) FROM state_groups")
next_id = txn.fetchone()[0] + 1
curr_id = txn.fetchone()[0]
if not curr_id:
return
next_id = curr_id + 1
txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
return self.postgres_store.runInteraction("setup_state_group_id_seq", r)
return self.postgres_store.db.runInteraction("setup_state_group_id_seq", r)
##############################################
@ -866,7 +844,7 @@ class CursesProgress(Progress):
duration = int(now) - int(self.start_time)
minutes, seconds = divmod(duration, 60)
duration_str = '%02dm %02ds' % (minutes, seconds)
duration_str = "%02dm %02ds" % (minutes, seconds)
if self.finished:
status = "Time spent: %s (Done!)" % (duration_str,)
@ -876,7 +854,7 @@ class CursesProgress(Progress):
left = float(self.total_remaining) / self.total_processed
est_remaining = (int(now) - self.start_time) * left
est_remaining_str = '%02dm %02ds remaining' % divmod(est_remaining, 60)
est_remaining_str = "%02dm %02ds remaining" % divmod(est_remaining, 60)
else:
est_remaining_str = "Unknown"
status = "Time spent: %s (est. remaining: %s)" % (
@ -962,7 +940,7 @@ if __name__ == "__main__":
description="A script to port an existing synapse SQLite database to"
" a new PostgreSQL database."
)
parser.add_argument("-v", action='store_true')
parser.add_argument("-v", action="store_true")
parser.add_argument(
"--sqlite-database",
required=True,
@ -971,12 +949,12 @@ if __name__ == "__main__":
)
parser.add_argument(
"--postgres-config",
type=argparse.FileType('r'),
type=argparse.FileType("r"),
required=True,
help="The database config file for the PostgreSQL database",
)
parser.add_argument(
"--curses", action='store_true', help="display a curses based progress UI"
"--curses", action="store_true", help="display a curses based progress UI"
)
parser.add_argument(
@ -1052,3 +1030,4 @@ if __name__ == "__main__":
if end_error_exec_info:
exc_type, exc_value, exc_traceback = end_error_exec_info
traceback.print_exception(exc_type, exc_value, exc_traceback)
sys.exit(5)
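
The pattern running through this script's changes is the storage refactor behind Synapse 1.7.0: the underscore-prefixed helpers and runInteraction move onto the store's new `db` (Database) attribute. A hedged before/after sketch using names from the hunks above:

```python
# Before (1.6.x, illustrative):
#   row = yield self.postgres_store._simple_select_one(...)
#   yield self.postgres_store.runInteraction("alter_table", alter_table)

# After (1.7.0): the Database object owns the helpers and the
# transaction runner, so every call site goes through `store.db`.
row = yield self.postgres_store.db.simple_select_one(
    table="port_from_sqlite3",
    keyvalues={"table_name": table},
    retcols=("forward_rowid", "backward_rowid"),
)
yield self.postgres_store.db.runInteraction("alter_table", alter_table)
```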

View File

@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.6.1"
__version__ = "1.7.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@ -1,7 +1,8 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2018-2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -94,6 +95,8 @@ class EventTypes(object):
ServerACL = "m.room.server_acl"
Pinned = "m.room.pinned_events"
Retention = "m.room.retention"
class RejectedReason(object):
AUTH_ERROR = "auth_error"
@ -145,3 +148,7 @@ class EventContentFields(object):
# Labels for the event, cf https://github.com/matrix-org/matrix-doc/pull/2326
LABELS = "org.matrix.labels"
# Timestamp to delete the event after
# cf https://github.com/matrix-org/matrix-doc/pull/2228
SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"

View File

@ -1,5 +1,8 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018-2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

View File

@ -269,7 +269,7 @@ def start(hs, listeners=None):
# It is now safe to start your Synapse.
hs.start_listening(listeners)
hs.get_datastore().start_profiling()
hs.get_datastore().db.start_profiling()
setup_sentry(hs)
setup_sdnotify(hs)

View File

@ -40,6 +40,7 @@ from synapse.replication.slave.storage.transactions import SlavedTransactionStor
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams._base import ReceiptsStream
from synapse.server import HomeServer
from synapse.storage.database import Database
from synapse.storage.engines import create_engine
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
@ -59,8 +60,8 @@ class FederationSenderSlaveStore(
SlavedDeviceStore,
SlavedPresenceStore,
):
def __init__(self, db_conn, hs):
super(FederationSenderSlaveStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(FederationSenderSlaveStore, self).__init__(database, db_conn, hs)
# We pull out the current federation stream position now so that we
# always have a known value for the federation position in memory so
@ -69,7 +70,7 @@ class FederationSenderSlaveStore(
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = "SELECT stream_id FROM federation_stream_position" " WHERE type = ?"
sql = "SELECT stream_id FROM federation_stream_position WHERE type = ?"
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()

View File

@ -68,9 +68,9 @@ from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.well_known import WellKnownResource
from synapse.server import HomeServer
from synapse.storage import DataStore, are_all_users_on_domain
from synapse.storage import DataStore
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.storage.prepare_database import UpgradeDatabaseException
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
@ -294,22 +294,6 @@ class SynapseHomeServer(HomeServer):
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
def run_startup_checks(self, db_conn, database_engine):
all_users_native = are_all_users_on_domain(
db_conn.cursor(), database_engine, self.hostname
)
if not all_users_native:
quit_with_error(
"Found users in database not native to %s!\n"
"You cannot changed a synapse server_name after it's been configured"
% (self.hostname,)
)
try:
database_engine.check_database(db_conn.cursor())
except IncorrectDatabaseSetup as e:
quit_with_error(str(e))
# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
@ -357,16 +341,12 @@ def setup(config_options):
synapse.config.logger.setup_logging(hs, config, use_worker_options=False)
logger.info("Preparing database: %s...", config.database_config["name"])
logger.info("Setting up server")
try:
with hs.get_db_conn(run_new_connection=False) as db_conn:
prepare_database(db_conn, database_engine, config=config)
database_engine.on_new_connection(db_conn)
hs.run_startup_checks(db_conn, database_engine)
db_conn.commit()
hs.setup()
except IncorrectDatabaseSetup as e:
quit_with_error(str(e))
except UpgradeDatabaseException:
sys.stderr.write(
"\nFailed to upgrade database.\n"
@ -375,9 +355,6 @@ def setup(config_options):
)
sys.exit(1)
logger.info("Database prepared in %s.", config.database_config["name"])
hs.setup()
hs.setup_master()
@defer.inlineCallbacks
@ -436,7 +413,7 @@ def setup(config_options):
_base.start(hs, config.listeners)
hs.get_pusherpool().start()
hs.get_datastore().start_doing_background_updates()
hs.get_datastore().db.updates.start_doing_background_updates()
except Exception:
# Print the exception and bail out.
print("Error during startup:", file=sys.stderr)
@ -542,8 +519,8 @@ def phone_stats_home(hs, stats, stats_process=_stats_process):
# Database version
#
stats["database_engine"] = hs.get_datastore().database_engine_name
stats["database_server_version"] = hs.get_datastore().get_server_version()
stats["database_engine"] = hs.database_engine.module.__name__
stats["database_server_version"] = hs.database_engine.server_version
logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
try:
yield hs.get_proxied_http_client().put_json(

View File

@ -33,6 +33,7 @@ from synapse.replication.slave.storage.account_data import SlavedAccountDataStor
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
@ -45,7 +46,11 @@ logger = logging.getLogger("synapse.app.pusher")
class PusherSlaveStore(
SlavedEventStore, SlavedPusherStore, SlavedReceiptsStore, SlavedAccountDataStore
SlavedEventStore,
SlavedPusherStore,
SlavedReceiptsStore,
SlavedAccountDataStore,
RoomStore,
):
update_pusher_last_stream_ordering_and_success = __func__(
DataStore.update_pusher_last_stream_ordering_and_success

View File

@ -151,7 +151,7 @@ class SynchrotronPresence(object):
def set_state(self, user, state, ignore_status_msg=False):
# TODO Hows this supposed to work?
pass
return defer.succeed(None)
get_states = __func__(PresenceHandler.get_states)
get_state = __func__(PresenceHandler.get_state)

View File

@ -43,6 +43,7 @@ from synapse.replication.tcp.streams.events import (
from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.storage.database import Database
from synapse.storage.engines import create_engine
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
@ -60,11 +61,11 @@ class UserDirectorySlaveStore(
UserDirectoryStore,
BaseSlavedStore,
):
def __init__(self, db_conn, hs):
super(UserDirectorySlaveStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(UserDirectorySlaveStore, self).__init__(database, db_conn, hs)
events_max = self._stream_id_gen.get_current_token()
curr_state_delta_prefill, min_curr_state_delta_id = self._get_cache_dict(
curr_state_delta_prefill, min_curr_state_delta_id = self.db.get_cache_dict(
db_conn,
"current_state_delta_stream",
entity_column="room_id",

View File

@ -185,7 +185,7 @@ class ApplicationServiceApi(SimpleHttpClient):
if not _is_valid_3pe_metadata(info):
logger.warning(
"query_3pe_protocol to %s did not return a" " valid result", uri
"query_3pe_protocol to %s did not return a valid result", uri
)
return None

View File

@ -134,7 +134,7 @@ def _load_appservice(hostname, as_info, config_filename):
for regex_obj in as_info["namespaces"][ns]:
if not isinstance(regex_obj, dict):
raise ValueError(
"Expected namespace entry in %s to be an object," " but got %s",
"Expected namespace entry in %s to be an object, but got %s",
ns,
regex_obj,
)

View File

@ -146,6 +146,8 @@ class EmailConfig(Config):
if k not in email_config:
missing.append("email." + k)
# public_baseurl is required to build password reset and validation links that
# will be emailed to users
if config.get("public_baseurl") is None:
missing.append("public_baseurl")
@ -305,8 +307,23 @@ class EmailConfig(Config):
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
#
# # notif_from defines the "From" address to use when sending emails.
# # It must be set if email sending is enabled.
# #
# # The placeholder '%(app)s' will be replaced by the application name,
# # which is normally 'app_name' (below), but may be overridden by the
# # Matrix client application.
# #
# # Note that the placeholder must be written '%(app)s', including the
# # trailing 's'.
# #
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name: Matrix
#
# # app_name defines the default value for '%(app)s' in notif_from. It
# # defaults to 'Matrix'.
# #
# #app_name: my_branded_matrix_server
#
# # Enable email notifications by default
# #

View File

@ -106,6 +106,13 @@ class RegistrationConfig(Config):
account_threepid_delegates = config.get("account_threepid_delegates") or {}
self.account_threepid_delegate_email = account_threepid_delegates.get("email")
self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
if self.account_threepid_delegate_msisdn and not self.public_baseurl:
raise ConfigError(
"The configuration option `public_baseurl` is required if "
"`account_threepid_delegate.msisdn` is set, such that "
"clients know where to submit validation tokens to. Please "
"configure `public_baseurl`."
)
self.default_identity_server = config.get("default_identity_server")
self.allow_guest_access = config.get("allow_guest_access", False)

View File

@ -170,7 +170,7 @@ class _RoomDirectoryRule(object):
self.action = action
else:
raise ConfigError(
"%s rules can only have action of 'allow'" " or 'deny'" % (option_name,)
"%s rules can only have action of 'allow' or 'deny'" % (option_name,)
)
self._alias_matches_all = alias == "*"

View File

@ -19,7 +19,7 @@ import logging
import os.path
import re
from textwrap import indent
from typing import List
from typing import Dict, List, Optional
import attr
import yaml
@ -118,15 +118,16 @@ class ServerConfig(Config):
self.allow_public_rooms_without_auth = False
self.allow_public_rooms_over_federation = False
else:
# If set to 'False', requires authentication to access the server's public
# rooms directory through the client API. Defaults to 'True'.
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
self.allow_public_rooms_without_auth = config.get(
"allow_public_rooms_without_auth", True
"allow_public_rooms_without_auth", False
)
# If set to 'False', forbids any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'True'.
# If set to 'true', allows any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'false'.
self.allow_public_rooms_over_federation = config.get(
"allow_public_rooms_over_federation", True
"allow_public_rooms_over_federation", False
)
default_room_version = config.get("default_room_version", DEFAULT_ROOM_VERSION)
@ -223,7 +224,7 @@ class ServerConfig(Config):
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in " "federation_ip_range_blacklist: %s" % e
"Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
)
if self.public_baseurl is not None:
@ -246,6 +247,124 @@ class ServerConfig(Config):
# events with profile information that differ from the target's global profile.
self.allow_per_room_profiles = config.get("allow_per_room_profiles", True)
retention_config = config.get("retention")
if retention_config is None:
retention_config = {}
self.retention_enabled = retention_config.get("enabled", False)
retention_default_policy = retention_config.get("default_policy")
if retention_default_policy is not None:
self.retention_default_min_lifetime = retention_default_policy.get(
"min_lifetime"
)
if self.retention_default_min_lifetime is not None:
self.retention_default_min_lifetime = self.parse_duration(
self.retention_default_min_lifetime
)
self.retention_default_max_lifetime = retention_default_policy.get(
"max_lifetime"
)
if self.retention_default_max_lifetime is not None:
self.retention_default_max_lifetime = self.parse_duration(
self.retention_default_max_lifetime
)
if (
self.retention_default_min_lifetime is not None
and self.retention_default_max_lifetime is not None
and (
self.retention_default_min_lifetime
> self.retention_default_max_lifetime
)
):
raise ConfigError(
"The default retention policy's 'min_lifetime' can not be greater"
" than its 'max_lifetime'"
)
else:
self.retention_default_min_lifetime = None
self.retention_default_max_lifetime = None
self.retention_allowed_lifetime_min = retention_config.get(
"allowed_lifetime_min"
)
if self.retention_allowed_lifetime_min is not None:
self.retention_allowed_lifetime_min = self.parse_duration(
self.retention_allowed_lifetime_min
)
self.retention_allowed_lifetime_max = retention_config.get(
"allowed_lifetime_max"
)
if self.retention_allowed_lifetime_max is not None:
self.retention_allowed_lifetime_max = self.parse_duration(
self.retention_allowed_lifetime_max
)
if (
self.retention_allowed_lifetime_min is not None
and self.retention_allowed_lifetime_max is not None
and self.retention_allowed_lifetime_min
> self.retention_allowed_lifetime_max
):
raise ConfigError(
"Invalid retention policy limits: 'allowed_lifetime_min' can not be"
" greater than 'allowed_lifetime_max'"
)
self.retention_purge_jobs = [] # type: List[Dict[str, Optional[int]]]
for purge_job_config in retention_config.get("purge_jobs", []):
interval_config = purge_job_config.get("interval")
if interval_config is None:
raise ConfigError(
"A retention policy's purge jobs configuration must have the"
" 'interval' key set."
)
interval = self.parse_duration(interval_config)
shortest_max_lifetime = purge_job_config.get("shortest_max_lifetime")
if shortest_max_lifetime is not None:
shortest_max_lifetime = self.parse_duration(shortest_max_lifetime)
longest_max_lifetime = purge_job_config.get("longest_max_lifetime")
if longest_max_lifetime is not None:
longest_max_lifetime = self.parse_duration(longest_max_lifetime)
if (
shortest_max_lifetime is not None
and longest_max_lifetime is not None
and shortest_max_lifetime > longest_max_lifetime
):
raise ConfigError(
"A retention policy's purge jobs configuration's"
" 'shortest_max_lifetime' value can not be greater than its"
" 'longest_max_lifetime' value."
)
self.retention_purge_jobs.append(
{
"interval": interval,
"shortest_max_lifetime": shortest_max_lifetime,
"longest_max_lifetime": longest_max_lifetime,
}
)
if not self.retention_purge_jobs:
self.retention_purge_jobs = [
{
"interval": self.parse_duration("1d"),
"shortest_max_lifetime": None,
"longest_max_lifetime": None,
}
]
self.listeners = [] # type: List[dict]
for listener in config.get("listeners", []):
if not isinstance(listener.get("port", None), int):
@ -372,6 +491,8 @@ class ServerConfig(Config):
"cleanup_extremities_with_dummy_events", True
)
self.enable_ephemeral_messages = config.get("enable_ephemeral_messages", False)
def has_tls_listener(self) -> bool:
return any(l["tls"] for l in self.listeners)
@ -500,15 +621,16 @@ class ServerConfig(Config):
#
#require_auth_for_profile_requests: true
# If set to 'false', requires authentication to access the server's public rooms
# directory through the client API. Defaults to 'true'.
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
#
#allow_public_rooms_without_auth: false
#allow_public_rooms_without_auth: true
# If set to 'false', forbids any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'true'.
# If set to 'true', allows any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'false'.
#
#allow_public_rooms_over_federation: false
#allow_public_rooms_over_federation: true
# The default room version for newly created rooms.
#
@ -761,6 +883,69 @@ class ServerConfig(Config):
# Defaults to `28d`. Set to `null` to disable clearing out of old rows.
#
#user_ips_max_age: 14d
# Message retention policy at the server level.
#
# Room admins and mods can define a retention period for their rooms using the
# 'm.room.retention' state event, and server admins can cap this period by setting
# the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
#
# If this feature is enabled, Synapse will regularly look for and purge events
# which are older than the room's maximum retention period. Synapse will also
# filter events received over federation so that events that should have been
# purged are ignored and not stored again.
#
retention:
# The message retention policies feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# Default retention policy. If set, Synapse will apply it to rooms that lack the
# 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
# matter much because Synapse doesn't take it into account yet.
#
#default_policy:
# min_lifetime: 1d
# max_lifetime: 1y
# Retention policy limits. If set, a user won't be able to send a
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
# that's not within this range. This is especially useful in closed federations,
# in which server admins can make sure every federating server applies the same
# rules.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
# Under the 'purge_jobs' section, server admins can define the settings of the
# background jobs that purge events whose lifetime has expired.
#
# If no configuration is provided, a single job will be set up to delete expired
# events in every room daily.
#
# Each job's configuration defines which range of message lifetimes the job
# takes care of. For example, if 'shortest_max_lifetime' is '2d' and
# 'longest_max_lifetime' is '3d', the job will handle purging expired events in
# rooms whose state defines a 'max_lifetime' that's both higher than 2 days and
# lower than or equal to 3 days. Both the minimum and the maximum value of a
# range are optional, e.g. a job with no 'shortest_max_lifetime' and a
# 'longest_max_lifetime' of '3d' will handle every room with a retention policy
# whose 'max_lifetime' is lower than or equal to three days.
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
# of outdated messages on a very frequent basis (e.g. every 5min), but admins
# might not want that purge to be performed by a job that iterates over every
# room it knows of, which would be quite heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 5m
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 24h
"""
% locals()
)
@ -787,14 +972,14 @@ class ServerConfig(Config):
"--print-pidfile",
action="store_true",
default=None,
help="Print the path to the pidfile just" " before daemonizing",
help="Print the path to the pidfile just before daemonizing",
)
server_group.add_argument(
"--manhole",
metavar="PORT",
dest="manhole",
type=int,
help="Turn on the twisted telnet manhole" " service on the given port.",
help="Turn on the twisted telnet manhole service on the given port.",
)
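
The retention limits parsed above go through `parse_duration`. As a rough standalone equivalent, shown only to make the '1d'/'5m' shorthand concrete (the unit table is an assumption for illustration; values come out in milliseconds):

```python
# Hypothetical stand-in for the config class's parse_duration; not the
# real implementation. Returns milliseconds.
UNITS_MS = {
    "s": 1000,
    "m": 60 * 1000,
    "h": 60 * 60 * 1000,
    "d": 24 * 60 * 60 * 1000,
    "y": 365 * 24 * 60 * 60 * 1000,
}

def parse_duration(value):
    if isinstance(value, int):
        return value  # already milliseconds
    return int(value[:-1]) * UNITS_MS[value[-1]]

assert parse_duration("5m") == 300000
assert parse_duration("1d") == 86400000
```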

View File

@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from six import string_types
from six import integer_types, string_types
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
@ -22,11 +22,12 @@ from synapse.types import EventID, RoomID, UserID
class EventValidator(object):
def validate_new(self, event):
def validate_new(self, event, config):
"""Validates the event has roughly the right format
Args:
event (FrozenEvent)
event (FrozenEvent): The event to validate.
config (Config): The homeserver's configuration.
"""
self.validate_builder(event)
@ -67,6 +68,99 @@ class EventValidator(object):
Codes.INVALID_PARAM,
)
if event.type == EventTypes.Retention:
self._validate_retention(event, config)
def _validate_retention(self, event, config):
"""Checks that an event that defines the retention policy for a room respects the
boundaries imposed by the server's administrator.
Args:
event (FrozenEvent): The event to validate.
config (Config): The homeserver's configuration.
"""
min_lifetime = event.content.get("min_lifetime")
max_lifetime = event.content.get("max_lifetime")
if min_lifetime is not None:
if not isinstance(min_lifetime, integer_types):
raise SynapseError(
code=400,
msg="'min_lifetime' must be an integer",
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_min is not None
and min_lifetime < config.retention_allowed_lifetime_min
):
raise SynapseError(
code=400,
msg=(
"'min_lifetime' can't be lower than the minimum allowed"
" value enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_max is not None
and min_lifetime > config.retention_allowed_lifetime_max
):
raise SynapseError(
code=400,
msg=(
"'min_lifetime' can't be greater than the maximum allowed"
" value enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if max_lifetime is not None:
if not isinstance(max_lifetime, integer_types):
raise SynapseError(
code=400,
msg="'max_lifetime' must be an integer",
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_min is not None
and max_lifetime < config.retention_allowed_lifetime_min
):
raise SynapseError(
code=400,
msg=(
"'max_lifetime' can't be lower than the minimum allowed value"
" enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_max is not None
and max_lifetime > config.retention_allowed_lifetime_max
):
raise SynapseError(
code=400,
msg=(
"'max_lifetime' can't be greater than the maximum allowed"
" value enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if (
min_lifetime is not None
and max_lifetime is not None
and min_lifetime > max_lifetime
):
raise SynapseError(
code=400,
msg="'min_lifetime' can't be greater than 'max_lifetime",
errcode=Codes.BAD_JSON,
)
def validate_builder(self, event):
"""Validates that the builder/event has roughly the right format. Only
checks values that we expect a proto event to have, rather than all the

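To make the bounds checks above concrete, here is a hedged re-statement of the `_validate_retention` logic as a standalone function (limits and lifetimes in milliseconds; the 1d/1y default limits are assumptions for the example):

```python
def check_retention(content, allowed_min=86400000, allowed_max=31536000000):
    """Standalone sketch of _validate_retention's checks, not Synapse code.

    Defaults assume a server with allowed_lifetime_min=1d and
    allowed_lifetime_max=1y; all values are in milliseconds.
    """
    min_l = content.get("min_lifetime")
    max_l = content.get("max_lifetime")
    for name, value in (("min_lifetime", min_l), ("max_lifetime", max_l)):
        if value is None:
            continue
        if not isinstance(value, int):
            raise ValueError("'%s' must be an integer" % name)
        if allowed_min is not None and value < allowed_min:
            raise ValueError("'%s' is below the allowed minimum" % name)
        if allowed_max is not None and value > allowed_max:
            raise ValueError("'%s' is above the allowed maximum" % name)
    if min_l is not None and max_l is not None and min_l > max_l:
        raise ValueError("'min_lifetime' can't be greater than 'max_lifetime'")

check_retention({"min_lifetime": 86400000, "max_lifetime": 604800000})  # ok
```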
View File

@ -324,10 +324,6 @@ class FederationClient(FederationBase):
A list of events in the state, and a list of events in the auth chain
for the given event.
"""
try:
# First we try and ask for just the IDs, as thats far quicker if
# we have most of the state and auth_chain already.
# However, this may 404 if the other side has an old synapse.
result = yield self.transport_layer.get_room_state_ids(
destination, room_id, event_id=event_id
)
@ -349,62 +345,11 @@ class FederationClient(FederationBase):
event_map = {ev.event_id: ev for ev in fetched_events}
pdus = [event_map[e_id] for e_id in state_event_ids if e_id in event_map]
auth_chain = [
event_map[e_id] for e_id in auth_event_ids if e_id in event_map
]
auth_chain = [event_map[e_id] for e_id in auth_event_ids if e_id in event_map]
auth_chain.sort(key=lambda e: e.depth)
return pdus, auth_chain
except HttpResponseException as e:
if e.code == 400 or e.code == 404:
logger.info("Failed to use get_room_state_ids API, falling back")
else:
raise e
result = yield self.transport_layer.get_room_state(
destination, room_id, event_id=event_id
)
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
pdus = [
event_from_pdu_json(p, format_ver, outlier=True) for p in result["pdus"]
]
auth_chain = [
event_from_pdu_json(p, format_ver, outlier=True)
for p in result.get("auth_chain", [])
]
seen_events = yield self.store.get_events(
[ev.event_id for ev in itertools.chain(pdus, auth_chain)]
)
signed_pdus = yield self._check_sigs_and_hash_and_fetch(
destination,
[p for p in pdus if p.event_id not in seen_events],
outlier=True,
room_version=room_version,
)
signed_pdus.extend(
seen_events[p.event_id] for p in pdus if p.event_id in seen_events
)
signed_auth = yield self._check_sigs_and_hash_and_fetch(
destination,
[p for p in auth_chain if p.event_id not in seen_events],
outlier=True,
room_version=room_version,
)
signed_auth.extend(
seen_events[p.event_id] for p in auth_chain if p.event_id in seen_events
)
signed_auth.sort(key=lambda e: e.depth)
return signed_pdus, signed_auth
@defer.inlineCallbacks
def get_events_from_store_or_dest(self, destination, room_id, event_ids):

View File

@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019 Matrix.org Federation C.I.C
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -73,6 +74,7 @@ class FederationServer(FederationBase):
self.auth = hs.get_auth()
self.handler = hs.get_handlers().federation_handler
self.state = hs.get_state_handler()
self._server_linearizer = Linearizer("fed_server")
self._transaction_linearizer = Linearizer("fed_txn_handler")
@ -264,9 +266,6 @@ class FederationServer(FederationBase):
await self.registry.on_edu(edu_type, origin, content)
async def on_context_state_request(self, origin, room_id, event_id):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
@ -280,12 +279,17 @@ class FederationServer(FederationBase):
# - but that's non-trivial to get right, and anyway somewhat defeats
# the point of the linearizer.
with (await self._server_linearizer.queue((origin, room_id))):
resp = await self._state_resp_cache.wrap(
resp = dict(
await self._state_resp_cache.wrap(
(room_id, event_id),
self._on_context_state_request_compute,
room_id,
event_id,
)
)
room_version = await self.store.get_room_version(room_id)
resp["room_version"] = room_version
return 200, resp
@ -306,7 +310,11 @@ class FederationServer(FederationBase):
return 200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
async def _on_context_state_request_compute(self, room_id, event_id):
if event_id:
pdus = await self.handler.get_state_for_pdu(room_id, event_id)
else:
pdus = (await self.state.get_current_state(room_id)).values()
auth_chain = await self.store.get_auth_chain([pdu.event_id for pdu in pdus])
return {

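The `dict(...)` wrapper added above is the important detail: the response cache hands the same object to every caller, so per-request fields must be set on a shallow copy. A minimal illustration of the hazard:

```python
# Why the dict(...) copy matters: mutating the cached response in place
# would leak one request's room_version into every later cache hit.
cached = {"pdus": [], "auth_chain": []}  # value shared via the cache
resp = dict(cached)                      # shallow, per-request copy
resp["room_version"] = "4"
assert "room_version" not in cached      # cached value stays pristine
```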
View File

@ -44,7 +44,7 @@ class TransactionActions(object):
response code and response body.
"""
if not transaction.transaction_id:
raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
raise RuntimeError("Cannot persist a transaction with no transaction_id")
return self.store.get_received_txn_response(transaction.transaction_id, origin)
@ -56,7 +56,7 @@ class TransactionActions(object):
Deferred
"""
if not transaction.transaction_id:
raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
raise RuntimeError("Cannot persist a transaction with no transaction_id")
return self.store.set_received_txn_response(
transaction.transaction_id, origin, code, response

View File

@ -49,7 +49,7 @@ sent_pdus_destination_dist_count = Counter(
sent_pdus_destination_dist_total = Counter(
"synapse_federation_client_sent_pdu_destinations:total",
"" "Total number of PDUs queued for sending across all destinations",
"Total number of PDUs queued for sending across all destinations",
)

View File

@ -84,7 +84,7 @@ class TransactionManager(object):
txn_id = str(self._next_txn_id)
logger.debug(
"TX [%s] {%s} Attempting new transaction" " (pdus: %d, edus: %d)",
"TX [%s] {%s} Attempting new transaction (pdus: %d, edus: %d)",
destination,
txn_id,
len(pdus),
@ -103,7 +103,7 @@ class TransactionManager(object):
self._next_txn_id += 1
logger.info(
"TX [%s] {%s} Sending transaction [%s]," " (PDUs: %d, EDUs: %d)",
"TX [%s] {%s} Sending transaction [%s], (PDUs: %d, EDUs: %d)",
destination,
txn_id,
transaction.transaction_id,

View File

@ -38,30 +38,6 @@ class TransportLayerClient(object):
self.server_name = hs.hostname
self.client = hs.get_http_client()
@log_function
def get_room_state(self, destination, room_id, event_id):
""" Requests all state for a given room from the given server at the
given event.
Args:
destination (str): The host name of the remote homeserver we want
to get the state from.
context (str): The name of the context we want the state of
event_id (str): The event we want the context at.
Returns:
Deferred: Results in a dict received from the remote homeserver.
"""
logger.debug("get_room_state dest=%s, room=%s", destination, room_id)
path = _create_v1_path("/state/%s", room_id)
return self.client.get_json(
destination,
path=path,
args={"event_id": event_id},
try_trailing_slash_on_400=True,
)
@log_function
def get_room_state_ids(self, destination, room_id, event_id):
""" Requests all state for a given room from the given server at the

View File

@ -421,7 +421,7 @@ class FederationEventServlet(BaseFederationServlet):
return await self.handler.on_pdu_request(origin, event_id)
class FederationStateServlet(BaseFederationServlet):
class FederationStateV1Servlet(BaseFederationServlet):
PATH = "/state/(?P<context>[^/]*)/?"
# This is when someone asks for all data for a given context.
@ -429,7 +429,7 @@ class FederationStateServlet(BaseFederationServlet):
return await self.handler.on_context_state_request(
origin,
context,
parse_string_from_args(query, "event_id", None, required=True),
parse_string_from_args(query, "event_id", None, required=False),
)
@ -1360,7 +1360,7 @@ class RoomComplexityServlet(BaseFederationServlet):
FEDERATION_SERVLET_CLASSES = (
FederationSendServlet,
FederationEventServlet,
FederationStateServlet,
FederationStateV1Servlet,
FederationStateIdsServlet,
FederationBackfillServlet,
FederationQueryServlet,

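With `required=False`, the servlet now accepts both request shapes. Illustrative only (real federation requests also carry signed auth headers, omitted here):

```python
# Both forms hit FederationStateV1Servlet; without event_id the server
# now answers with the room's current state instead of erroring.
def state_request(room_id, event_id=None):
    path = "/_matrix/federation/v1/state/%s" % room_id
    args = {"event_id": event_id} if event_id is not None else {}
    return path, args

print(state_request("!room:example.com", "$anevent"))  # state at an event
print(state_request("!room:example.com"))              # current room state
```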
View File

@ -56,7 +56,7 @@ class AdminHandler(BaseHandler):
@defer.inlineCallbacks
def get_users(self):
"""Function to reterive a list of users in users table.
"""Function to retrieve a list of users in users table.
Args:
Returns:
@ -67,19 +67,22 @@ class AdminHandler(BaseHandler):
return ret
@defer.inlineCallbacks
def get_users_paginate(self, order, start, limit):
"""Function to reterive a paginated list of users from
users list. This will return a json object, which contains
list of users and the total number of users in users table.
def get_users_paginate(self, start, limit, name, guests, deactivated):
"""Function to retrieve a paginated list of users from
users list. This will return a json list of users.
Args:
order (str): column name to order the select by this column
start (int): start number to begin the query from
limit (int): number of rows to reterive
limit (int): number of rows to retrieve
name (string): filter for user names
guests (bool): whether to in include guest users
deactivated (bool): whether to include deactivated users
Returns:
defer.Deferred: resolves to json object {list[dict[str, Any]], count}
defer.Deferred: resolves to json list[dict[str, Any]]
"""
ret = yield self.store.get_users_paginate(order, start, limit)
ret = yield self.store.get_users_paginate(
start, limit, name, guests, deactivated
)
return ret
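
The new signature lines up with the query parameters of the paginated admin endpoint this handler backs. A hedged sketch of calling it (URL, token and parameter values are placeholders; an admin access token is required):

```python
import requests

# Hedged sketch of querying the paginated user list served by
# get_users_paginate; homeserver URL and token are placeholders.
resp = requests.get(
    "https://homeserver.example.com/_synapse/admin/v2/users",
    headers={"Authorization": "Bearer <admin_access_token>"},
    params={"from": 0, "limit": 10, "guests": "false", "deactivated": "false"},
)
resp.raise_for_status()
print(resp.json())  # a JSON list of user dicts, per the docstring above
```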

View File

@ -119,7 +119,7 @@ class DirectoryHandler(BaseHandler):
if not service.is_interested_in_alias(room_alias.to_string()):
raise SynapseError(
400,
"This application service has not reserved" " this kind of alias.",
"This application service has not reserved this kind of alias.",
errcode=Codes.EXCLUSIVE,
)
else:

View File

@ -30,6 +30,7 @@ from twisted.internet import defer
from synapse.api.errors import CodeMessageException, Codes, NotFoundError, SynapseError
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.types import (
UserID,
get_domain_from_id,
@ -53,6 +54,12 @@ class E2eKeysHandler(object):
self._edu_updater = SigningKeyEduUpdater(hs, self)
self._is_master = hs.config.worker_app is None
if not self._is_master:
self._user_device_resync_client = ReplicationUserDevicesResyncRestServlet.make_client(
hs
)
federation_registry = hs.get_federation_registry()
# FIXME: switch to m.signing_key_update when MSC1756 is merged into the spec
@ -191,9 +198,15 @@ class E2eKeysHandler(object):
# probably be tracking their device lists. However, we haven't
# done an initial sync on the device list so we do it now.
try:
if self._is_master:
user_devices = yield self.device_handler.device_list_updater.user_device_resync(
user_id
)
else:
user_devices = yield self._user_device_resync_client(
user_id=user_id
)
user_devices = user_devices["devices"]
for device in user_devices:
results[user_id] = {device["device_id"]: device["keys"]}
@ -251,7 +264,6 @@ class E2eKeysHandler(object):
return ret
@defer.inlineCallbacks
def get_cross_signing_keys_from_cache(self, query, from_user_id):
"""Get cross-signing keys for users from the database
@ -271,35 +283,14 @@ class E2eKeysHandler(object):
self_signing_keys = {}
user_signing_keys = {}
for user_id in query:
# XXX: consider changing the store functions to allow querying
# multiple users simultaneously.
key = yield self.store.get_e2e_cross_signing_key(
user_id, "master", from_user_id
)
if key:
master_keys[user_id] = key
key = yield self.store.get_e2e_cross_signing_key(
user_id, "self_signing", from_user_id
)
if key:
self_signing_keys[user_id] = key
# users can see other users' master and self-signing keys, but can
# only see their own user-signing keys
if from_user_id == user_id:
key = yield self.store.get_e2e_cross_signing_key(
user_id, "user_signing", from_user_id
)
if key:
user_signing_keys[user_id] = key
return {
# Currently a stub, implementation coming in https://github.com/matrix-org/synapse/pull/6486
return defer.succeed(
{
"master_keys": master_keys,
"self_signing_keys": self_signing_keys,
"user_signing_keys": user_signing_keys,
}
)
@trace
@defer.inlineCallbacks

View File

@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017, 2018 New Vector Ltd
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -103,14 +104,35 @@ class E2eRoomKeysHandler(object):
rooms
session_id(string): session ID to delete keys for, for None to delete keys
for all sessions
Raises:
NotFoundError: if the backup version does not exist
Returns:
A deferred of the deletion transaction
A dict containing the count and etag for the backup version
"""
# lock for consistency with uploading
with (yield self._upload_linearizer.queue(user_id)):
# make sure the backup version exists
try:
version_info = yield self.store.get_e2e_room_keys_version_info(
user_id, version
)
except StoreError as e:
if e.code == 404:
raise NotFoundError("Unknown backup version")
else:
raise
yield self.store.delete_e2e_room_keys(user_id, version, room_id, session_id)
version_etag = version_info["etag"] + 1
yield self.store.update_e2e_room_keys_version(
user_id, version, None, version_etag
)
count = yield self.store.count_e2e_room_keys(user_id, version)
return {"etag": str(version_etag), "count": count}
@trace
@defer.inlineCallbacks
def upload_room_keys(self, user_id, version, room_keys):
@ -138,6 +160,9 @@ class E2eRoomKeysHandler(object):
}
}
Returns:
A dict containing the count and etag for the backup version
Raises:
NotFoundError: if there are no versions defined
RoomKeysVersionError: if the uploaded version is not the current version
@ -171,26 +196,17 @@ class E2eRoomKeysHandler(object):
else:
raise
# go through the room_keys.
# XXX: this should/could be done concurrently, given we're in a lock.
for room_id, room in iteritems(room_keys["rooms"]):
for session_id, session in iteritems(room["sessions"]):
yield self._upload_room_key(
user_id, version, room_id, session_id, session
# Fetch any existing room keys for the sessions that have been
# submitted. Then compare them with the submitted keys. If the
# key is new, insert it; if the key should be updated, then update
# it; otherwise, drop it.
existing_keys = yield self.store.get_e2e_room_keys_multi(
user_id, version, room_keys["rooms"]
)
@defer.inlineCallbacks
def _upload_room_key(self, user_id, version, room_id, session_id, room_key):
"""Upload a given room_key for a given room and session into a given
version of the backup. Merges the key with any which might already exist.
Args:
user_id(str): the user whose backup we're setting
version(str): the version ID of the backup we're updating
room_id(str): the ID of the room whose keys we're setting
session_id(str): the session whose room_key we're setting
room_key(dict): the room_key being set
"""
to_insert = [] # batch the inserts together
changed = False # if anything has changed, we need to update the etag
for room_id, room in iteritems(room_keys["rooms"]):
for session_id, room_key in iteritems(room["sessions"]):
log_kv(
{
"message": "Trying to upload room key",
@ -199,14 +215,20 @@ class E2eRoomKeysHandler(object):
"user_id": user_id,
}
)
# get the room_key for this particular row
current_room_key = None
try:
current_room_key = yield self.store.get_e2e_room_key(
user_id, version, room_id, session_id
current_room_key = existing_keys.get(room_id, {}).get(session_id)
if current_room_key:
if self._should_replace_room_key(current_room_key, room_key):
log_kv({"message": "Replacing room key."})
# updates are done one at a time in the DB, so send
# updates right away rather than batching them up,
# like we do with the inserts
yield self.store.update_e2e_room_key(
user_id, version, room_id, session_id, room_key
)
except StoreError as e:
if e.code == 404:
changed = True
else:
log_kv({"message": "Not replacing room_key."})
else:
log_kv(
{
"message": "Room key not found.",
@ -214,16 +236,22 @@ class E2eRoomKeysHandler(object):
"user_id": user_id,
}
)
else:
raise
if self._should_replace_room_key(current_room_key, room_key):
log_kv({"message": "Replacing room key."})
yield self.store.set_e2e_room_key(
user_id, version, room_id, session_id, room_key
to_insert.append((room_id, session_id, room_key))
changed = True
if len(to_insert):
yield self.store.add_e2e_room_keys(user_id, version, to_insert)
version_etag = version_info["etag"]
if changed:
version_etag = version_etag + 1
yield self.store.update_e2e_room_keys_version(
user_id, version, None, version_etag
)
else:
log_kv({"message": "Not replacing room_key."})
count = yield self.store.count_e2e_room_keys(user_id, version)
return {"etag": str(version_etag), "count": count}
@staticmethod
def _should_replace_room_key(current_room_key, room_key):
@ -314,6 +342,8 @@ class E2eRoomKeysHandler(object):
raise NotFoundError("Unknown backup version")
else:
raise
res["count"] = yield self.store.count_e2e_room_keys(user_id, res["version"])
return res
@trace

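The count and etag returned here feed the key-backup improvement in this release: a client can poll the backup version info and skip downloading when nothing changed. A hedged client-side sketch (the unstable prefix is assumed for the endpoint of this era; URL and token are placeholders):

```python
import requests

# Hedged client-side sketch: compare etag/count from the backup version
# info against the last seen values to guess whether there are new keys.
def backup_has_new_keys(hs_url, token, last_etag, last_count):
    resp = requests.get(
        hs_url + "/_matrix/client/unstable/room_keys/version",
        headers={"Authorization": "Bearer " + token},
    )
    resp.raise_for_status()
    info = resp.json()
    return (info.get("etag"), info.get("count")) != (last_etag, last_count)
```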
View File

@ -16,8 +16,6 @@
import logging
import random
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase
@ -50,9 +48,8 @@ class EventStreamHandler(BaseHandler):
self._server_notices_sender = hs.get_server_notices_sender()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
@log_function
def get_stream(
async def get_stream(
self,
auth_user_id,
pagin_config,
@ -69,17 +66,17 @@ class EventStreamHandler(BaseHandler):
"""
if room_id:
blocked = yield self.store.is_room_blocked(room_id)
blocked = await self.store.is_room_blocked(room_id)
if blocked:
raise SynapseError(403, "This room has been blocked on this server")
# send any outstanding server notices to the user.
yield self._server_notices_sender.on_user_syncing(auth_user_id)
await self._server_notices_sender.on_user_syncing(auth_user_id)
auth_user = UserID.from_string(auth_user_id)
presence_handler = self.hs.get_presence_handler()
context = yield presence_handler.user_syncing(
context = await presence_handler.user_syncing(
auth_user_id, affect_presence=affect_presence
)
with context:
@ -91,7 +88,7 @@ class EventStreamHandler(BaseHandler):
# thundering herds on restart.
timeout = random.randint(int(timeout * 0.9), int(timeout * 1.1))
events, tokens = yield self.notifier.get_events_for(
events, tokens = await self.notifier.get_events_for(
auth_user,
pagin_config,
timeout,
@ -112,14 +109,14 @@ class EventStreamHandler(BaseHandler):
# Send down presence.
if event.state_key == auth_user_id:
# Send down presence for everyone in the room.
users = yield self.state.get_current_users_in_room(
users = await self.state.get_current_users_in_room(
event.room_id
)
states = yield presence_handler.get_states(users, as_event=True)
states = await presence_handler.get_states(users, as_event=True)
to_add.extend(states)
else:
ev = yield presence_handler.get_state(
ev = await presence_handler.get_state(
UserID.from_string(event.state_key), as_event=True
)
to_add.append(ev)
@ -128,7 +125,7 @@ class EventStreamHandler(BaseHandler):
time_now = self.clock.time_msec()
chunks = yield self._event_serializer.serialize_events(
chunks = await self._event_serializer.serialize_events(
events,
time_now,
as_client_event=as_client_event,
@ -151,8 +148,7 @@ class EventHandler(BaseHandler):
super(EventHandler, self).__init__(hs)
self.storage = hs.get_storage()
@defer.inlineCallbacks
def get_event(self, user, room_id, event_id):
async def get_event(self, user, room_id, event_id):
"""Retrieve a single specified event.
Args:
@ -167,15 +163,15 @@ class EventHandler(BaseHandler):
AuthError if the user does not have the rights to inspect this
event.
"""
event = yield self.store.get_event(event_id, check_room_id=room_id)
event = await self.store.get_event(event_id, check_room_id=room_id)
if not event:
return None
users = yield self.store.get_users_in_room(event.room_id)
users = await self.store.get_users_in_room(event.room_id)
is_peeking = user.to_string() not in users
filtered = yield filter_events_for_client(
filtered = await filter_events_for_client(
self.storage, user.to_string(), [event], is_peeking=is_peeking
)


@ -19,11 +19,13 @@
import itertools
import logging
from typing import Dict, Iterable, Optional, Sequence, Tuple
import six
from six import iteritems, itervalues
from six.moves import http_client, zip
import attr
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64
@ -45,6 +47,7 @@ from synapse.api.errors import (
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.crypto.event_signing import compute_event_signature
from synapse.event_auth import auth_types_for_event
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.events.validator import EventValidator
from synapse.logging.context import (
@ -72,6 +75,23 @@ from ._base import BaseHandler
logger = logging.getLogger(__name__)
@attr.s
class _NewEventInfo:
"""Holds information about a received event, ready for passing to _handle_new_events
Attributes:
event: the received event
state: the state at that event
auth_events: the auth_event map for that event
"""
event = attr.ib(type=EventBase)
state = attr.ib(type=Optional[Sequence[EventBase]], default=None)
auth_events = attr.ib(type=Optional[Dict[Tuple[str, str], EventBase]], default=None)
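
The new `_NewEventInfo` attrs class replaces the ad-hoc `{"event": ..., "auth_events": ...}` dicts used later in this file. A runnable sketch of the calling-convention change (type annotations dropped, values hypothetical):

```python
import attr

@attr.s
class _NewEventInfo:  # simplified stand-in for the class above
    event = attr.ib()
    state = attr.ib(default=None)
    auth_events = attr.ib(default=None)

# Old convention: ev_info = {"event": e, "auth_events": auth}; ev_info["event"]
# New convention: typed attributes with defaults standing in for dict.get():
info = _NewEventInfo(event="$ev:example.org", auth_events={})
print(info.event)  # $ev:example.org
print(info.state)  # None
```
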
def shortstr(iterable, maxitems=5):
"""If iterable has maxitems or fewer, return the stringification of a list
containing those items.
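
The diff elides the rest of `shortstr`; a plausible sketch, under the assumption that longer iterables are truncated with an ellipsis:

```python
def shortstr(iterable, maxitems=5):
    # Sketch only: the truncating branch is an assumption, since the diff
    # elides the body.
    items = list(iterable)
    if len(items) <= maxitems:
        return str(items)
    return str(items[:maxitems])[:-1] + ", ...]"

print(shortstr(range(3)))   # [0, 1, 2]
print(shortstr(range(10)))  # [0, 1, 2, 3, 4, ...]
```
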
@ -121,6 +141,7 @@ class FederationHandler(BaseHandler):
self.pusher_pool = hs.get_pusherpool()
self.spam_checker = hs.get_spam_checker()
self.event_creation_handler = hs.get_event_creation_handler()
self._message_handler = hs.get_message_handler()
self._server_notices_mxid = hs.config.server_notices_mxid
self.config = hs.config
self.http_client = hs.get_simple_http_client()
@ -141,6 +162,8 @@ class FederationHandler(BaseHandler):
self.third_party_event_rules = hs.get_third_party_event_rules()
self._ephemeral_messages_enabled = hs.config.enable_ephemeral_messages
@defer.inlineCallbacks
def on_receive_pdu(self, origin, pdu, sent_to_us_directly=False):
""" Process a PDU received via a federation /send/ transaction, or
@ -594,14 +617,14 @@ class FederationHandler(BaseHandler):
for e in auth_chain
if e.event_id in auth_ids or e.type == EventTypes.Create
}
event_infos.append({"event": e, "auth_events": auth})
event_infos.append(_NewEventInfo(event=e, auth_events=auth))
seen_ids.add(e.event_id)
logger.info(
"[%s %s] persisting newly-received auth/state events %s",
room_id,
event_id,
[e["event"].event_id for e in event_infos],
[e.event.event_id for e in event_infos],
)
yield self._handle_new_events(origin, event_infos)
@ -792,9 +815,9 @@ class FederationHandler(BaseHandler):
a.internal_metadata.outlier = True
ev_infos.append(
{
"event": a,
"auth_events": {
_NewEventInfo(
event=a,
auth_events={
(
auth_events[a_id].type,
auth_events[a_id].state_key,
@ -802,7 +825,7 @@ class FederationHandler(BaseHandler):
for a_id in a.auth_event_ids()
if a_id in auth_events
},
}
)
)
# Step 1b: persist the events in the chunk we fetched state for (i.e.
@ -814,10 +837,10 @@ class FederationHandler(BaseHandler):
assert not ev.internal_metadata.is_outlier()
ev_infos.append(
{
"event": ev,
"state": events_to_state[e_id],
"auth_events": {
_NewEventInfo(
event=ev,
state=events_to_state[e_id],
auth_events={
(
auth_events[a_id].type,
auth_events[a_id].state_key,
@ -825,7 +848,7 @@ class FederationHandler(BaseHandler):
for a_id in ev.auth_event_ids()
if a_id in auth_events
},
}
)
)
yield self._handle_new_events(dest, ev_infos, backfilled=True)
@ -1428,9 +1451,9 @@ class FederationHandler(BaseHandler):
return event
@defer.inlineCallbacks
def do_remotely_reject_invite(self, target_hosts, room_id, user_id):
def do_remotely_reject_invite(self, target_hosts, room_id, user_id, content):
origin, event, event_format_version = yield self._make_and_verify_event(
target_hosts, room_id, user_id, "leave"
target_hosts, room_id, user_id, "leave", content=content,
)
# Mark as outlier as we don't have any state for this event; we're not
# even in the room.
@ -1710,7 +1733,12 @@ class FederationHandler(BaseHandler):
return context
@defer.inlineCallbacks
def _handle_new_events(self, origin, event_infos, backfilled=False):
def _handle_new_events(
self,
origin: str,
event_infos: Iterable[_NewEventInfo],
backfilled: bool = False,
):
"""Creates the appropriate contexts and persists events. The events
should not depend on one another, e.g. this should be used to persist
a bunch of outliers, but not a chunk of individual events that depend
@ -1720,14 +1748,14 @@ class FederationHandler(BaseHandler):
"""
@defer.inlineCallbacks
def prep(ev_info):
event = ev_info["event"]
def prep(ev_info: _NewEventInfo):
event = ev_info.event
with nested_logging_context(suffix=event.event_id):
res = yield self._prep_event(
origin,
event,
state=ev_info.get("state"),
auth_events=ev_info.get("auth_events"),
state=ev_info.state,
auth_events=ev_info.auth_events,
backfilled=backfilled,
)
return res
@ -1741,7 +1769,7 @@ class FederationHandler(BaseHandler):
yield self.persist_events_and_notify(
[
(ev_info["event"], context)
(ev_info.event, context)
for ev_info, context in zip(event_infos, contexts)
],
backfilled=backfilled,
@ -1843,7 +1871,14 @@ class FederationHandler(BaseHandler):
yield self.persist_events_and_notify([(event, new_event_context)])
@defer.inlineCallbacks
def _prep_event(self, origin, event, state, auth_events, backfilled):
def _prep_event(
self,
origin: str,
event: EventBase,
state: Optional[Iterable[EventBase]],
auth_events: Optional[Dict[Tuple[str, str], EventBase]],
backfilled: bool,
):
"""
Args:
@ -1851,7 +1886,7 @@ class FederationHandler(BaseHandler):
event:
state:
auth_events:
backfilled (bool)
backfilled:
Returns:
Deferred, which resolves to synapse.events.snapshot.EventContext
@ -1887,15 +1922,16 @@ class FederationHandler(BaseHandler):
return context
@defer.inlineCallbacks
def _check_for_soft_fail(self, event, state, backfilled):
def _check_for_soft_fail(
self, event: EventBase, state: Optional[Iterable[EventBase]], backfilled: bool
):
"""Checks if we should soft fail the event, if so marks the event as
such.
Args:
event (FrozenEvent)
state (dict|None): The state at the event if we don't have all the
event's prev events
backfilled (bool): Whether the event is from backfill
event
state: The state at the event if we don't have all the event's prev events
backfilled: Whether the event is from backfill
Returns:
Deferred
@ -2040,8 +2076,10 @@ class FederationHandler(BaseHandler):
auth_events (dict[(str, str)->synapse.events.EventBase]):
Map from (event_type, state_key) to event
What we expect the event's auth_events to be, based on the event's
position in the dag. I think? maybe??
Normally, our calculated auth_events based on the state of the room
at the event's position in the DAG, though occasionally (eg if the
event is an outlier), may be the auth events claimed by the remote
server.
Also NB that this function adds entries to it.
Returns:
@ -2091,35 +2129,35 @@ class FederationHandler(BaseHandler):
origin (str):
event (synapse.events.EventBase):
context (synapse.events.snapshot.EventContext):
auth_events (dict[(str, str)->synapse.events.EventBase]):
Map from (event_type, state_key) to event
Normally, our calculated auth_events based on the state of the room
at the event's position in the DAG, though occasionally (eg if the
event is an outlier), may be the auth events claimed by the remote
server.
Also NB that this function adds entries to it.
Returns:
defer.Deferred[EventContext]: updated context
"""
event_auth_events = set(event.auth_event_ids())
if event.is_state():
event_key = (event.type, event.state_key)
else:
event_key = None
# if the event's auth_events refers to events which are not in our
# calculated auth_events, we need to fetch those events from somewhere.
#
# we start by fetching them from the store, and then try calling /event_auth/.
# missing_auth is the set of the event's auth_events which we don't yet have
# in auth_events.
missing_auth = event_auth_events.difference(
e.event_id for e in auth_events.values()
)
# if we have missing events, we need to fetch those events from somewhere.
#
# we start by checking if they are in the store, and then try calling /event_auth/.
if missing_auth:
# TODO: can we use store.have_seen_events here instead?
have_events = yield self.store.get_seen_events_with_rejections(missing_auth)
logger.debug("Got events %s from store", have_events)
missing_auth.difference_update(have_events.keys())
else:
have_events = {}
have_events.update({e.event_id: "" for e in auth_events.values()})
have_events = yield self.store.have_seen_events(missing_auth)
logger.debug("Events %s are in the store", have_events)
missing_auth.difference_update(have_events)
if missing_auth:
# If we don't have all the auth events, we need to get them.
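
The `missing_auth` bookkeeping above is plain set arithmetic; a worked example with made-up event IDs:

```python
# Hypothetical event IDs
event_auth_events = {"$a", "$b", "$c"}   # auth_events the event claims
calculated = {"$a"}                      # auth events we calculated ourselves

missing_auth = event_auth_events.difference(calculated)
print(sorted(missing_auth))  # ['$b', '$c'] -> look these up in the store

have_events = {"$b"}         # suppose the store has already seen $b
missing_auth.difference_update(have_events)
print(sorted(missing_auth))  # ['$c'] -> fetch via /event_auth/ as a fallback
```
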
@ -2165,19 +2203,18 @@ class FederationHandler(BaseHandler):
except AuthError:
pass
have_events = yield self.store.get_seen_events_with_rejections(
event.auth_event_ids()
)
except Exception:
# FIXME:
logger.exception("Failed to get auth chain")
if event.internal_metadata.is_outlier():
# XXX: given that, for an outlier, we'll be working with the
# event's *claimed* auth events rather than those we calculated:
# (a) is there any point in this test, since different_auth below will
# obviously be empty
# (b) alternatively, why don't we do it earlier?
logger.info("Skipping auth_event fetch for outlier")
return context
# FIXME: Assumes we have and stored all the state for all the
# prev_events
different_auth = event_auth_events.difference(
e.event_id for e in auth_events.values()
)
@ -2191,32 +2228,37 @@ class FederationHandler(BaseHandler):
different_auth,
)
# XXX: currently this checks for redactions but I'm not convinced that is
# necessary?
different_events = yield self.store.get_events_as_list(different_auth)
for d in different_events:
if d.room_id != event.room_id:
logger.warning(
"Event %s refers to auth_event %s which is in a different room",
event.event_id,
d.event_id,
)
# don't attempt to resolve the claimed auth events against our own
# in this case: just use our own auth events.
#
# XXX: should we reject the event in this case? It feels like we should,
# but then shouldn't we also do so if we've failed to fetch any of the
# auth events?
return context
# now we state-resolve between our own idea of the auth events, and the remote's
# idea of them.
local_state = auth_events.values()
remote_auth_events = dict(auth_events)
remote_auth_events.update({(d.type, d.state_key): d for d in different_events})
remote_state = remote_auth_events.values()
room_version = yield self.store.get_room_version(event.room_id)
different_events = yield make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self.store.get_event, d, allow_none=True, allow_rejected=False
)
for d in different_auth
if d in have_events and not have_events[d]
],
consumeErrors=True,
)
).addErrback(unwrapFirstError)
if different_events:
local_view = dict(auth_events)
remote_view = dict(auth_events)
remote_view.update(
{(d.type, d.state_key): d for d in different_events if d}
)
new_state = yield self.state_handler.resolve_events(
room_version,
[list(local_view.values()), list(remote_view.values())],
event,
room_version, (local_state, remote_state), event
)
logger.info(
@ -2231,13 +2273,13 @@ class FederationHandler(BaseHandler):
auth_events.update(new_state)
context = yield self._update_context_for_auth_events(
event, context, auth_events, event_key
event, context, auth_events
)
return context
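
The local/remote split feeding `resolve_events` above can be illustrated with a toy auth map (IDs hypothetical; real values are `EventBase` objects rather than strings):

```python
# Both views are keyed by (event_type, state_key)
auth_events = {
    ("m.room.create", ""): "$create",
    ("m.room.member", "@a:x"): "$join_local",
}
different_events = {("m.room.member", "@a:x"): "$join_remote"}

local_view = dict(auth_events)
remote_view = dict(auth_events)
remote_view.update(different_events)  # remote's differing idea of the auth events

# resolve_events then runs over both lists of values:
print(sorted(local_view.values()))   # ['$create', '$join_local']
print(sorted(remote_view.values()))  # ['$create', '$join_remote']
```
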
@defer.inlineCallbacks
def _update_context_for_auth_events(self, event, context, auth_events, event_key):
def _update_context_for_auth_events(self, event, context, auth_events):
"""Update the state_ids in an event context after auth event resolution,
storing the changes as a new state group.
@ -2246,18 +2288,21 @@ class FederationHandler(BaseHandler):
context (synapse.events.snapshot.EventContext): initial event context
auth_events (dict[(str, str)->str]): Events to update in the event
auth_events (dict[(str, str)->EventBase]): Events to update in the event
context.
event_key ((str, str)): (type, state_key) for the current event.
this will not be included in the current_state in the context.
Returns:
Deferred[EventContext]: new event context
"""
# exclude the state key of the new event from the current_state in the context.
if event.is_state():
event_key = (event.type, event.state_key)
else:
event_key = None
state_updates = {
k: a.event_id for k, a in iteritems(auth_events) if k != event_key
}
current_state_ids = yield context.get_current_state_ids(self.store)
current_state_ids = dict(current_state_ids)
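
The `event_key` exclusion above keeps the event's own (type, state_key) out of the state it is authed against; a worked example with hypothetical values:

```python
# Hypothetical auth map keyed by (event_type, state_key)
auth_events = {
    ("m.room.power_levels", ""): "$pl",
    ("m.room.member", "@a:x"): "$mem",
}
# The event being handled is @a:x's own membership event:
event_key = ("m.room.member", "@a:x")

state_updates = {k: v for k, v in auth_events.items() if k != event_key}
print(state_updates)  # {('m.room.power_levels', ''): '$pl'}
```
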
@ -2459,7 +2504,7 @@ class FederationHandler(BaseHandler):
room_version, event_dict, event, context
)
EventValidator().validate_new(event)
EventValidator().validate_new(event, self.config)
# We need to tell the transaction queue to send this out, even
# though the sender isn't a local user.
@ -2574,7 +2619,7 @@ class FederationHandler(BaseHandler):
event, context = yield self.event_creation_handler.create_new_client_event(
builder=builder
)
EventValidator().validate_new(event)
EventValidator().validate_new(event, self.config)
return (event, context)
@defer.inlineCallbacks
@ -2708,6 +2753,11 @@ class FederationHandler(BaseHandler):
event_and_contexts, backfilled=backfilled
)
if self._ephemeral_messages_enabled:
for (event, context) in event_and_contexts:
# If there's an expiry timestamp on the event, schedule its expiry.
self._message_handler.maybe_schedule_expiry(event)
if not backfilled: # Never notify for backfilled events
for event, _ in event_and_contexts:
yield self._notify_persisted_event(event, max_stream_id)


@ -15,6 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Optional
from six import iteritems, itervalues, string_types
@ -22,9 +23,16 @@ from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from twisted.internet.defer import succeed
from twisted.internet.interfaces import IDelayedCall
from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, RelationTypes, UserTypes
from synapse.api.constants import (
EventContentFields,
EventTypes,
Membership,
RelationTypes,
UserTypes,
)
from synapse.api.errors import (
AuthError,
Codes,
@ -62,6 +70,17 @@ class MessageHandler(object):
self.storage = hs.get_storage()
self.state_store = self.storage.state
self._event_serializer = hs.get_event_client_serializer()
self._ephemeral_events_enabled = hs.config.enable_ephemeral_messages
self._is_worker_app = bool(hs.config.worker_app)
# The scheduled call to self._expire_event. None if no call is currently
# scheduled.
self._scheduled_expiry = None # type: Optional[IDelayedCall]
if not hs.config.worker_app:
run_as_background_process(
"_schedule_next_expiry", self._schedule_next_expiry
)
@defer.inlineCallbacks
def get_room_data(
@ -138,7 +157,7 @@ class MessageHandler(object):
raise NotFoundError("Can't find event for token %s" % (at_token,))
visible_events = yield filter_events_for_client(
self.storage, user_id, last_events
self.storage, user_id, last_events, apply_retention_policies=False
)
event = last_events[0]
@ -225,6 +244,100 @@ class MessageHandler(object):
for user_id, profile in iteritems(users_with_profile)
}
def maybe_schedule_expiry(self, event):
"""Schedule the expiry of an event if there's not already one scheduled,
or if the one running is for an event that will expire after the provided
timestamp.
This function needs to invalidate the event cache, which is only possible on
the master process, and therefore needs to be run on there.
Args:
event (EventBase): The event to schedule the expiry of.
"""
assert not self._is_worker_app
expiry_ts = event.content.get(EventContentFields.SELF_DESTRUCT_AFTER)
if not isinstance(expiry_ts, int) or event.is_state():
return
# _schedule_expiry_for_event won't actually schedule anything if there's already
# a task scheduled for a timestamp that's sooner than the provided one.
self._schedule_expiry_for_event(event.event_id, expiry_ts)
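
For reference, an event opting in to this expiry path carries the MSC2228 content field checked by `maybe_schedule_expiry`; the field name below is assumed from the MSC's unstable prefix, and the timestamp is hypothetical:

```python
# Shape of a self-destructing event's content (field name assumed from
# MSC2228's unstable prefix; timestamp hypothetical, in ms since the epoch):
event_content = {
    "msgtype": "m.text",
    "body": "this message will self-destruct",
    "org.matrix.self_destruct_after": 1576243200000,
}

expiry_ts = event_content.get("org.matrix.self_destruct_after")
if isinstance(expiry_ts, int):
    print("schedule expiry at", expiry_ts)
```
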
@defer.inlineCallbacks
def _schedule_next_expiry(self):
"""Retrieve the ID and the expiry timestamp of the next event to be expired,
and schedule an expiry task for it.
If there's no event left to expire, set _scheduled_expiry to None so that a
future call to maybe_schedule_expiry can schedule a new expiry task.
"""
# Try to get the expiry timestamp of the next event to expire.
res = yield self.store.get_next_event_to_expire()
if res:
event_id, expiry_ts = res
self._schedule_expiry_for_event(event_id, expiry_ts)
def _schedule_expiry_for_event(self, event_id, expiry_ts):
"""Schedule an expiry task for the provided event if there's not already one
scheduled at a timestamp that's sooner than the provided one.
Args:
event_id (str): The ID of the event to expire.
expiry_ts (int): The timestamp at which to expire the event.
"""
if self._scheduled_expiry:
# If the provided timestamp refers to a time before the scheduled time of the
# next expiry task, cancel that task and reschedule it for this timestamp.
next_scheduled_expiry_ts = self._scheduled_expiry.getTime() * 1000
if expiry_ts < next_scheduled_expiry_ts:
self._scheduled_expiry.cancel()
else:
return
# Figure out how many seconds we need to wait before expiring the event.
now_ms = self.clock.time_msec()
delay = (expiry_ts - now_ms) / 1000
# callLater doesn't support negative delays, so trim the delay to 0 if we're
# in that case.
if delay < 0:
delay = 0
logger.info("Scheduling expiry for event %s in %.3fs", event_id, delay)
self._scheduled_expiry = self.clock.call_later(
delay,
run_as_background_process,
"_expire_event",
self._expire_event,
event_id,
)
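
The delay computation above in isolation, with hypothetical timestamps:

```python
def expiry_delay_seconds(expiry_ts_ms, now_ms):
    # Same arithmetic as above: a millisecond difference converted to
    # seconds, clamped at 0 because callLater rejects negative delays.
    return max((expiry_ts_ms - now_ms) / 1000, 0)

print(expiry_delay_seconds(1576243205000, 1576243200000))  # 5.0
print(expiry_delay_seconds(1576243200000, 1576243205000))  # 0 (already past)
```
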
@defer.inlineCallbacks
def _expire_event(self, event_id):
"""Retrieve and expire an event that needs to be expired from the database.
If the event doesn't exist in the database, log it and delete the expiry date
from the database (so that we don't try to expire it again).
"""
assert self._ephemeral_events_enabled
self._scheduled_expiry = None
logger.info("Expiring event %s", event_id)
try:
# Expire the event if we know about it. This function also deletes the expiry
# date from the database in the same database transaction.
yield self.store.expire_event(event_id)
except Exception as e:
logger.error("Could not expire event %s: %r", event_id, e)
# Schedule the expiry of the next event to expire.
yield self._schedule_next_expiry()
# The duration (in ms) after which rooms should be removed from
# `_rooms_to_exclude_from_dummy_event_insertion` (with the effect that we will try
@ -250,6 +363,8 @@ class EventCreationHandler(object):
self.config = hs.config
self.require_membership_for_aliases = hs.config.require_membership_for_aliases
self.room_invite_state_types = self.hs.config.room_invite_state_types
self.send_event_to_master = ReplicationSendEventRestServlet.make_client(hs)
# This is only used to get at ratelimit function, and maybe_kick_guest_users
@ -295,6 +410,10 @@ class EventCreationHandler(object):
5 * 60 * 1000,
)
self._message_handler = hs.get_message_handler()
self._ephemeral_events_enabled = hs.config.enable_ephemeral_messages
@defer.inlineCallbacks
def create_event(
self,
@ -417,7 +536,7 @@ class EventCreationHandler(object):
403, "You must be in the room to create an alias for it"
)
self.validator.validate_new(event)
self.validator.validate_new(event, self.config)
return (event, context)
@ -634,7 +753,7 @@ class EventCreationHandler(object):
if requester:
context.app_service = requester.app_service
self.validator.validate_new(event)
self.validator.validate_new(event, self.config)
# If this event is an annotation then we check that the sender
# can't annotate the same way twice (e.g. stops users from liking an
@ -799,7 +918,7 @@ class EventCreationHandler(object):
state_to_include_ids = [
e_id
for k, e_id in iteritems(current_state_ids)
if k[0] in self.hs.config.room_invite_state_types
if k[0] in self.room_invite_state_types
or k == (EventTypes.Member, event.sender)
]
@ -877,6 +996,10 @@ class EventCreationHandler(object):
event, context=context
)
if self._ephemeral_events_enabled:
# If there's an expiry timestamp on the event, schedule its expiry.
self._message_handler.maybe_schedule_expiry(event)
yield self.pusher_pool.on_new_notifications(event_stream_id, max_stream_id)
def _notify():


@ -15,12 +15,15 @@
# limitations under the License.
import logging
from six import iteritems
from twisted.internet import defer
from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.state import StateFilter
from synapse.types import RoomStreamToken
from synapse.util.async_helpers import ReadWriteLock
@ -80,6 +83,109 @@ class PaginationHandler(object):
self._purges_by_id = {}
self._event_serializer = hs.get_event_client_serializer()
self._retention_default_max_lifetime = hs.config.retention_default_max_lifetime
if hs.config.retention_enabled:
# Run the purge jobs described in the configuration file.
for job in hs.config.retention_purge_jobs:
self.clock.looping_call(
run_as_background_process,
job["interval"],
"purge_history_for_rooms_in_range",
self.purge_history_for_rooms_in_range,
job["shortest_max_lifetime"],
job["longest_max_lifetime"],
)
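
Each entry of `hs.config.retention_purge_jobs` consumed by the loop above carries the three keys it reads; a sketch with hypothetical values (already parsed to milliseconds):

```python
# One purge job as the loop above consumes it (values hypothetical):
job = {
    "interval": 12 * 60 * 60 * 1000,                  # run every 12 hours
    "shortest_max_lifetime": 24 * 60 * 60 * 1000,     # 1 day
    "longest_max_lifetime": 7 * 24 * 60 * 60 * 1000,  # 1 week
}
print(job["interval"], job["shortest_max_lifetime"], job["longest_max_lifetime"])
```
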
@defer.inlineCallbacks
def purge_history_for_rooms_in_range(self, min_ms, max_ms):
"""Purge outdated events from rooms within the given retention range.
If a default retention policy is defined in the server's configuration and its
'max_lifetime' is within this range, also targets rooms which don't have a
retention policy.
Args:
min_ms (int|None): Duration in milliseconds that defines the lower limit of
the range to handle (exclusive). If None, it means that the range has no
lower limit.
max_ms (int|None): Duration in milliseconds that defines the upper limit of
the range to handle (inclusive). If None, it means that the range has no
upper limit.
"""
# We want the storage layer to include rooms with no retention policy in its
# return value only if a default retention policy is defined in the server's
# configuration, and that policy's 'max_lifetime' is higher than min_ms and
# lower than (or equal to) max_ms.
if self._retention_default_max_lifetime is not None:
include_null = True
if min_ms is not None and min_ms >= self._retention_default_max_lifetime:
# The default max_lifetime is lower than (or equal to) min_ms.
include_null = False
if max_ms is not None and max_ms < self._retention_default_max_lifetime:
# The default max_lifetime is higher than max_ms.
include_null = False
else:
include_null = False
rooms = yield self.store.get_rooms_for_retention_period_in_range(
min_ms, max_ms, include_null
)
for room_id, retention_policy in iteritems(rooms):
if room_id in self._purges_in_progress_by_room:
logger.warning(
"[purge] not purging room %s as there's an ongoing purge running"
" for this room",
room_id,
)
continue
max_lifetime = retention_policy["max_lifetime"]
if max_lifetime is None:
# If max_lifetime is None, it means that include_null equals True,
# therefore we can safely assume that there is a default policy defined
# in the server's configuration.
max_lifetime = self._retention_default_max_lifetime
# Figure out what token we should start purging at.
ts = self.clock.time_msec() - max_lifetime
stream_ordering = yield self.store.find_first_stream_ordering_after_ts(ts)
r = yield self.store.get_room_event_after_stream_ordering(
room_id, stream_ordering,
)
if not r:
logger.warning(
"[purge] purging events not possible: No event found "
"(ts %i => stream_ordering %i)",
ts,
stream_ordering,
)
continue
(stream, topo, _event_id) = r
token = "t%d-%d" % (topo, stream)
purge_id = random_string(16)
self._purges_by_id[purge_id] = PurgeStatus()
logger.info(
"Starting purging events in room %s (purge_id %s)" % (room_id, purge_id)
)
# We want to purge everything, including local events, and to run the purge in
# the background so that it's not blocking any other operation apart from
# other purges in the same room.
run_as_background_process(
"_purge_history", self._purge_history, purge_id, room_id, token, True,
)
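
To make the `include_null` branching at the top of this function concrete, here it is as a standalone predicate with a couple of worked cases (durations hypothetical):

```python
def include_null_rooms(default_max_lifetime, min_ms, max_ms):
    # Mirrors the checks above: rooms without a retention policy are only
    # included when the server default falls inside (min_ms, max_ms].
    if default_max_lifetime is None:
        return False
    if min_ms is not None and min_ms >= default_max_lifetime:
        return False
    if max_ms is not None and max_ms < default_max_lifetime:
        return False
    return True

day = 24 * 60 * 60 * 1000
print(include_null_rooms(7 * day, min_ms=day, max_ms=30 * day))  # True
print(include_null_rooms(7 * day, min_ms=7 * day, max_ms=None))  # False (boundary)
```
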
def start_purge_history(self, room_id, token, delete_local_events=False):
"""Start off a history purge on a room.


@ -266,7 +266,7 @@ class RegistrationHandler(BaseHandler):
}
# Bind email to new account
yield self._register_email_threepid(user_id, threepid_dict, None, False)
yield self._register_email_threepid(user_id, threepid_dict, None)
return user_id


@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2018-2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -198,21 +199,21 @@ class RoomCreationHandler(BaseHandler):
# finally, shut down the PLs in the old room, and update them in the new
# room.
yield self._update_upgraded_room_pls(
requester, old_room_id, new_room_id, old_room_state
requester, old_room_id, new_room_id, old_room_state,
)
return new_room_id
@defer.inlineCallbacks
def _update_upgraded_room_pls(
self, requester, old_room_id, new_room_id, old_room_state
self, requester, old_room_id, new_room_id, old_room_state,
):
"""Send updated power levels in both rooms after an upgrade
Args:
requester (synapse.types.Requester): the user requesting the upgrade
old_room_id (unicode): the id of the room to be replaced
new_room_id (unicode): the id of the replacement room
old_room_id (str): the id of the room to be replaced
new_room_id (str): the id of the replacement room
old_room_state (dict[tuple[str, str], str]): the state map for the old room
Returns:
@ -298,7 +299,7 @@ class RoomCreationHandler(BaseHandler):
tombstone_event_id (unicode|str): the ID of the tombstone event in the old
room.
Returns:
Deferred[None]
Deferred
"""
user_id = requester.user.to_string()
@ -333,6 +334,7 @@ class RoomCreationHandler(BaseHandler):
(EventTypes.Encryption, ""),
(EventTypes.ServerACL, ""),
(EventTypes.RelatedGroups, ""),
(EventTypes.PowerLevels, ""),
)
old_room_state_ids = yield self.store.get_filtered_current_state_ids(
@ -346,6 +348,31 @@ class RoomCreationHandler(BaseHandler):
if old_event:
initial_state[k] = old_event.content
# Resolve the minimum power level required to send any state event
# We will give the upgrading user this power level temporarily (if necessary) such that
# they are able to copy all of the state events over, then revert them back to their
# original power level afterwards in _update_upgraded_room_pls
# Copy over user power levels now as this will not be possible with >100PL users once
# the room has been created
power_levels = initial_state[(EventTypes.PowerLevels, "")]
# Calculate the minimum power level needed to clone the room
event_power_levels = power_levels.get("events", {})
state_default = power_levels.get("state_default", 0)
ban = power_levels.get("ban")
needed_power_level = max(state_default, ban, max(event_power_levels.values()))
# Raise the requester's power level in the new room if necessary
current_power_level = power_levels["users"][requester.user.to_string()]
if current_power_level < needed_power_level:
# Assign this power level to the requester
power_levels["users"][requester.user.to_string()] = needed_power_level
# Set the power levels to the modified state
initial_state[(EventTypes.PowerLevels, "")] = power_levels
yield self._send_events_for_new_room(
requester,
new_room_id,
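
A worked example of the minimum-power-level computation in the hunk above (the `power_levels` content is hypothetical):

```python
power_levels = {
    "state_default": 50,
    "ban": 50,
    "events": {"m.room.name": 50, "m.room.power_levels": 100},
    "users": {"@upgrader:example.org": 60},
}

# Minimum PL needed to re-send every copied state event in the new room:
needed = max(
    power_levels.get("state_default", 0),
    power_levels.get("ban"),
    max(power_levels.get("events", {}).values()),
)
print(needed)  # 100 -> the upgrading user (at 60) is temporarily bumped to 100
```
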
@ -874,6 +901,10 @@ class RoomContextHandler(object):
room_id, event_id, before_limit, after_limit, event_filter
)
if event_filter:
results["events_before"] = event_filter.filter(results["events_before"])
results["events_after"] = event_filter.filter(results["events_after"])
results["events_before"] = yield filter_evts(results["events_before"])
results["events_after"] = yield filter_evts(results["events_after"])
results["event"] = event
@ -902,7 +933,12 @@ class RoomContextHandler(object):
state = yield self.state_store.get_state_for_events(
[last_event_id], state_filter=state_filter
)
results["state"] = list(state[last_event_id].values())
state_events = list(state[last_event_id].values())
if event_filter:
state_events = event_filter.filter(state_events)
results["state"] = state_events
# We use a dummy token here as we only care about the room portion of
# the token, which we replace.


@ -94,7 +94,9 @@ class RoomMemberHandler(object):
raise NotImplementedError()
@abc.abstractmethod
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
def _remote_reject_invite(
self, requester, remote_room_hosts, room_id, target, content
):
"""Attempt to reject an invite for a room this server is not in. If we
fail to do so we locally mark the invite as rejected.
@ -104,6 +106,7 @@ class RoomMemberHandler(object):
reject invite
room_id (str)
target (UserID): The user rejecting the invite
content (dict): The content for the rejection event
Returns:
Deferred[dict]: A dictionary to be returned to the client, may
@ -471,7 +474,7 @@ class RoomMemberHandler(object):
# send the rejection to the inviter's HS.
remote_room_hosts = remote_room_hosts + [inviter.domain]
res = yield self._remote_reject_invite(
requester, remote_room_hosts, room_id, target
requester, remote_room_hosts, room_id, target, content,
)
return res
@ -971,13 +974,15 @@ class RoomMemberMasterHandler(RoomMemberHandler):
)
@defer.inlineCallbacks
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
def _remote_reject_invite(
self, requester, remote_room_hosts, room_id, target, content
):
"""Implements RoomMemberHandler._remote_reject_invite
"""
fed_handler = self.federation_handler
try:
ret = yield fed_handler.do_remotely_reject_invite(
remote_room_hosts, room_id, target.to_string()
remote_room_hosts, room_id, target.to_string(), content=content,
)
return ret
except Exception as e:


@ -55,7 +55,9 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
return ret
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
def _remote_reject_invite(
self, requester, remote_room_hosts, room_id, target, content
):
"""Implements RoomMemberHandler._remote_reject_invite
"""
return self._remote_reject_client(
@ -63,6 +65,7 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
remote_room_hosts=remote_room_hosts,
room_id=room_id,
user_id=target.to_string(),
content=content,
)
def _user_joined_room(self, target, room_id):


@ -22,8 +22,6 @@ from six import iteritems, itervalues
from prometheus_client import Counter
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.logging.context import LoggingContext
from synapse.push.clientformat import format_push_rules_for_user
@ -241,8 +239,7 @@ class SyncHandler(object):
expiry_ms=LAZY_LOADED_MEMBERS_CACHE_MAX_AGE,
)
@defer.inlineCallbacks
def wait_for_sync_for_user(
async def wait_for_sync_for_user(
self, sync_config, since_token=None, timeout=0, full_state=False
):
"""Get the sync for a client if we have new data for it now. Otherwise
@ -255,9 +252,9 @@ class SyncHandler(object):
# not been exceeded (if not part of the group by this point, almost certain
# auth_blocking will occur)
user_id = sync_config.user.to_string()
yield self.auth.check_auth_blocking(user_id)
await self.auth.check_auth_blocking(user_id)
res = yield self.response_cache.wrap(
res = await self.response_cache.wrap(
sync_config.request_key,
self._wait_for_sync_for_user,
sync_config,
@ -267,8 +264,9 @@ class SyncHandler(object):
)
return res
@defer.inlineCallbacks
def _wait_for_sync_for_user(self, sync_config, since_token, timeout, full_state):
async def _wait_for_sync_for_user(
self, sync_config, since_token, timeout, full_state
):
if since_token is None:
sync_type = "initial_sync"
elif full_state:
@ -283,7 +281,7 @@ class SyncHandler(object):
if timeout == 0 or since_token is None or full_state:
# we are going to return immediately, so don't bother calling
# notifier.wait_for_events.
result = yield self.current_sync_for_user(
result = await self.current_sync_for_user(
sync_config, since_token, full_state=full_state
)
else:
@ -291,7 +289,7 @@ class SyncHandler(object):
def current_sync_callback(before_token, after_token):
return self.current_sync_for_user(sync_config, since_token)
result = yield self.notifier.wait_for_events(
result = await self.notifier.wait_for_events(
sync_config.user.to_string(),
timeout,
current_sync_callback,
@ -314,15 +312,13 @@ class SyncHandler(object):
"""
return self.generate_sync_result(sync_config, since_token, full_state)
@defer.inlineCallbacks
def push_rules_for_user(self, user):
async def push_rules_for_user(self, user):
user_id = user.to_string()
rules = yield self.store.get_push_rules_for_user(user_id)
rules = await self.store.get_push_rules_for_user(user_id)
rules = format_push_rules_for_user(user, rules)
return rules
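
The recurring change in this file is the conversion from Twisted's `@defer.inlineCallbacks`/`yield` style to native coroutines, as in the hunk above. A runnable before/after sketch (`FakeStore` is a stand-in, not Synapse's API):

```python
from twisted.internet import defer

class FakeStore:
    def get_push_rules_for_user(self, user_id):
        # A pre-fired Deferred standing in for a database call.
        return defer.succeed(["rule1", "rule2"])

# Old style, as removed throughout this file:
@defer.inlineCallbacks
def push_rules_old(store, user_id):
    rules = yield store.get_push_rules_for_user(user_id)
    return rules

# New style, as added (Deferreds are awaitable in modern Twisted):
async def push_rules_new(store, user_id):
    return await store.get_push_rules_for_user(user_id)

push_rules_old(FakeStore(), "@a:x").addCallback(print)
defer.ensureDeferred(push_rules_new(FakeStore(), "@a:x")).addCallback(print)
```
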
@defer.inlineCallbacks
def ephemeral_by_room(self, sync_result_builder, now_token, since_token=None):
async def ephemeral_by_room(self, sync_result_builder, now_token, since_token=None):
"""Get the ephemeral events for each room the user is in
Args:
sync_result_builder(SyncResultBuilder)
@ -343,7 +339,7 @@ class SyncHandler(object):
room_ids = sync_result_builder.joined_room_ids
typing_source = self.event_sources.sources["typing"]
typing, typing_key = yield typing_source.get_new_events(
typing, typing_key = await typing_source.get_new_events(
user=sync_config.user,
from_key=typing_key,
limit=sync_config.filter_collection.ephemeral_limit(),
@ -365,7 +361,7 @@ class SyncHandler(object):
receipt_key = since_token.receipt_key if since_token else "0"
receipt_source = self.event_sources.sources["receipt"]
receipts, receipt_key = yield receipt_source.get_new_events(
receipts, receipt_key = await receipt_source.get_new_events(
user=sync_config.user,
from_key=receipt_key,
limit=sync_config.filter_collection.ephemeral_limit(),
@ -382,8 +378,7 @@ class SyncHandler(object):
return now_token, ephemeral_by_room
@defer.inlineCallbacks
def _load_filtered_recents(
async def _load_filtered_recents(
self,
room_id,
sync_config,
@ -415,10 +410,10 @@ class SyncHandler(object):
# ensure that we always include current state in the timeline
current_state_ids = frozenset()
if any(e.is_state() for e in recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = await self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(itervalues(current_state_ids))
recents = yield filter_events_for_client(
recents = await filter_events_for_client(
self.storage,
sync_config.user.to_string(),
recents,
@ -449,14 +444,14 @@ class SyncHandler(object):
# Otherwise, we want to return the last N events in the room
# in topological ordering.
if since_key:
events, end_key = yield self.store.get_room_events_stream_for_room(
events, end_key = await self.store.get_room_events_stream_for_room(
room_id,
limit=load_limit + 1,
from_key=since_key,
to_key=end_key,
)
else:
events, end_key = yield self.store.get_recent_events_for_room(
events, end_key = await self.store.get_recent_events_for_room(
room_id, limit=load_limit + 1, end_token=end_key
)
loaded_recents = sync_config.filter_collection.filter_room_timeline(
@ -468,10 +463,10 @@ class SyncHandler(object):
# ensure that we always include current state in the timeline
current_state_ids = frozenset()
if any(e.is_state() for e in loaded_recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = await self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(itervalues(current_state_ids))
loaded_recents = yield filter_events_for_client(
loaded_recents = await filter_events_for_client(
self.storage,
sync_config.user.to_string(),
loaded_recents,
@ -498,8 +493,7 @@ class SyncHandler(object):
limited=limited or newly_joined_room,
)
@defer.inlineCallbacks
def get_state_after_event(self, event, state_filter=StateFilter.all()):
async def get_state_after_event(self, event, state_filter=StateFilter.all()):
"""
Get the room state after the given event
@ -511,7 +505,7 @@ class SyncHandler(object):
Returns:
A Deferred map from ((type, state_key)->Event)
"""
state_ids = yield self.state_store.get_state_ids_for_event(
state_ids = await self.state_store.get_state_ids_for_event(
event.event_id, state_filter=state_filter
)
if event.is_state():
@ -519,8 +513,9 @@ class SyncHandler(object):
state_ids[(event.type, event.state_key)] = event.event_id
return state_ids
@defer.inlineCallbacks
def get_state_at(self, room_id, stream_position, state_filter=StateFilter.all()):
async def get_state_at(
self, room_id, stream_position, state_filter=StateFilter.all()
):
""" Get the room state at a particular stream position
Args:
@ -536,13 +531,13 @@ class SyncHandler(object):
# get_recent_events_for_room operates by topo ordering. This therefore
# does not reliably give you the state at the given stream position.
# (https://github.com/matrix-org/synapse/issues/3305)
last_events, _ = yield self.store.get_recent_events_for_room(
last_events, _ = await self.store.get_recent_events_for_room(
room_id, end_token=stream_position.room_key, limit=1
)
if last_events:
last_event = last_events[-1]
state = yield self.get_state_after_event(
state = await self.get_state_after_event(
last_event, state_filter=state_filter
)
@ -551,8 +546,7 @@ class SyncHandler(object):
state = {}
return state
@defer.inlineCallbacks
def compute_summary(self, room_id, sync_config, batch, state, now_token):
async def compute_summary(self, room_id, sync_config, batch, state, now_token):
""" Works out a room summary block for this room, summarising the number
of joined members in the room, and providing the 'hero' members if the
room has no name so clients can consistently name rooms. Also adds
@ -574,7 +568,7 @@ class SyncHandler(object):
# FIXME: we could/should get this from room_stats when matthew/stats lands
# FIXME: this promulgates https://github.com/matrix-org/synapse/issues/3305
last_events, _ = yield self.store.get_recent_event_ids_for_room(
last_events, _ = await self.store.get_recent_event_ids_for_room(
room_id, end_token=now_token.room_key, limit=1
)
@ -582,7 +576,7 @@ class SyncHandler(object):
return None
last_event = last_events[-1]
state_ids = yield self.state_store.get_state_ids_for_event(
state_ids = await self.state_store.get_state_ids_for_event(
last_event.event_id,
state_filter=StateFilter.from_types(
[(EventTypes.Name, ""), (EventTypes.CanonicalAlias, "")]
@ -590,7 +584,7 @@ class SyncHandler(object):
)
# this is heavily cached, thus: fast.
details = yield self.store.get_room_summary(room_id)
details = await self.store.get_room_summary(room_id)
name_id = state_ids.get((EventTypes.Name, ""))
canonical_alias_id = state_ids.get((EventTypes.CanonicalAlias, ""))
@ -608,12 +602,12 @@ class SyncHandler(object):
# calculating heroes. Empty strings are falsey, so we check
# for the "name" value and default to an empty string.
if name_id:
name = yield self.store.get_event(name_id, allow_none=True)
name = await self.store.get_event(name_id, allow_none=True)
if name and name.content.get("name"):
return summary
if canonical_alias_id:
canonical_alias = yield self.store.get_event(
canonical_alias = await self.store.get_event(
canonical_alias_id, allow_none=True
)
if canonical_alias and canonical_alias.content.get("alias"):
@ -678,7 +672,7 @@ class SyncHandler(object):
)
]
missing_hero_state = yield self.store.get_events(missing_hero_event_ids)
missing_hero_state = await self.store.get_events(missing_hero_event_ids)
missing_hero_state = missing_hero_state.values()
for s in missing_hero_state:
@ -697,8 +691,7 @@ class SyncHandler(object):
logger.debug("found LruCache for %r", cache_key)
return cache
@defer.inlineCallbacks
def compute_state_delta(
async def compute_state_delta(
self, room_id, batch, sync_config, since_token, now_token, full_state
):
""" Works out the difference in state between the start of the timeline
@ -759,16 +752,16 @@ class SyncHandler(object):
if full_state:
if batch:
current_state_ids = yield self.state_store.get_state_ids_for_event(
current_state_ids = await self.state_store.get_state_ids_for_event(
batch.events[-1].event_id, state_filter=state_filter
)
state_ids = yield self.state_store.get_state_ids_for_event(
state_ids = await self.state_store.get_state_ids_for_event(
batch.events[0].event_id, state_filter=state_filter
)
else:
current_state_ids = yield self.get_state_at(
current_state_ids = await self.get_state_at(
room_id, stream_position=now_token, state_filter=state_filter
)
@ -783,13 +776,13 @@ class SyncHandler(object):
)
elif batch.limited:
if batch:
state_at_timeline_start = yield self.state_store.get_state_ids_for_event(
state_at_timeline_start = await self.state_store.get_state_ids_for_event(
batch.events[0].event_id, state_filter=state_filter
)
else:
# We can get here if the user has ignored the senders of all
# the recent events.
state_at_timeline_start = yield self.get_state_at(
state_at_timeline_start = await self.get_state_at(
room_id, stream_position=now_token, state_filter=state_filter
)
@ -807,19 +800,19 @@ class SyncHandler(object):
# about them).
state_filter = StateFilter.all()
state_at_previous_sync = yield self.get_state_at(
state_at_previous_sync = await self.get_state_at(
room_id, stream_position=since_token, state_filter=state_filter
)
if batch:
current_state_ids = yield self.state_store.get_state_ids_for_event(
current_state_ids = await self.state_store.get_state_ids_for_event(
batch.events[-1].event_id, state_filter=state_filter
)
else:
# It's not clear how we get here, but empirically we do
# (#5407). Logging has been added elsewhere to try and
# figure out where this state comes from.
current_state_ids = yield self.get_state_at(
current_state_ids = await self.get_state_at(
room_id, stream_position=now_token, state_filter=state_filter
)
@ -843,7 +836,7 @@ class SyncHandler(object):
# So we fish out all the member events corresponding to the
# timeline here, and then dedupe any redundant ones below.
state_ids = yield self.state_store.get_state_ids_for_event(
state_ids = await self.state_store.get_state_ids_for_event(
batch.events[0].event_id,
# we only want members!
state_filter=StateFilter.from_types(
@ -883,7 +876,7 @@ class SyncHandler(object):
state = {}
if state_ids:
state = yield self.store.get_events(list(state_ids.values()))
state = await self.store.get_events(list(state_ids.values()))
return {
(e.type, e.state_key): e
@ -892,10 +885,9 @@ class SyncHandler(object):
)
}
@defer.inlineCallbacks
def unread_notifs_for_room_id(self, room_id, sync_config):
async def unread_notifs_for_room_id(self, room_id, sync_config):
with Measure(self.clock, "unread_notifs_for_room_id"):
last_unread_event_id = yield self.store.get_last_receipt_event_id_for_user(
last_unread_event_id = await self.store.get_last_receipt_event_id_for_user(
user_id=sync_config.user.to_string(),
room_id=room_id,
receipt_type="m.read",
@ -903,7 +895,7 @@ class SyncHandler(object):
notifs = []
if last_unread_event_id:
notifs = yield self.store.get_unread_event_push_actions_by_room_for_user(
notifs = await self.store.get_unread_event_push_actions_by_room_for_user(
room_id, sync_config.user.to_string(), last_unread_event_id
)
return notifs
@ -912,8 +904,9 @@ class SyncHandler(object):
# count is whatever it was last time.
return None
@defer.inlineCallbacks
def generate_sync_result(self, sync_config, since_token=None, full_state=False):
async def generate_sync_result(
self, sync_config, since_token=None, full_state=False
):
"""Generates a sync result.
Args:
@ -928,7 +921,7 @@ class SyncHandler(object):
# this is due to some of the underlying streams not supporting the ability
# to query up to a given point.
# Always use the `now_token` in `SyncResultBuilder`
now_token = yield self.event_sources.get_current_token()
now_token = await self.event_sources.get_current_token()
logger.info(
"Calculating sync response for %r between %s and %s",
@ -944,10 +937,9 @@ class SyncHandler(object):
# See https://github.com/matrix-org/matrix-doc/issues/1144
raise NotImplementedError()
else:
joined_room_ids = yield self.get_rooms_for_user_at(
joined_room_ids = await self.get_rooms_for_user_at(
user_id, now_token.room_stream_id
)
sync_result_builder = SyncResultBuilder(
sync_config,
full_state,
@ -956,11 +948,11 @@ class SyncHandler(object):
joined_room_ids=joined_room_ids,
)
account_data_by_room = yield self._generate_sync_entry_for_account_data(
account_data_by_room = await self._generate_sync_entry_for_account_data(
sync_result_builder
)
res = yield self._generate_sync_entry_for_rooms(
res = await self._generate_sync_entry_for_rooms(
sync_result_builder, account_data_by_room
)
newly_joined_rooms, newly_joined_or_invited_users, _, _ = res
@ -970,13 +962,13 @@ class SyncHandler(object):
since_token is None and sync_config.filter_collection.blocks_all_presence()
)
if self.hs_config.use_presence and not block_all_presence_data:
yield self._generate_sync_entry_for_presence(
await self._generate_sync_entry_for_presence(
sync_result_builder, newly_joined_rooms, newly_joined_or_invited_users
)
yield self._generate_sync_entry_for_to_device(sync_result_builder)
await self._generate_sync_entry_for_to_device(sync_result_builder)
device_lists = yield self._generate_sync_entry_for_device_list(
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_users=newly_joined_or_invited_users,
@ -987,11 +979,11 @@ class SyncHandler(object):
device_id = sync_config.device_id
one_time_key_counts = {}
if device_id:
one_time_key_counts = yield self.store.count_e2e_one_time_keys(
one_time_key_counts = await self.store.count_e2e_one_time_keys(
user_id, device_id
)
yield self._generate_sync_entry_for_groups(sync_result_builder)
await self._generate_sync_entry_for_groups(sync_result_builder)
# debug for https://github.com/matrix-org/synapse/issues/4422
for joined_room in sync_result_builder.joined:
@ -1015,18 +1007,17 @@ class SyncHandler(object):
)
@measure_func("_generate_sync_entry_for_groups")
@defer.inlineCallbacks
def _generate_sync_entry_for_groups(self, sync_result_builder):
async def _generate_sync_entry_for_groups(self, sync_result_builder):
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
now_token = sync_result_builder.now_token
if since_token and since_token.groups_key:
results = yield self.store.get_groups_changes_for_user(
results = await self.store.get_groups_changes_for_user(
user_id, since_token.groups_key, now_token.groups_key
)
else:
results = yield self.store.get_all_groups_for_user(
results = await self.store.get_all_groups_for_user(
user_id, now_token.groups_key
)
@ -1059,8 +1050,7 @@ class SyncHandler(object):
)
@measure_func("_generate_sync_entry_for_device_list")
@defer.inlineCallbacks
def _generate_sync_entry_for_device_list(
async def _generate_sync_entry_for_device_list(
self,
sync_result_builder,
newly_joined_rooms,
@ -1108,32 +1098,32 @@ class SyncHandler(object):
# room with by looking at all users that have left a room plus users
# that were in a room we've left.
users_who_share_room = yield self.store.get_users_who_share_room_with_user(
users_who_share_room = await self.store.get_users_who_share_room_with_user(
user_id
)
# Step 1a, check for changes in devices of users we share a room with
users_that_have_changed = yield self.store.get_users_whose_devices_changed(
users_that_have_changed = await self.store.get_users_whose_devices_changed(
since_token.device_list_key, users_who_share_room
)
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = yield self.state.get_current_users_in_room(room_id)
joined_users = await self.state.get_current_users_in_room(room_id)
newly_joined_or_invited_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_users)
user_signatures_changed = yield self.store.get_users_whose_signatures_changed(
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = yield self.state.get_current_users_in_room(room_id)
left_users = await self.state.get_current_users_in_room(room_id)
newly_left_users.update(left_users)
# Remove any users that we still share a room with.
@ -1143,8 +1133,7 @@ class SyncHandler(object):
else:
return DeviceLists(changed=[], left=[])
@defer.inlineCallbacks
def _generate_sync_entry_for_to_device(self, sync_result_builder):
async def _generate_sync_entry_for_to_device(self, sync_result_builder):
"""Generates the portion of the sync response. Populates
`sync_result_builder` with the result.
@ -1165,14 +1154,14 @@ class SyncHandler(object):
# We only delete messages when a new message comes in, but that's
# fine so long as we delete them at some point.
deleted = yield self.store.delete_messages_for_device(
deleted = await self.store.delete_messages_for_device(
user_id, device_id, since_stream_id
)
logger.debug(
"Deleted %d to-device messages up to %d", deleted, since_stream_id
)
messages, stream_id = yield self.store.get_new_messages_for_device(
messages, stream_id = await self.store.get_new_messages_for_device(
user_id, device_id, since_stream_id, now_token.to_device_key
)
@ -1190,8 +1179,7 @@ class SyncHandler(object):
else:
sync_result_builder.to_device = []
@defer.inlineCallbacks
def _generate_sync_entry_for_account_data(self, sync_result_builder):
async def _generate_sync_entry_for_account_data(self, sync_result_builder):
"""Generates the account data portion of the sync response. Populates
`sync_result_builder` with the result.
@ -1209,25 +1197,25 @@ class SyncHandler(object):
(
account_data,
account_data_by_room,
) = yield self.store.get_updated_account_data_for_user(
) = await self.store.get_updated_account_data_for_user(
user_id, since_token.account_data_key
)
push_rules_changed = yield self.store.have_push_rules_changed_for_user(
push_rules_changed = await self.store.have_push_rules_changed_for_user(
user_id, int(since_token.push_rules_key)
)
if push_rules_changed:
account_data["m.push_rules"] = yield self.push_rules_for_user(
account_data["m.push_rules"] = await self.push_rules_for_user(
sync_config.user
)
else:
(
account_data,
account_data_by_room,
) = yield self.store.get_account_data_for_user(sync_config.user.to_string())
) = await self.store.get_account_data_for_user(sync_config.user.to_string())
account_data["m.push_rules"] = yield self.push_rules_for_user(
account_data["m.push_rules"] = await self.push_rules_for_user(
sync_config.user
)
@ -1242,8 +1230,7 @@ class SyncHandler(object):
return account_data_by_room
@defer.inlineCallbacks
def _generate_sync_entry_for_presence(
async def _generate_sync_entry_for_presence(
self, sync_result_builder, newly_joined_rooms, newly_joined_or_invited_users
):
"""Generates the presence portion of the sync response. Populates the
@ -1271,7 +1258,7 @@ class SyncHandler(object):
presence_key = None
include_offline = False
presence, presence_key = yield presence_source.get_new_events(
presence, presence_key = await presence_source.get_new_events(
user=user,
from_key=presence_key,
is_guest=sync_config.is_guest,
@ -1283,12 +1270,12 @@ class SyncHandler(object):
extra_users_ids = set(newly_joined_or_invited_users)
for room_id in newly_joined_rooms:
users = yield self.state.get_current_users_in_room(room_id)
users = await self.state.get_current_users_in_room(room_id)
extra_users_ids.update(users)
extra_users_ids.discard(user.to_string())
if extra_users_ids:
states = yield self.presence_handler.get_states(extra_users_ids)
states = await self.presence_handler.get_states(extra_users_ids)
presence.extend(states)
# Deduplicate the presence entries so that there's at most one per user
@ -1298,8 +1285,9 @@ class SyncHandler(object):
sync_result_builder.presence = presence
@defer.inlineCallbacks
def _generate_sync_entry_for_rooms(self, sync_result_builder, account_data_by_room):
async def _generate_sync_entry_for_rooms(
self, sync_result_builder, account_data_by_room
):
"""Generates the rooms portion of the sync response. Populates the
`sync_result_builder` with the result.
@ -1321,7 +1309,7 @@ class SyncHandler(object):
if block_all_room_ephemeral:
ephemeral_by_room = {}
else:
now_token, ephemeral_by_room = yield self.ephemeral_by_room(
now_token, ephemeral_by_room = await self.ephemeral_by_room(
sync_result_builder,
now_token=sync_result_builder.now_token,
since_token=sync_result_builder.since_token,
@ -1333,16 +1321,16 @@ class SyncHandler(object):
since_token = sync_result_builder.since_token
if not sync_result_builder.full_state:
if since_token and not ephemeral_by_room and not account_data_by_room:
have_changed = yield self._have_rooms_changed(sync_result_builder)
have_changed = await self._have_rooms_changed(sync_result_builder)
if not have_changed:
tags_by_room = yield self.store.get_updated_tags(
tags_by_room = await self.store.get_updated_tags(
user_id, since_token.account_data_key
)
if not tags_by_room:
logger.debug("no-oping sync")
return [], [], [], []
ignored_account_data = yield self.store.get_global_account_data_by_type_for_user(
ignored_account_data = await self.store.get_global_account_data_by_type_for_user(
"m.ignored_user_list", user_id=user_id
)
@ -1352,18 +1340,18 @@ class SyncHandler(object):
ignored_users = frozenset()
if since_token:
res = yield self._get_rooms_changed(sync_result_builder, ignored_users)
res = await self._get_rooms_changed(sync_result_builder, ignored_users)
room_entries, invited, newly_joined_rooms, newly_left_rooms = res
tags_by_room = yield self.store.get_updated_tags(
tags_by_room = await self.store.get_updated_tags(
user_id, since_token.account_data_key
)
else:
res = yield self._get_all_rooms(sync_result_builder, ignored_users)
res = await self._get_all_rooms(sync_result_builder, ignored_users)
room_entries, invited, newly_joined_rooms = res
newly_left_rooms = []
tags_by_room = yield self.store.get_tags_for_user(user_id)
tags_by_room = await self.store.get_tags_for_user(user_id)
def handle_room_entries(room_entry):
return self._generate_room_entry(
@ -1376,7 +1364,7 @@ class SyncHandler(object):
always_include=sync_result_builder.full_state,
)
yield concurrently_execute(handle_room_entries, room_entries, 10)
await concurrently_execute(handle_room_entries, room_entries, 10)
sync_result_builder.invited.extend(invited)
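
`concurrently_execute(handle_room_entries, room_entries, 10)` above bounds the fan-out; a sketch of the assumed semantics (ten workers pulling from a shared iterator), written with asyncio purely for illustration:

```python
import asyncio

async def concurrently_execute_sketch(func, args, limit):
    # Assumed semantics: `limit` workers each pull the next unclaimed item
    # from a shared iterator until it is exhausted.
    it = iter(args)

    async def worker():
        for arg in it:
            await func(arg)

    await asyncio.gather(*(worker() for _ in range(limit)))

async def main():
    async def handle(room_id):
        print("handling", room_id)

    await concurrently_execute_sketch(handle, ["!a:x", "!b:x", "!c:x"], limit=2)

asyncio.run(main())
```
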
@ -1410,8 +1398,7 @@ class SyncHandler(object):
newly_left_users,
)
@defer.inlineCallbacks
def _have_rooms_changed(self, sync_result_builder):
async def _have_rooms_changed(self, sync_result_builder):
"""Returns whether there may be any new events that should be sent down
the sync. Returns True if there are.
"""
@ -1422,7 +1409,7 @@ class SyncHandler(object):
assert since_token
# Get a list of membership change events that have happened.
rooms_changed = yield self.store.get_membership_changes_for_user(
rooms_changed = await self.store.get_membership_changes_for_user(
user_id, since_token.room_key, now_token.room_key
)
@ -1435,8 +1422,7 @@ class SyncHandler(object):
return True
return False
@defer.inlineCallbacks
def _get_rooms_changed(self, sync_result_builder, ignored_users):
async def _get_rooms_changed(self, sync_result_builder, ignored_users):
"""Gets the the changes that have happened since the last sync.
Args:
@ -1461,7 +1447,7 @@ class SyncHandler(object):
assert since_token
# Get a list of membership change events that have happened.
rooms_changed = yield self.store.get_membership_changes_for_user(
rooms_changed = await self.store.get_membership_changes_for_user(
user_id, since_token.room_key, now_token.room_key
)
@ -1499,11 +1485,11 @@ class SyncHandler(object):
continue
if room_id in sync_result_builder.joined_room_ids or has_join:
old_state_ids = yield self.get_state_at(room_id, since_token)
old_state_ids = await self.get_state_at(room_id, since_token)
old_mem_ev_id = old_state_ids.get((EventTypes.Member, user_id), None)
old_mem_ev = None
if old_mem_ev_id:
old_mem_ev = yield self.store.get_event(
old_mem_ev = await self.store.get_event(
old_mem_ev_id, allow_none=True
)
@ -1536,13 +1522,13 @@ class SyncHandler(object):
newly_left_rooms.append(room_id)
else:
if not old_state_ids:
old_state_ids = yield self.get_state_at(room_id, since_token)
old_state_ids = await self.get_state_at(room_id, since_token)
old_mem_ev_id = old_state_ids.get(
(EventTypes.Member, user_id), None
)
old_mem_ev = None
if old_mem_ev_id:
old_mem_ev = yield self.store.get_event(
old_mem_ev = await self.store.get_event(
old_mem_ev_id, allow_none=True
)
if old_mem_ev and old_mem_ev.membership == Membership.JOIN:
@ -1566,7 +1552,7 @@ class SyncHandler(object):
if leave_events:
leave_event = leave_events[-1]
leave_stream_token = yield self.store.get_stream_token_for_event(
leave_stream_token = await self.store.get_stream_token_for_event(
leave_event.event_id
)
leave_token = since_token.copy_and_replace(
@ -1603,7 +1589,7 @@ class SyncHandler(object):
timeline_limit = sync_config.filter_collection.timeline_limit()
# Get all events for rooms we're currently joined to.
room_to_events = yield self.store.get_room_events_stream_for_rooms(
room_to_events = await self.store.get_room_events_stream_for_rooms(
room_ids=sync_result_builder.joined_room_ids,
from_key=since_token.room_key,
to_key=now_token.room_key,
@ -1652,8 +1638,7 @@ class SyncHandler(object):
return room_entries, invited, newly_joined_rooms, newly_left_rooms
@defer.inlineCallbacks
def _get_all_rooms(self, sync_result_builder, ignored_users):
async def _get_all_rooms(self, sync_result_builder, ignored_users):
"""Returns entries for all rooms for the user.
Args:
@ -1677,7 +1662,7 @@ class SyncHandler(object):
Membership.BAN,
)
room_list = yield self.store.get_rooms_for_user_where_membership_is(
room_list = await self.store.get_rooms_for_user_where_membership_is(
user_id=user_id, membership_list=membership_list
)
@ -1700,7 +1685,7 @@ class SyncHandler(object):
elif event.membership == Membership.INVITE:
if event.sender in ignored_users:
continue
invite = yield self.store.get_event(event.event_id)
invite = await self.store.get_event(event.event_id)
invited.append(InvitedSyncResult(room_id=event.room_id, invite=invite))
elif event.membership in (Membership.LEAVE, Membership.BAN):
# Always send down rooms we were banned or kicked from.
@ -1726,8 +1711,7 @@ class SyncHandler(object):
return room_entries, invited, []
@defer.inlineCallbacks
def _generate_room_entry(
async def _generate_room_entry(
self,
sync_result_builder,
ignored_users,
@ -1769,7 +1753,7 @@ class SyncHandler(object):
since_token = room_builder.since_token
upto_token = room_builder.upto_token
batch = yield self._load_filtered_recents(
batch = await self._load_filtered_recents(
room_id,
sync_config,
now_token=upto_token,
@ -1796,7 +1780,7 @@ class SyncHandler(object):
# tag was added by synapse e.g. for server notice rooms.
if full_state:
user_id = sync_result_builder.sync_config.user.to_string()
tags = yield self.store.get_tags_for_room(user_id, room_id)
tags = await self.store.get_tags_for_room(user_id, room_id)
# If there aren't any tags, don't send the empty tags list down
# sync
@ -1821,7 +1805,7 @@ class SyncHandler(object):
):
return
state = yield self.compute_state_delta(
state = await self.compute_state_delta(
room_id, batch, sync_config, since_token, now_token, full_state=full_state
)
@ -1844,7 +1828,7 @@ class SyncHandler(object):
)
or since_token is None
):
summary = yield self.compute_summary(
summary = await self.compute_summary(
room_id, sync_config, batch, state, now_token
)
@ -1861,7 +1845,7 @@ class SyncHandler(object):
)
if room_sync or always_include:
notifs = yield self.unread_notifs_for_room_id(room_id, sync_config)
notifs = await self.unread_notifs_for_room_id(room_id, sync_config)
if notifs is not None:
unread_notifications["notification_count"] = notifs["notify_count"]
@ -1887,8 +1871,7 @@ class SyncHandler(object):
else:
raise Exception("Unrecognized rtype: %r", room_builder.rtype)
@defer.inlineCallbacks
def get_rooms_for_user_at(self, user_id, stream_ordering):
async def get_rooms_for_user_at(self, user_id, stream_ordering):
"""Get set of joined rooms for a user at the given stream ordering.
The stream ordering *must* be recent, otherwise this may throw an
@ -1903,7 +1886,7 @@ class SyncHandler(object):
Deferred[frozenset[str]]: Set of room_ids the user is in at given
stream_ordering.
"""
joined_rooms = yield self.store.get_rooms_for_user_with_stream_ordering(user_id)
joined_rooms = await self.store.get_rooms_for_user_with_stream_ordering(user_id)
joined_room_ids = set()
@ -1921,10 +1904,10 @@ class SyncHandler(object):
logger.info("User joined room after current token: %s", room_id)
extrems = yield self.store.get_forward_extremeties_for_room(
extrems = await self.store.get_forward_extremeties_for_room(
room_id, stream_ordering
)
users_in_room = yield self.state.get_current_users_in_room(room_id, extrems)
users_in_room = await self.state.get_current_users_in_room(room_id, extrems)
if user_id in users_in_room:
joined_room_ids.add(room_id)
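
The hunks above all apply the same mechanical conversion: each `@defer.inlineCallbacks` generator becomes a native coroutine, each `yield deferred` becomes `await deferred`, and callers are converted in turn. A minimal, self-contained sketch of the pattern (illustrative only, not Synapse code; `FakeStore` is a stand-in):

```python
from twisted.internet import defer, task


class FakeStore(object):
    def get_rows(self):
        # Deferreds are awaitable, so the same store serves both styles.
        return defer.succeed([1, 2, 3])


@defer.inlineCallbacks
def count_rows_old(store):
    # Old style: a generator driven by @defer.inlineCallbacks.
    rows = yield store.get_rows()
    return len(rows)


async def count_rows_new(store):
    # New style: a native coroutine; Twisted drives it via ensureDeferred.
    rows = await store.get_rows()
    return len(rows)


def main(reactor):
    d = defer.ensureDeferred(count_rows_new(FakeStore()))
    d.addCallback(print)  # prints 3
    return d


if __name__ == "__main__":
    task.react(main)
```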


@ -313,7 +313,7 @@ class TypingNotificationEventSource(object):
events.append(self._make_event_for(room_id))
return events, handler._latest_room_serial
return defer.succeed((events, handler._latest_room_serial))
def get_current_key(self):
return self.get_typing_handler()._latest_room_serial


@ -96,7 +96,7 @@ def parse_boolean_from_args(args, name, default=None, required=False):
return {b"true": True, b"false": False}[args[name][0]]
except Exception:
message = (
"Boolean query parameter %r must be one of" " ['true', 'false']"
"Boolean query parameter %r must be one of ['true', 'false']"
) % (name,)
raise SynapseError(400, message)
else:


@ -261,6 +261,18 @@ def parse_drain_configs(
)
class StoppableLogPublisher(LogPublisher):
"""
A log publisher that can tell its observers to shut down any external
communications.
"""
def stop(self):
for obs in self._observers:
if hasattr(obs, "stop"):
obs.stop()
def setup_structured_logging(
hs,
config,
@ -336,7 +348,7 @@ def setup_structured_logging(
# We should never get here, but, just in case, throw an error.
raise ConfigError("%s drain type cannot be configured" % (observer.type,))
publisher = LogPublisher(*observers)
publisher = StoppableLogPublisher(*observers)
log_filter = LogLevelFilterPredicate()
for namespace, namespace_config in log_config.get(


@ -17,25 +17,29 @@
Log formatters that output terse JSON.
"""
import json
import sys
import traceback
from collections import deque
from ipaddress import IPv4Address, IPv6Address, ip_address
from math import floor
from typing import IO
from typing import IO, Optional
import attr
from simplejson import dumps
from zope.interface import implementer
from twisted.application.internet import ClientService
from twisted.internet.defer import Deferred
from twisted.internet.endpoints import (
HostnameEndpoint,
TCP4ClientEndpoint,
TCP6ClientEndpoint,
)
from twisted.internet.interfaces import IPushProducer, ITransport
from twisted.internet.protocol import Factory, Protocol
from twisted.logger import FileLogObserver, ILogObserver, Logger
from twisted.python.failure import Failure
_encoder = json.JSONEncoder(ensure_ascii=False, separators=(",", ":"))
def flatten_event(event: dict, metadata: dict, include_time: bool = False):
@ -141,11 +145,49 @@ def TerseJSONToConsoleLogObserver(outFile: IO[str], metadata: dict) -> FileLogOb
def formatEvent(_event: dict) -> str:
flattened = flatten_event(_event, metadata)
return dumps(flattened, ensure_ascii=False, separators=(",", ":")) + "\n"
return _encoder.encode(flattened) + "\n"
return FileLogObserver(outFile, formatEvent)
@attr.s
@implementer(IPushProducer)
class LogProducer(object):
"""
An IPushProducer that writes logs from its buffer to its transport when it
is resumed.
Args:
buffer: Log buffer to read logs from.
transport: Transport to write to.
"""
transport = attr.ib(type=ITransport)
_buffer = attr.ib(type=deque)
_paused = attr.ib(default=False, type=bool, init=False)
def pauseProducing(self):
self._paused = True
def stopProducing(self):
self._paused = True
self._buffer = None
def resumeProducing(self):
self._paused = False
while self._paused is False and (self._buffer and self.transport.connected):
try:
event = self._buffer.popleft()
self.transport.write(_encoder.encode(event).encode("utf8"))
self.transport.write(b"\n")
except Exception:
# Something has gone wrong writing to the transport -- log it
# and break out of the while.
traceback.print_exc(file=sys.__stderr__)
break
@attr.s
@implementer(ILogObserver)
class TerseJSONToTCPLogObserver(object):
@ -165,8 +207,9 @@ class TerseJSONToTCPLogObserver(object):
metadata = attr.ib(type=dict)
maximum_buffer = attr.ib(type=int)
_buffer = attr.ib(default=attr.Factory(deque), type=deque)
_writer = attr.ib(default=None)
_connection_waiter = attr.ib(default=None, type=Optional[Deferred])
_logger = attr.ib(default=attr.Factory(Logger))
_producer = attr.ib(default=None, type=Optional[LogProducer])
def start(self) -> None:
@ -187,38 +230,44 @@ class TerseJSONToTCPLogObserver(object):
factory = Factory.forProtocol(Protocol)
self._service = ClientService(endpoint, factory, clock=self.hs.get_reactor())
self._service.startService()
self._connect()
def _write_loop(self) -> None:
def stop(self):
self._service.stopService()
def _connect(self) -> None:
"""
Implement the write loop.
Triggers an attempt to connect, then write to the remote, if not already writing.
"""
if self._writer:
if self._connection_waiter:
return
self._writer = self._service.whenConnected()
self._connection_waiter = self._service.whenConnected(failAfterFailures=1)
@self._writer.addBoth
def writer(r):
if isinstance(r, Failure):
@self._connection_waiter.addErrback
def fail(r):
r.printTraceback(file=sys.__stderr__)
self._writer = None
self.hs.get_reactor().callLater(1, self._write_loop)
self._connection_waiter = None
self._connect()
@self._connection_waiter.addCallback
def writer(r):
# We have a connection. If we already have a producer, and its
# transport is the same, just trigger a resumeProducing.
if self._producer and r.transport is self._producer.transport:
self._producer.resumeProducing()
self._connection_waiter = None
return
try:
for event in self._buffer:
r.transport.write(
dumps(event, ensure_ascii=False, separators=(",", ":")).encode(
"utf8"
)
)
r.transport.write(b"\n")
self._buffer.clear()
except Exception as e:
sys.__stderr__.write("Failed writing out logs with %s\n" % (str(e),))
# If the producer is still producing, stop it.
if self._producer:
self._producer.stopProducing()
self._writer = False
self.hs.get_reactor().callLater(1, self._write_loop)
# Make a new producer and start it.
self._producer = LogProducer(buffer=self._buffer, transport=r.transport)
r.transport.registerProducer(self._producer, True)
self._producer.resumeProducing()
self._connection_waiter = None
def _handle_pressure(self) -> None:
"""
@ -277,4 +326,4 @@ class TerseJSONToTCPLogObserver(object):
self._logger.failure("Failed clearing backpressure")
# Try and write immediately.
self._write_loop()
self._connect()
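
The new `LogProducer` leans on Twisted's producer/consumer contract: the transport pauses a registered `IPushProducer` when its buffers fill and resumes it as they drain, so pending log events queue in the deque rather than overwhelming the socket, while `ClientService.whenConnected(failAfterFailures=1)` handles reconnect-and-retry. A toy illustration of the producer contract (assumptions: `RecordingTransport` is a stand-in, no real network involved):

```python
from collections import deque

from twisted.internet.interfaces import IPushProducer
from zope.interface import implementer


@implementer(IPushProducer)
class DequeDrainingProducer(object):
    def __init__(self, buffer, transport):
        self._buffer = buffer
        self.transport = transport
        self._paused = True

    def pauseProducing(self):
        # Called by the transport when its write buffers fill up.
        self._paused = True

    def stopProducing(self):
        self._paused = True
        self._buffer = None

    def resumeProducing(self):
        # Drain the buffer until paused again or it runs dry.
        self._paused = False
        while not self._paused and self._buffer:
            self.transport.write(self._buffer.popleft())


class RecordingTransport(object):
    """Stands in for a real transport; just records writes."""

    def __init__(self):
        self.written = []

    def write(self, data):
        self.written.append(data)


buf = deque([b"one\n", b"two\n"])
transport = RecordingTransport()
producer = DequeDrainingProducer(buf, transport)
producer.resumeProducing()
assert transport.written == [b"one\n", b"two\n"]
```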


@ -175,4 +175,4 @@ class ModuleApi(object):
Returns:
Deferred[object]: result of func
"""
return self._store.runInteraction(desc, func, *args, **kwargs)
return self._store.db.runInteraction(desc, func, *args, **kwargs)
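
The method shown here is `ModuleApi.run_db_interaction`, which module authors call with a function that receives a database transaction. A hedged usage sketch (the counting query is illustrative):

```python
# `txn` behaves like a DB-API cursor (execute/fetchone and friends).
def _count_users_txn(txn):
    txn.execute("SELECT COUNT(*) FROM users")
    (count,) = txn.fetchone()
    return count


# Inside a module holding a ModuleApi instance `api`; per the docstring
# above, the call returns a Deferred with the function's result.
def count_users(api):
    return api.run_db_interaction("count_users", _count_users_txn)
```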


@ -304,8 +304,7 @@ class Notifier(object):
without waking up any of the normal user event streams"""
self.notify_replication()
@defer.inlineCallbacks
def wait_for_events(
async def wait_for_events(
self, user_id, timeout, callback, room_ids=None, from_token=StreamToken.START
):
"""Wait until the callback returns a non empty response or the
@ -313,9 +312,9 @@ class Notifier(object):
"""
user_stream = self.user_to_user_stream.get(user_id)
if user_stream is None:
current_token = yield self.event_sources.get_current_token()
current_token = await self.event_sources.get_current_token()
if room_ids is None:
room_ids = yield self.store.get_rooms_for_user(user_id)
room_ids = await self.store.get_rooms_for_user(user_id)
user_stream = _NotifierUserStream(
user_id=user_id,
rooms=room_ids,
@ -344,11 +343,11 @@ class Notifier(object):
self.hs.get_reactor(),
)
with PreserveLoggingContext():
yield listener.deferred
await listener.deferred
current_token = user_stream.current_token
result = yield callback(prev_token, current_token)
result = await callback(prev_token, current_token)
if result:
break
@ -364,12 +363,11 @@ class Notifier(object):
# This happened if there was no timeout or if the timeout had
# already expired.
current_token = user_stream.current_token
result = yield callback(prev_token, current_token)
result = await callback(prev_token, current_token)
return result
@defer.inlineCallbacks
def get_events_for(
async def get_events_for(
self,
user,
pagination_config,
@ -391,15 +389,14 @@ class Notifier(object):
"""
from_token = pagination_config.from_token
if not from_token:
from_token = yield self.event_sources.get_current_token()
from_token = await self.event_sources.get_current_token()
limit = pagination_config.limit
room_ids, is_joined = yield self._get_room_ids(user, explicit_room_id)
room_ids, is_joined = await self._get_room_ids(user, explicit_room_id)
is_peeking = not is_joined
@defer.inlineCallbacks
def check_for_updates(before_token, after_token):
async def check_for_updates(before_token, after_token):
if not after_token.is_after(before_token):
return EventStreamResult([], (from_token, from_token))
@ -415,7 +412,7 @@ class Notifier(object):
if only_keys and name not in only_keys:
continue
new_events, new_key = yield source.get_new_events(
new_events, new_key = await source.get_new_events(
user=user,
from_key=getattr(from_token, keyname),
limit=limit,
@ -425,7 +422,7 @@ class Notifier(object):
)
if name == "room":
new_events = yield filter_events_for_client(
new_events = await filter_events_for_client(
self.storage,
user.to_string(),
new_events,
@ -461,7 +458,7 @@ class Notifier(object):
user_id_for_stream,
)
result = yield self.wait_for_events(
result = await self.wait_for_events(
user_id_for_stream,
timeout,
check_for_updates,


@ -386,15 +386,7 @@ class RulesForRoom(object):
"""
sequence = self.sequence
rows = yield self.store._simple_select_many_batch(
table="room_memberships",
column="event_id",
iterable=member_event_ids.values(),
retcols=("user_id", "membership", "event_id"),
keyvalues={},
batch_size=500,
desc="_get_rules_for_member_event_ids",
)
rows = yield self.store.get_membership_from_event_ids(member_event_ids.values())
members = {row["event_id"]: (row["user_id"], row["membership"]) for row in rows}


@ -246,7 +246,7 @@ class HttpPusher(object):
# fixed, we don't suddenly deliver a load
# of old notifications.
logger.warning(
"Giving up on a notification to user %s, " "pushkey %s",
"Giving up on a notification to user %s, pushkey %s",
self.user_id,
self.pushkey,
)
@ -299,8 +299,7 @@ class HttpPusher(object):
# for sanity, we only remove the pushkey if it
# was the one we actually sent...
logger.warning(
("Ignoring rejected pushkey %s because we" " didn't send it"),
pk,
("Ignoring rejected pushkey %s because we didn't send it"), pk,
)
else:
logger.info("Pushkey %s was rejected: removing", pk)


@ -43,7 +43,7 @@ logger = logging.getLogger(__name__)
MESSAGE_FROM_PERSON_IN_ROOM = (
"You have a message on %(app)s from %(person)s " "in the %(room)s room..."
"You have a message on %(app)s from %(person)s in the %(room)s room..."
)
MESSAGE_FROM_PERSON = "You have a message on %(app)s from %(person)s..."
MESSAGES_FROM_PERSON = "You have messages on %(app)s from %(person)s..."
@ -55,7 +55,7 @@ MESSAGES_FROM_PERSON_AND_OTHERS = (
"You have messages on %(app)s from %(person)s and others..."
)
INVITE_FROM_PERSON_TO_ROOM = (
"%(person)s has invited you to join the " "%(room)s room on %(app)s..."
"%(person)s has invited you to join the %(room)s room on %(app)s..."
)
INVITE_FROM_PERSON = "%(person)s has invited you to chat on %(app)s..."


@ -14,7 +14,14 @@
# limitations under the License.
from synapse.http.server import JsonResource
from synapse.replication.http import federation, login, membership, register, send_event
from synapse.replication.http import (
devices,
federation,
login,
membership,
register,
send_event,
)
REPLICATION_PREFIX = "/_synapse/replication"
@ -30,3 +37,4 @@ class ReplicationRestResource(JsonResource):
federation.register_servlets(hs, self)
login.register_servlets(hs, self)
register.register_servlets(hs, self)
devices.register_servlets(hs, self)


@ -0,0 +1,73 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from synapse.replication.http._base import ReplicationEndpoint
logger = logging.getLogger(__name__)
class ReplicationUserDevicesResyncRestServlet(ReplicationEndpoint):
"""Ask master to resync the device list for a user by contacting their
server.
This must happen on master so that the results can be correctly cached in
the database and streamed to workers.
Request format:
POST /_synapse/replication/user_device_resync/:user_id
{}
Response is equivalent to `/_matrix/federation/v1/user/devices/:user_id`
response, e.g.:
{
"user_id": "@alice:example.org",
"devices": [
{
"device_id": "JLAFKJWSCS",
"keys": { ... },
"device_display_name": "Alice's Mobile Phone"
}
]
}
"""
NAME = "user_device_resync"
PATH_ARGS = ("user_id",)
CACHE = False
def __init__(self, hs):
super(ReplicationUserDevicesResyncRestServlet, self).__init__(hs)
self.device_list_updater = hs.get_device_handler().device_list_updater
self.store = hs.get_datastore()
self.clock = hs.get_clock()
@staticmethod
def _serialize_payload(user_id):
return {}
async def _handle_request(self, request, user_id):
user_devices = await self.device_list_updater.user_device_resync(user_id)
return 200, user_devices
def register_servlets(hs, http_server):
ReplicationUserDevicesResyncRestServlet(hs).register(http_server)
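
On the worker side, a caller for this endpoint is typically built with `ReplicationEndpoint.make_client`, which turns the servlet definition into an HTTP client pointed at the master. A rough sketch of the wiring (`WorkerDeviceListUpdater` is illustrative, not Synapse's actual caller):

```python
from synapse.replication.http.devices import (
    ReplicationUserDevicesResyncRestServlet,
)


class WorkerDeviceListUpdater(object):
    """Illustrative worker-side caller."""

    def __init__(self, hs):
        # make_client() builds a callable that POSTs to
        # /_synapse/replication/user_device_resync/:user_id on the master.
        self._resync_client = ReplicationUserDevicesResyncRestServlet.make_client(hs)

    async def user_device_resync(self, user_id):
        # The master performs the resync so the result lands in its cache
        # and is streamed out to workers.
        return await self._resync_client(user_id=user_id)
```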


@ -93,6 +93,7 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
{
"requester": ...,
"remote_room_hosts": [...],
"content": { ... }
}
"""
@ -107,7 +108,7 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
self.clock = hs.get_clock()
@staticmethod
def _serialize_payload(requester, room_id, user_id, remote_room_hosts):
def _serialize_payload(requester, room_id, user_id, remote_room_hosts, content):
"""
Args:
requester(Requester)
@ -118,12 +119,14 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
return {
"requester": requester.serialize(),
"remote_room_hosts": remote_room_hosts,
"content": content,
}
async def _handle_request(self, request, room_id, user_id):
content = parse_json_object_from_request(request)
remote_room_hosts = content["remote_room_hosts"]
event_content = content["content"]
requester = Requester.deserialize(self.store, content["requester"])
@ -134,7 +137,7 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
try:
event = await self.federation_handler.do_remotely_reject_invite(
remote_room_hosts, room_id, user_id
remote_room_hosts, room_id, user_id, event_content,
)
ret = event.get_pdu_json()
except Exception as e:


@ -18,7 +18,9 @@ from typing import Dict
import six
from synapse.storage._base import _CURRENT_STATE_CACHE_NAME, SQLBaseStore
from synapse.storage._base import SQLBaseStore
from synapse.storage.data_stores.main.cache import CURRENT_STATE_CACHE_NAME
from synapse.storage.database import Database
from synapse.storage.engines import PostgresEngine
from ._slaved_id_tracker import SlavedIdTracker
@ -34,8 +36,8 @@ def __func__(inp):
class BaseSlavedStore(SQLBaseStore):
def __init__(self, db_conn, hs):
super(BaseSlavedStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(BaseSlavedStore, self).__init__(database, db_conn, hs)
if isinstance(self.database_engine, PostgresEngine):
self._cache_id_gen = SlavedIdTracker(
db_conn, "cache_invalidation_stream", "stream_id"
@ -62,7 +64,7 @@ class BaseSlavedStore(SQLBaseStore):
if stream_name == "caches":
self._cache_id_gen.advance(token)
for row in rows:
if row.cache_func == _CURRENT_STATE_CACHE_NAME:
if row.cache_func == CURRENT_STATE_CACHE_NAME:
room_id = row.keys[0]
members_changed = set(row.keys[1:])
self._invalidate_state_caches(room_id, members_changed)
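
The constructor change here repeats across every slaved store below: `__init__` gains an explicit `database: Database` parameter and threads it through `super().__init__` instead of each class re-deriving the database from `hs`. Sketched generically (class names are placeholders):

```python
class BaseStore(object):
    def __init__(self, database, db_conn, hs):
        # The Database handle is injected rather than derived from hs.
        self.database = database
        self.db_conn = db_conn
        self.hs = hs


class SlavedFooStore(BaseStore):
    def __init__(self, database, db_conn, hs):
        # Per-store setup that the parent constructor may rely on runs
        # first; the Database handle is then passed up the MRO unchanged.
        self._foo_id_gen = None
        super(SlavedFooStore, self).__init__(database, db_conn, hs)
```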


@ -18,15 +18,16 @@ from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.storage.data_stores.main.account_data import AccountDataWorkerStore
from synapse.storage.data_stores.main.tags import TagsWorkerStore
from synapse.storage.database import Database
class SlavedAccountDataStore(TagsWorkerStore, AccountDataWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
def __init__(self, database: Database, db_conn, hs):
self._account_data_id_gen = SlavedIdTracker(
db_conn, "account_data_max_stream_id", "stream_id"
)
super(SlavedAccountDataStore, self).__init__(db_conn, hs)
super(SlavedAccountDataStore, self).__init__(database, db_conn, hs)
def get_max_account_data_stream_id(self):
return self._account_data_id_gen.get_current_token()


@ -14,6 +14,7 @@
# limitations under the License.
from synapse.storage.data_stores.main.client_ips import LAST_SEEN_GRANULARITY
from synapse.storage.database import Database
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.caches.descriptors import Cache
@ -21,8 +22,8 @@ from ._base import BaseSlavedStore
class SlavedClientIpStore(BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedClientIpStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(SlavedClientIpStore, self).__init__(database, db_conn, hs)
self.client_ip_last_seen = Cache(
name="client_ip_last_seen", keylen=4, max_entries=50000 * CACHE_SIZE_FACTOR


@ -16,13 +16,14 @@
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.storage.data_stores.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.database import Database
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.caches.stream_change_cache import StreamChangeCache
class SlavedDeviceInboxStore(DeviceInboxWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedDeviceInboxStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(SlavedDeviceInboxStore, self).__init__(database, db_conn, hs)
self._device_inbox_id_gen = SlavedIdTracker(
db_conn, "device_max_stream_id", "stream_id"
)


@ -18,12 +18,13 @@ from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.replication.tcp.streams._base import DeviceListsStream, UserSignatureStream
from synapse.storage.data_stores.main.devices import DeviceWorkerStore
from synapse.storage.data_stores.main.end_to_end_keys import EndToEndKeyWorkerStore
from synapse.storage.database import Database
from synapse.util.caches.stream_change_cache import StreamChangeCache
class SlavedDeviceStore(EndToEndKeyWorkerStore, DeviceWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedDeviceStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(SlavedDeviceStore, self).__init__(database, db_conn, hs)
self.hs = hs


@ -31,6 +31,7 @@ from synapse.storage.data_stores.main.signatures import SignatureWorkerStore
from synapse.storage.data_stores.main.state import StateGroupWorkerStore
from synapse.storage.data_stores.main.stream import StreamWorkerStore
from synapse.storage.data_stores.main.user_erasure_store import UserErasureWorkerStore
from synapse.storage.database import Database
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
@ -59,13 +60,13 @@ class SlavedEventStore(
RelationsWorkerStore,
BaseSlavedStore,
):
def __init__(self, db_conn, hs):
def __init__(self, database: Database, db_conn, hs):
self._stream_id_gen = SlavedIdTracker(db_conn, "events", "stream_ordering")
self._backfill_id_gen = SlavedIdTracker(
db_conn, "events", "stream_ordering", step=-1
)
super(SlavedEventStore, self).__init__(db_conn, hs)
super(SlavedEventStore, self).__init__(database, db_conn, hs)
# Cached functions can't be accessed through a class instance so we need
# to reach inside the __dict__ to extract them.


@ -14,13 +14,14 @@
# limitations under the License.
from synapse.storage.data_stores.main.filtering import FilteringStore
from synapse.storage.database import Database
from ._base import BaseSlavedStore
class SlavedFilteringStore(BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedFilteringStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(SlavedFilteringStore, self).__init__(database, db_conn, hs)
# Filters are immutable so this cache doesn't need to be expired
get_user_filter = FilteringStore.__dict__["get_user_filter"]


@ -14,6 +14,7 @@
# limitations under the License.
from synapse.storage import DataStore
from synapse.storage.database import Database
from synapse.util.caches.stream_change_cache import StreamChangeCache
from ._base import BaseSlavedStore, __func__
@ -21,8 +22,8 @@ from ._slaved_id_tracker import SlavedIdTracker
class SlavedGroupServerStore(BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedGroupServerStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(SlavedGroupServerStore, self).__init__(database, db_conn, hs)
self.hs = hs


@ -15,6 +15,7 @@
from synapse.storage import DataStore
from synapse.storage.data_stores.main.presence import PresenceStore
from synapse.storage.database import Database
from synapse.util.caches.stream_change_cache import StreamChangeCache
from ._base import BaseSlavedStore, __func__
@ -22,8 +23,8 @@ from ._slaved_id_tracker import SlavedIdTracker
class SlavedPresenceStore(BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedPresenceStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(SlavedPresenceStore, self).__init__(database, db_conn, hs)
self._presence_id_gen = SlavedIdTracker(db_conn, "presence_stream", "stream_id")
self._presence_on_startup = self._get_active_presence(db_conn)


@ -15,17 +15,18 @@
# limitations under the License.
from synapse.storage.data_stores.main.push_rule import PushRulesWorkerStore
from synapse.storage.database import Database
from ._slaved_id_tracker import SlavedIdTracker
from .events import SlavedEventStore
class SlavedPushRuleStore(SlavedEventStore, PushRulesWorkerStore):
def __init__(self, db_conn, hs):
def __init__(self, database: Database, db_conn, hs):
self._push_rules_stream_id_gen = SlavedIdTracker(
db_conn, "push_rules_stream", "stream_id"
)
super(SlavedPushRuleStore, self).__init__(db_conn, hs)
super(SlavedPushRuleStore, self).__init__(database, db_conn, hs)
def get_push_rules_stream_token(self):
return (


@ -15,14 +15,15 @@
# limitations under the License.
from synapse.storage.data_stores.main.pusher import PusherWorkerStore
from synapse.storage.database import Database
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
class SlavedPusherStore(PusherWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedPusherStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(SlavedPusherStore, self).__init__(database, db_conn, hs)
self._pushers_id_gen = SlavedIdTracker(
db_conn, "pushers", "id", extra_tables=[("deleted_pushers", "stream_id")]
)


@ -15,6 +15,7 @@
# limitations under the License.
from synapse.storage.data_stores.main.receipts import ReceiptsWorkerStore
from synapse.storage.database import Database
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
@ -29,14 +30,14 @@ from ._slaved_id_tracker import SlavedIdTracker
class SlavedReceiptsStore(ReceiptsWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
def __init__(self, database: Database, db_conn, hs):
# We instantiate this first as the ReceiptsWorkerStore constructor
# needs to be able to call get_max_receipt_stream_id
self._receipts_id_gen = SlavedIdTracker(
db_conn, "receipts_linearized", "stream_id"
)
super(SlavedReceiptsStore, self).__init__(db_conn, hs)
super(SlavedReceiptsStore, self).__init__(database, db_conn, hs)
def get_max_receipt_stream_id(self):
return self._receipts_id_gen.get_current_token()


@ -14,14 +14,15 @@
# limitations under the License.
from synapse.storage.data_stores.main.room import RoomWorkerStore
from synapse.storage.database import Database
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
class RoomStore(RoomWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
super(RoomStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(RoomStore, self).__init__(database, db_conn, hs)
self._public_room_id_gen = SlavedIdTracker(
db_conn, "public_room_list_stream", "stream_id"
)


@ -88,8 +88,7 @@ TagAccountDataStreamRow = namedtuple(
"TagAccountDataStreamRow", ("user_id", "room_id", "data") # str # str # dict
)
AccountDataStreamRow = namedtuple(
"AccountDataStream",
("user_id", "room_id", "data_type", "data"), # str # str # str # dict
"AccountDataStream", ("user_id", "room_id", "data_type") # str # str # str
)
GroupsStreamRow = namedtuple(
"GroupsStreamRow",
@ -421,8 +420,8 @@ class AccountDataStream(Stream):
results = list(room_results)
results.extend(
(stream_id, user_id, None, account_data_type, content)
for stream_id, user_id, account_data_type, content in global_results
(stream_id, user_id, None, account_data_type)
for stream_id, user_id, account_data_type in global_results
)
return results


@ -34,12 +34,12 @@ from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
from synapse.rest.admin.users import (
AccountValidityRenewServlet,
DeactivateAccountRestServlet,
GetUsersPaginatedRestServlet,
ResetPasswordRestServlet,
SearchUsersRestServlet,
UserAdminServlet,
UserRegisterServlet,
UsersRestServlet,
UsersRestServletV2,
WhoisRestServlet,
)
from synapse.util.versionstring import get_version_string
@ -191,6 +191,7 @@ def register_servlets(hs, http_server):
SendServerNoticeServlet(hs).register(http_server)
VersionServlet(hs).register(http_server)
UserAdminServlet(hs).register(http_server)
UsersRestServletV2(hs).register(http_server)
def register_servlets_for_client_rest_resource(hs, http_server):
@ -201,7 +202,6 @@ def register_servlets_for_client_rest_resource(hs, http_server):
PurgeHistoryRestServlet(hs).register(http_server)
UsersRestServlet(hs).register(http_server)
ResetPasswordRestServlet(hs).register(http_server)
GetUsersPaginatedRestServlet(hs).register(http_server)
SearchUsersRestServlet(hs).register(http_server)
ShutdownRoomRestServlet(hs).register(http_server)
UserRegisterServlet(hs).register(http_server)


@ -25,6 +25,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.http.servlet import (
RestServlet,
assert_params_in_dict,
parse_boolean,
parse_integer,
parse_json_object_from_request,
parse_string,
@ -59,71 +60,45 @@ class UsersRestServlet(RestServlet):
return 200, ret
class GetUsersPaginatedRestServlet(RestServlet):
"""Get request to get specific number of users from Synapse.
This needs user to have administrator access in Synapse.
Example:
http://localhost:8008/_synapse/admin/v1/users_paginate/
@admin:user?access_token=admin_access_token&start=0&limit=10
Returns:
200 OK with json object {list[dict[str, Any]], count} or empty object.
"""
class UsersRestServletV2(RestServlet):
PATTERNS = (re.compile("^/_synapse/admin/v2/users$"),)
PATTERNS = historical_admin_path_patterns(
"/users_paginate/(?P<target_user_id>[^/]*)"
)
"""Get request to list all local users.
This requires the user to have administrator access in Synapse.
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
Returns:
200 OK with a list of users on success, otherwise an error.
The parameters `from` and `limit` are used for pagination.
By default, a `limit` of 100 is used.
The parameter `user_id` can be used to filter by user id.
The parameter `guests` can be used to exclude guest users.
The parameter `deactivated` can be used to include deactivated users.
"""
def __init__(self, hs):
self.store = hs.get_datastore()
self.hs = hs
self.auth = hs.get_auth()
self.handlers = hs.get_handlers()
self.admin_handler = hs.get_handlers().admin_handler
async def on_GET(self, request, target_user_id):
"""Get request to get specific number of users from Synapse.
This needs user to have administrator access in Synapse.
"""
async def on_GET(self, request):
await assert_requester_is_admin(self.auth, request)
target_user = UserID.from_string(target_user_id)
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
user_id = parse_string(request, "user_id", default=None)
guests = parse_boolean(request, "guests", default=True)
deactivated = parse_boolean(request, "deactivated", default=False)
if not self.hs.is_mine(target_user):
raise SynapseError(400, "Can only users a local user")
users = await self.admin_handler.get_users_paginate(
start, limit, user_id, guests, deactivated
)
ret = {"users": users}
if len(users) >= limit:
ret["next_token"] = str(start + len(users))
order = "name" # order by name in user table
start = parse_integer(request, "start", required=True)
limit = parse_integer(request, "limit", required=True)
logger.info("limit: %s, start: %s", limit, start)
ret = await self.handlers.admin_handler.get_users_paginate(order, start, limit)
return 200, ret
async def on_POST(self, request, target_user_id):
"""Post request to get specific number of users from Synapse..
This needs user to have administrator access in Synapse.
Example:
http://localhost:8008/_synapse/admin/v1/users_paginate/
@admin:user?access_token=admin_access_token
JsonBodyToSend:
{
"start": "0",
"limit": "10
}
Returns:
200 OK with json object {list[dict[str, Any]], count} or empty object.
"""
await assert_requester_is_admin(self.auth, request)
UserID.from_string(target_user_id)
order = "name" # order by name in user table
params = parse_json_object_from_request(request)
assert_params_in_dict(params, ["limit", "start"])
limit = params["limit"]
start = params["start"]
logger.info("limit: %s, start: %s", limit, start)
ret = await self.handlers.admin_handler.get_users_paginate(order, start, limit)
return 200, ret
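
For reference, a hedged example of calling the new v2 endpoint (the homeserver URL and admin token are placeholders; `requests` is used for brevity):

```python
import requests

resp = requests.get(
    "https://homeserver.example.com/_synapse/admin/v2/users",
    headers={"Authorization": "Bearer <admin_access_token>"},
    params={"from": 0, "limit": 10, "guests": "false"},
)
resp.raise_for_status()
body = resp.json()
for user in body["users"]:
    print(user)
# next_token is only present when a further page may exist.
print(body.get("next_token"))
```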


@ -16,8 +16,6 @@
import logging
from twisted.internet import defer
from synapse.api.errors import (
AuthError,
Codes,
@ -47,17 +45,15 @@ class ClientDirectoryServer(RestServlet):
self.handlers = hs.get_handlers()
self.auth = hs.get_auth()
@defer.inlineCallbacks
def on_GET(self, request, room_alias):
async def on_GET(self, request, room_alias):
room_alias = RoomAlias.from_string(room_alias)
dir_handler = self.handlers.directory_handler
res = yield dir_handler.get_association(room_alias)
res = await dir_handler.get_association(room_alias)
return 200, res
@defer.inlineCallbacks
def on_PUT(self, request, room_alias):
async def on_PUT(self, request, room_alias):
room_alias = RoomAlias.from_string(room_alias)
content = parse_json_object_from_request(request)
@ -77,26 +73,25 @@ class ClientDirectoryServer(RestServlet):
# TODO(erikj): Check types.
room = yield self.store.get_room(room_id)
room = await self.store.get_room(room_id)
if room is None:
raise SynapseError(400, "Room does not exist")
requester = yield self.auth.get_user_by_req(request)
requester = await self.auth.get_user_by_req(request)
yield self.handlers.directory_handler.create_association(
await self.handlers.directory_handler.create_association(
requester, room_alias, room_id, servers
)
return 200, {}
@defer.inlineCallbacks
def on_DELETE(self, request, room_alias):
async def on_DELETE(self, request, room_alias):
dir_handler = self.handlers.directory_handler
try:
service = yield self.auth.get_appservice_by_req(request)
service = await self.auth.get_appservice_by_req(request)
room_alias = RoomAlias.from_string(room_alias)
yield dir_handler.delete_appservice_association(service, room_alias)
await dir_handler.delete_appservice_association(service, room_alias)
logger.info(
"Application service at %s deleted alias %s",
service.url,
@ -107,12 +102,12 @@ class ClientDirectoryServer(RestServlet):
# fallback to default user behaviour if they aren't an AS
pass
requester = yield self.auth.get_user_by_req(request)
requester = await self.auth.get_user_by_req(request)
user = requester.user
room_alias = RoomAlias.from_string(room_alias)
yield dir_handler.delete_association(requester, room_alias)
await dir_handler.delete_association(requester, room_alias)
logger.info(
"User %s deleted alias %s", user.to_string(), room_alias.to_string()
@ -130,32 +125,29 @@ class ClientDirectoryListServer(RestServlet):
self.handlers = hs.get_handlers()
self.auth = hs.get_auth()
@defer.inlineCallbacks
def on_GET(self, request, room_id):
room = yield self.store.get_room(room_id)
async def on_GET(self, request, room_id):
room = await self.store.get_room(room_id)
if room is None:
raise NotFoundError("Unknown room")
return 200, {"visibility": "public" if room["is_public"] else "private"}
@defer.inlineCallbacks
def on_PUT(self, request, room_id):
requester = yield self.auth.get_user_by_req(request)
async def on_PUT(self, request, room_id):
requester = await self.auth.get_user_by_req(request)
content = parse_json_object_from_request(request)
visibility = content.get("visibility", "public")
yield self.handlers.directory_handler.edit_published_room_list(
await self.handlers.directory_handler.edit_published_room_list(
requester, room_id, visibility
)
return 200, {}
@defer.inlineCallbacks
def on_DELETE(self, request, room_id):
requester = yield self.auth.get_user_by_req(request)
async def on_DELETE(self, request, room_id):
requester = await self.auth.get_user_by_req(request)
yield self.handlers.directory_handler.edit_published_room_list(
await self.handlers.directory_handler.edit_published_room_list(
requester, room_id, "private"
)
@ -181,15 +173,14 @@ class ClientAppserviceDirectoryListServer(RestServlet):
def on_DELETE(self, request, network_id, room_id):
return self._edit(request, network_id, room_id, "private")
@defer.inlineCallbacks
def _edit(self, request, network_id, room_id, visibility):
requester = yield self.auth.get_user_by_req(request)
async def _edit(self, request, network_id, room_id, visibility):
requester = await self.auth.get_user_by_req(request)
if not requester.app_service:
raise AuthError(
403, "Only appservices can edit the appservice published room list"
)
yield self.handlers.directory_handler.edit_published_appservice_room_list(
await self.handlers.directory_handler.edit_published_appservice_room_list(
requester.app_service.id, network_id, room_id, visibility
)


@ -16,8 +16,6 @@
"""This module contains REST servlets to do with event streaming, /events."""
import logging
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.http.servlet import RestServlet
from synapse.rest.client.v2_alpha._base import client_patterns
@ -36,9 +34,8 @@ class EventStreamRestServlet(RestServlet):
self.event_stream_handler = hs.get_event_stream_handler()
self.auth = hs.get_auth()
@defer.inlineCallbacks
def on_GET(self, request):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
async def on_GET(self, request):
requester = await self.auth.get_user_by_req(request, allow_guest=True)
is_guest = requester.is_guest
room_id = None
if is_guest:
@ -57,7 +54,7 @@ class EventStreamRestServlet(RestServlet):
as_client_event = b"raw" not in request.args
chunk = yield self.event_stream_handler.get_stream(
chunk = await self.event_stream_handler.get_stream(
requester.user.to_string(),
pagin_config,
timeout=timeout,
@ -83,14 +80,13 @@ class EventRestServlet(RestServlet):
self.event_handler = hs.get_event_handler()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
def on_GET(self, request, event_id):
requester = yield self.auth.get_user_by_req(request)
event = yield self.event_handler.get_event(requester.user, None, event_id)
async def on_GET(self, request, event_id):
requester = await self.auth.get_user_by_req(request)
event = await self.event_handler.get_event(requester.user, None, event_id)
time_now = self.clock.time_msec()
if event:
event = yield self._event_serializer.serialize_event(event, time_now)
event = await self._event_serializer.serialize_event(event, time_now)
return 200, event
else:
return 404, "Event not found."


@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.http.servlet import RestServlet, parse_boolean
from synapse.rest.client.v2_alpha._base import client_patterns
@ -29,13 +28,12 @@ class InitialSyncRestServlet(RestServlet):
self.initial_sync_handler = hs.get_initial_sync_handler()
self.auth = hs.get_auth()
@defer.inlineCallbacks
def on_GET(self, request):
requester = yield self.auth.get_user_by_req(request)
async def on_GET(self, request):
requester = await self.auth.get_user_by_req(request)
as_client_event = b"raw" not in request.args
pagination_config = PaginationConfig.from_request(request)
include_archived = parse_boolean(request, "archived", default=False)
content = yield self.initial_sync_handler.snapshot_all_rooms(
content = await self.initial_sync_handler.snapshot_all_rooms(
user_id=requester.user.to_string(),
pagin_config=pagination_config,
as_client_event=as_client_event,


@ -18,7 +18,6 @@ import xml.etree.ElementTree as ET
from six.moves import urllib
from twisted.internet import defer
from twisted.web.client import PartialDownloadError
from synapse.api.errors import Codes, LoginError, SynapseError
@ -130,8 +129,7 @@ class LoginRestServlet(RestServlet):
def on_OPTIONS(self, request):
return 200, {}
@defer.inlineCallbacks
def on_POST(self, request):
async def on_POST(self, request):
self._address_ratelimiter.ratelimit(
request.getClientIP(),
time_now_s=self.hs.clock.time(),
@ -145,11 +143,11 @@ class LoginRestServlet(RestServlet):
if self.jwt_enabled and (
login_submission["type"] == LoginRestServlet.JWT_TYPE
):
result = yield self.do_jwt_login(login_submission)
result = await self.do_jwt_login(login_submission)
elif login_submission["type"] == LoginRestServlet.TOKEN_TYPE:
result = yield self.do_token_login(login_submission)
result = await self.do_token_login(login_submission)
else:
result = yield self._do_other_login(login_submission)
result = await self._do_other_login(login_submission)
except KeyError:
raise SynapseError(400, "Missing JSON keys.")
@ -158,8 +156,7 @@ class LoginRestServlet(RestServlet):
result["well_known"] = well_known_data
return 200, result
@defer.inlineCallbacks
def _do_other_login(self, login_submission):
async def _do_other_login(self, login_submission):
"""Handle non-token/saml/jwt logins
Args:
@ -219,20 +216,20 @@ class LoginRestServlet(RestServlet):
(
canonical_user_id,
callback_3pid,
) = yield self.auth_handler.check_password_provider_3pid(
) = await self.auth_handler.check_password_provider_3pid(
medium, address, login_submission["password"]
)
if canonical_user_id:
# Authentication through password provider and 3pid succeeded
result = yield self._complete_login(
result = await self._complete_login(
canonical_user_id, login_submission, callback_3pid
)
return result
# No password providers were able to handle this 3pid
# Check local store
user_id = yield self.hs.get_datastore().get_user_id_by_threepid(
user_id = await self.hs.get_datastore().get_user_id_by_threepid(
medium, address
)
if not user_id:
@ -280,7 +277,7 @@ class LoginRestServlet(RestServlet):
)
try:
canonical_user_id, callback = yield self.auth_handler.validate_login(
canonical_user_id, callback = await self.auth_handler.validate_login(
identifier["user"], login_submission
)
except LoginError:
@ -297,13 +294,12 @@ class LoginRestServlet(RestServlet):
)
raise
result = yield self._complete_login(
result = await self._complete_login(
canonical_user_id, login_submission, callback
)
return result
@defer.inlineCallbacks
def _complete_login(
async def _complete_login(
self, user_id, login_submission, callback=None, create_non_existant_users=False
):
"""Called when we've successfully authed the user and now need to
@ -337,15 +333,15 @@ class LoginRestServlet(RestServlet):
)
if create_non_existant_users:
user_id = yield self.auth_handler.check_user_exists(user_id)
user_id = await self.auth_handler.check_user_exists(user_id)
if not user_id:
user_id = yield self.registration_handler.register_user(
user_id = await self.registration_handler.register_user(
localpart=UserID.from_string(user_id).localpart
)
device_id = login_submission.get("device_id")
initial_display_name = login_submission.get("initial_device_display_name")
device_id, access_token = yield self.registration_handler.register_device(
device_id, access_token = await self.registration_handler.register_device(
user_id, device_id, initial_display_name
)
@ -357,23 +353,21 @@ class LoginRestServlet(RestServlet):
}
if callback is not None:
yield callback(result)
await callback(result)
return result
@defer.inlineCallbacks
def do_token_login(self, login_submission):
async def do_token_login(self, login_submission):
token = login_submission["token"]
auth_handler = self.auth_handler
user_id = yield auth_handler.validate_short_term_login_token_and_get_user_id(
user_id = await auth_handler.validate_short_term_login_token_and_get_user_id(
token
)
result = yield self._complete_login(user_id, login_submission)
result = await self._complete_login(user_id, login_submission)
return result
@defer.inlineCallbacks
def do_jwt_login(self, login_submission):
async def do_jwt_login(self, login_submission):
token = login_submission.get("token", None)
if token is None:
raise LoginError(
@ -397,7 +391,7 @@ class LoginRestServlet(RestServlet):
raise LoginError(401, "Invalid JWT", errcode=Codes.UNAUTHORIZED)
user_id = UserID(user, self.hs.hostname).to_string()
result = yield self._complete_login(
result = await self._complete_login(
user_id, login_submission, create_non_existant_users=True
)
return result
@ -460,8 +454,7 @@ class CasTicketServlet(RestServlet):
self._sso_auth_handler = SSOAuthHandler(hs)
self._http_client = hs.get_proxied_http_client()
@defer.inlineCallbacks
def on_GET(self, request):
async def on_GET(self, request):
client_redirect_url = parse_string(request, "redirectUrl", required=True)
uri = self.cas_server_url + "/proxyValidate"
args = {
@ -469,12 +462,12 @@ class CasTicketServlet(RestServlet):
"service": self.cas_service_url,
}
try:
body = yield self._http_client.get_raw(uri, args)
body = await self._http_client.get_raw(uri, args)
except PartialDownloadError as pde:
# Twisted raises this error if the connection is closed,
# even if that's being used old-http style to signal end-of-data
body = pde.response
result = yield self.handle_cas_response(request, body, client_redirect_url)
result = await self.handle_cas_response(request, body, client_redirect_url)
return result
def handle_cas_response(self, request, cas_response_body, client_redirect_url):
@ -555,8 +548,7 @@ class SSOAuthHandler(object):
self._registration_handler = hs.get_registration_handler()
self._macaroon_gen = hs.get_macaroon_generator()
@defer.inlineCallbacks
def on_successful_auth(
async def on_successful_auth(
self, username, request, client_redirect_url, user_display_name=None
):
"""Called once the user has successfully authenticated with the SSO.
@ -582,9 +574,9 @@ class SSOAuthHandler(object):
"""
localpart = map_username_to_mxid_localpart(username)
user_id = UserID(localpart, self._hostname).to_string()
registered_user_id = yield self._auth_handler.check_user_exists(user_id)
registered_user_id = await self._auth_handler.check_user_exists(user_id)
if not registered_user_id:
registered_user_id = yield self._registration_handler.register_user(
registered_user_id = await self._registration_handler.register_user(
localpart=localpart, default_display_name=user_display_name
)


@ -15,8 +15,6 @@
import logging
from twisted.internet import defer
from synapse.http.servlet import RestServlet
from synapse.rest.client.v2_alpha._base import client_patterns
@ -35,17 +33,16 @@ class LogoutRestServlet(RestServlet):
def on_OPTIONS(self, request):
return 200, {}
@defer.inlineCallbacks
def on_POST(self, request):
requester = yield self.auth.get_user_by_req(request)
async def on_POST(self, request):
requester = await self.auth.get_user_by_req(request)
if requester.device_id is None:
# the access token wasn't associated with a device.
# Just delete the access token
access_token = self.auth.get_access_token_from_request(request)
yield self._auth_handler.delete_access_token(access_token)
await self._auth_handler.delete_access_token(access_token)
else:
yield self._device_handler.delete_device(
await self._device_handler.delete_device(
requester.user.to_string(), requester.device_id
)
@ -64,17 +61,16 @@ class LogoutAllRestServlet(RestServlet):
def on_OPTIONS(self, request):
return 200, {}
@defer.inlineCallbacks
def on_POST(self, request):
requester = yield self.auth.get_user_by_req(request)
async def on_POST(self, request):
requester = await self.auth.get_user_by_req(request)
user_id = requester.user.to_string()
# first delete all of the user's devices
yield self._device_handler.delete_all_devices_for_user(user_id)
await self._device_handler.delete_all_devices_for_user(user_id)
# .. and then delete any access tokens which weren't associated with
# devices.
yield self._auth_handler.delete_access_tokens_for_user(user_id)
await self._auth_handler.delete_access_tokens_for_user(user_id)
return 200, {}


@ -19,8 +19,6 @@ import logging
from six import string_types
from twisted.internet import defer
from synapse.api.errors import AuthError, SynapseError
from synapse.handlers.presence import format_user_presence_state
from synapse.http.servlet import RestServlet, parse_json_object_from_request
@ -40,27 +38,25 @@ class PresenceStatusRestServlet(RestServlet):
self.clock = hs.get_clock()
self.auth = hs.get_auth()
@defer.inlineCallbacks
def on_GET(self, request, user_id):
requester = yield self.auth.get_user_by_req(request)
async def on_GET(self, request, user_id):
requester = await self.auth.get_user_by_req(request)
user = UserID.from_string(user_id)
if requester.user != user:
allowed = yield self.presence_handler.is_visible(
allowed = await self.presence_handler.is_visible(
observed_user=user, observer_user=requester.user
)
if not allowed:
raise AuthError(403, "You are not allowed to see their presence.")
state = yield self.presence_handler.get_state(target_user=user)
state = await self.presence_handler.get_state(target_user=user)
state = format_user_presence_state(state, self.clock.time_msec())
return 200, state
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
requester = yield self.auth.get_user_by_req(request)
async def on_PUT(self, request, user_id):
requester = await self.auth.get_user_by_req(request)
user = UserID.from_string(user_id)
if requester.user != user:
@ -86,7 +82,7 @@ class PresenceStatusRestServlet(RestServlet):
raise SynapseError(400, "Unable to parse state")
if self.hs.config.use_presence:
yield self.presence_handler.set_state(user, state)
await self.presence_handler.set_state(user, state)
return 200, {}


@ -14,8 +14,8 @@
# limitations under the License.
""" This module contains REST servlets to do with profile: /profile/<paths> """
from twisted.internet import defer
from synapse.api.errors import Codes, SynapseError
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.rest.client.v2_alpha._base import client_patterns
from synapse.types import UserID
@ -30,19 +30,18 @@ class ProfileDisplaynameRestServlet(RestServlet):
self.profile_handler = hs.get_profile_handler()
self.auth = hs.get_auth()
@defer.inlineCallbacks
def on_GET(self, request, user_id):
async def on_GET(self, request, user_id):
requester_user = None
if self.hs.config.require_auth_for_profile_requests:
requester = yield self.auth.get_user_by_req(request)
requester = await self.auth.get_user_by_req(request)
requester_user = requester.user
user = UserID.from_string(user_id)
yield self.profile_handler.check_profile_query_allowed(user, requester_user)
await self.profile_handler.check_profile_query_allowed(user, requester_user)
displayname = yield self.profile_handler.get_displayname(user)
displayname = await self.profile_handler.get_displayname(user)
ret = {}
if displayname is not None:
@ -50,11 +49,10 @@ class ProfileDisplaynameRestServlet(RestServlet):
return 200, ret
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
async def on_PUT(self, request, user_id):
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user = UserID.from_string(user_id)
is_admin = yield self.auth.is_server_admin(requester.user)
is_admin = await self.auth.is_server_admin(requester.user)
content = parse_json_object_from_request(request)
@ -63,7 +61,7 @@ class ProfileDisplaynameRestServlet(RestServlet):
except Exception:
return 400, "Unable to parse name"
yield self.profile_handler.set_displayname(user, requester, new_name, is_admin)
await self.profile_handler.set_displayname(user, requester, new_name, is_admin)
return 200, {}
@ -80,19 +78,18 @@ class ProfileAvatarURLRestServlet(RestServlet):
self.profile_handler = hs.get_profile_handler()
self.auth = hs.get_auth()
@defer.inlineCallbacks
def on_GET(self, request, user_id):
async def on_GET(self, request, user_id):
requester_user = None
if self.hs.config.require_auth_for_profile_requests:
requester = yield self.auth.get_user_by_req(request)
requester = await self.auth.get_user_by_req(request)
requester_user = requester.user
user = UserID.from_string(user_id)
yield self.profile_handler.check_profile_query_allowed(user, requester_user)
await self.profile_handler.check_profile_query_allowed(user, requester_user)
avatar_url = yield self.profile_handler.get_avatar_url(user)
avatar_url = await self.profile_handler.get_avatar_url(user)
ret = {}
if avatar_url is not None:
@ -100,19 +97,22 @@ class ProfileAvatarURLRestServlet(RestServlet):
return 200, ret
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
requester = yield self.auth.get_user_by_req(request)
async def on_PUT(self, request, user_id):
requester = await self.auth.get_user_by_req(request)
user = UserID.from_string(user_id)
is_admin = yield self.auth.is_server_admin(requester.user)
is_admin = await self.auth.is_server_admin(requester.user)
content = parse_json_object_from_request(request)
try:
new_name = content["avatar_url"]
except Exception:
return 400, "Unable to parse name"
new_avatar_url = content["avatar_url"]
except KeyError:
raise SynapseError(
400, "Missing key 'avatar_url'", errcode=Codes.MISSING_PARAM
)
yield self.profile_handler.set_avatar_url(user, requester, new_name, is_admin)
await self.profile_handler.set_avatar_url(
user, requester, new_avatar_url, is_admin
)
return 200, {}
@ -129,20 +129,19 @@ class ProfileRestServlet(RestServlet):
self.profile_handler = hs.get_profile_handler()
self.auth = hs.get_auth()
@defer.inlineCallbacks
def on_GET(self, request, user_id):
async def on_GET(self, request, user_id):
requester_user = None
if self.hs.config.require_auth_for_profile_requests:
requester = yield self.auth.get_user_by_req(request)
requester = await self.auth.get_user_by_req(request)
requester_user = requester.user
user = UserID.from_string(user_id)
yield self.profile_handler.check_profile_query_allowed(user, requester_user)
await self.profile_handler.check_profile_query_allowed(user, requester_user)
displayname = yield self.profile_handler.get_displayname(user)
avatar_url = yield self.profile_handler.get_avatar_url(user)
displayname = await self.profile_handler.get_displayname(user)
avatar_url = await self.profile_handler.get_avatar_url(user)
ret = {}
if displayname is not None:


--- a/synapse/rest/client/v1/push_rule.py
+++ b/synapse/rest/client/v1/push_rule.py
@@ -13,7 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from twisted.internet import defer
-
 from synapse.api.errors import (
     NotFoundError,
@@ -46,8 +45,7 @@ class PushRuleRestServlet(RestServlet):
         self.notifier = hs.get_notifier()
         self._is_worker = hs.config.worker_app is not None

-    @defer.inlineCallbacks
-    def on_PUT(self, request, path):
+    async def on_PUT(self, request, path):
         if self._is_worker:
             raise Exception("Cannot handle PUT /push_rules on worker")
@@ -57,7 +55,7 @@ class PushRuleRestServlet(RestServlet):
         except InvalidRuleException as e:
             raise SynapseError(400, str(e))

-        requester = yield self.auth.get_user_by_req(request)
+        requester = await self.auth.get_user_by_req(request)

         if "/" in spec["rule_id"] or "\\" in spec["rule_id"]:
             raise SynapseError(400, "rule_id may not contain slashes")
@@ -67,7 +65,7 @@ class PushRuleRestServlet(RestServlet):
         user_id = requester.user.to_string()

         if "attr" in spec:
-            yield self.set_rule_attr(user_id, spec, content)
+            await self.set_rule_attr(user_id, spec, content)
             self.notify_user(user_id)
             return 200, {}
@@ -91,7 +89,7 @@ class PushRuleRestServlet(RestServlet):
             after = _namespaced_rule_id(spec, after)

         try:
-            yield self.store.add_push_rule(
+            await self.store.add_push_rule(
                 user_id=user_id,
                 rule_id=_namespaced_rule_id_from_spec(spec),
                 priority_class=priority_class,
@@ -108,20 +106,19 @@ class PushRuleRestServlet(RestServlet):
         return 200, {}

-    @defer.inlineCallbacks
-    def on_DELETE(self, request, path):
+    async def on_DELETE(self, request, path):
         if self._is_worker:
             raise Exception("Cannot handle DELETE /push_rules on worker")

         spec = _rule_spec_from_path([x for x in path.split("/")])

-        requester = yield self.auth.get_user_by_req(request)
+        requester = await self.auth.get_user_by_req(request)
         user_id = requester.user.to_string()

         namespaced_rule_id = _namespaced_rule_id_from_spec(spec)

         try:
-            yield self.store.delete_push_rule(user_id, namespaced_rule_id)
+            await self.store.delete_push_rule(user_id, namespaced_rule_id)
             self.notify_user(user_id)
             return 200, {}
         except StoreError as e:
@@ -130,15 +127,14 @@ class PushRuleRestServlet(RestServlet):
         else:
             raise

-    @defer.inlineCallbacks
-    def on_GET(self, request, path):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_GET(self, request, path):
+        requester = await self.auth.get_user_by_req(request)
         user_id = requester.user.to_string()

         # we build up the full structure and then decide which bits of it
         # to send which means doing unnecessary work sometimes but is
         # is probably not going to make a whole lot of difference
-        rules = yield self.store.get_push_rules_for_user(user_id)
+        rules = await self.store.get_push_rules_for_user(user_id)

         rules = format_push_rules_for_user(requester.user, rules)
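The storage methods awaited above (`get_push_rules_for_user`, `add_push_rule`, `delete_push_rule`) still return Twisted `Deferred`s at this stage of the port. That is fine: a `Deferred` implements `__await__` (Twisted 16.4+), so native coroutines can await it directly. A small sketch with a stand-in for the store:

```python
from twisted.internet import defer


def get_push_rules_for_user(user_id):
    """Stand-in for the storage method: it still returns a Deferred,
    not a coroutine, at this stage of the port."""
    return defer.succeed([{"rule_id": ".m.rule.master", "enabled": True}])


async def on_GET(user_id):
    # A Deferred is awaitable, so an async handler can call
    # Deferred-returning code without any adapter at the call site.
    rules = await get_push_rules_for_user(user_id)
    return 200, {"global": rules}
```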


--- a/synapse/rest/client/v1/pusher.py
+++ b/synapse/rest/client/v1/pusher.py
@@ -15,8 +15,6 @@
 import logging

-from twisted.internet import defer
-
 from synapse.api.errors import Codes, StoreError, SynapseError
 from synapse.http.server import finish_request
 from synapse.http.servlet import (
@@ -39,12 +37,11 @@ class PushersRestServlet(RestServlet):
         self.hs = hs
         self.auth = hs.get_auth()

-    @defer.inlineCallbacks
-    def on_GET(self, request):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_GET(self, request):
+        requester = await self.auth.get_user_by_req(request)

         user = requester.user

-        pushers = yield self.hs.get_datastore().get_pushers_by_user_id(user.to_string())
+        pushers = await self.hs.get_datastore().get_pushers_by_user_id(user.to_string())

         allowed_keys = [
             "app_display_name",
@@ -78,9 +75,8 @@ class PushersSetRestServlet(RestServlet):
         self.notifier = hs.get_notifier()
         self.pusher_pool = self.hs.get_pusherpool()

-    @defer.inlineCallbacks
-    def on_POST(self, request):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_POST(self, request):
+        requester = await self.auth.get_user_by_req(request)

         user = requester.user

         content = parse_json_object_from_request(request)
@@ -91,7 +87,7 @@ class PushersSetRestServlet(RestServlet):
             and "kind" in content
             and content["kind"] is None
         ):
-            yield self.pusher_pool.remove_pusher(
+            await self.pusher_pool.remove_pusher(
                 content["app_id"], content["pushkey"], user_id=user.to_string()
             )
             return 200, {}
@@ -117,14 +113,14 @@ class PushersSetRestServlet(RestServlet):
             append = content["append"]

         if not append:
-            yield self.pusher_pool.remove_pushers_by_app_id_and_pushkey_not_user(
+            await self.pusher_pool.remove_pushers_by_app_id_and_pushkey_not_user(
                 app_id=content["app_id"],
                 pushkey=content["pushkey"],
                 not_user_id=user.to_string(),
             )

         try:
-            yield self.pusher_pool.add_pusher(
+            await self.pusher_pool.add_pusher(
                 user_id=user.to_string(),
                 access_token=requester.access_token_id,
                 kind=content["kind"],
@@ -164,16 +160,15 @@ class PushersRemoveRestServlet(RestServlet):
         self.auth = hs.get_auth()
         self.pusher_pool = self.hs.get_pusherpool()

-    @defer.inlineCallbacks
-    def on_GET(self, request):
-        requester = yield self.auth.get_user_by_req(request, rights="delete_pusher")
+    async def on_GET(self, request):
+        requester = await self.auth.get_user_by_req(request, rights="delete_pusher")
         user = requester.user

         app_id = parse_string(request, "app_id", required=True)
         pushkey = parse_string(request, "pushkey", required=True)

         try:
-            yield self.pusher_pool.remove_pusher(
+            await self.pusher_pool.remove_pusher(
                 app_id=app_id, pushkey=pushkey, user_id=user.to_string()
             )
         except StoreError as se:


--- a/synapse/rest/client/v1/room.py
+++ b/synapse/rest/client/v1/room.py
@@ -714,7 +714,7 @@ class RoomMembershipRestServlet(TransactionRestServlet):
             target = UserID.from_string(content["user_id"])

         event_content = None
-        if "reason" in content and membership_action in ["kick", "ban"]:
+        if "reason" in content:
             event_content = {"reason": content["reason"]}

         await self.room_member_handler.update_membership(
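This one-line change, unlike the rest of the diff, alters behaviour: it implements the server side of [MSC 2367](https://github.com/matrix-org/matrix-doc/pull/2367), copying a client-supplied `reason` into the membership event for every membership action rather than only `kick` and `ban`. A sketch of the gating logic, with a hypothetical helper name:

```python
def event_content_for(content, membership_action):
    """Sketch of the gating change (hypothetical helper, for illustration).

    Old behaviour: the reason was only kept for kick and ban:
        if "reason" in content and membership_action in ["kick", "ban"]: ...
    New behaviour (MSC 2367): any membership action may carry a reason.
    """
    if "reason" in content:
        return {"reason": content["reason"]}
    return None


# A reason on an invite is now preserved, not silently dropped:
assert event_content_for({"reason": "joining the team"}, "invite") == {
    "reason": "joining the team"
}
```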


--- a/synapse/rest/client/v1/voip.py
+++ b/synapse/rest/client/v1/voip.py
@@ -17,8 +17,6 @@ import base64
 import hashlib
 import hmac

-from twisted.internet import defer
-
 from synapse.http.servlet import RestServlet
 from synapse.rest.client.v2_alpha._base import client_patterns
@@ -31,9 +29,8 @@ class VoipRestServlet(RestServlet):
         self.hs = hs
         self.auth = hs.get_auth()

-    @defer.inlineCallbacks
-    def on_GET(self, request):
-        requester = yield self.auth.get_user_by_req(
+    async def on_GET(self, request):
+        requester = await self.auth.get_user_by_req(
             request, self.hs.config.turn_allow_guests
         )


--- a/synapse/rest/client/v2_alpha/_base.py
+++ b/synapse/rest/client/v2_alpha/_base.py
@@ -78,7 +78,7 @@ def interactive_auth_handler(orig):
     """

     def wrapped(*args, **kwargs):
-        res = defer.maybeDeferred(orig, *args, **kwargs)
+        res = defer.ensureDeferred(orig(*args, **kwargs))
         res.addErrback(_catch_incomplete_interactive_auth)
         return res
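This is the piece that makes the whole port hang together. Once the decorated handlers are native coroutines, `defer.maybeDeferred(orig, *args, **kwargs)` no longer does the right thing: in the Twisted releases of this era it would wrap the unstarted coroutine *object* as a plain result, so the handler body never ran and the errback never saw UI-auth failures. `defer.ensureDeferred(orig(*args, **kwargs))` schedules the coroutine and returns a `Deferred` that fires with its eventual result. A runnable sketch, with a print function standing in for `_catch_incomplete_interactive_auth`:

```python
from twisted.internet import defer, task


def log_failure(failure):
    # Stand-in for _catch_incomplete_interactive_auth (illustration only).
    print("handler failed:", failure.value)


def interactive_auth_handler(orig):
    def wrapped(*args, **kwargs):
        # ensureDeferred starts the coroutine and adapts it back into
        # Deferred-land, so existing errback-based handling keeps working.
        res = defer.ensureDeferred(orig(*args, **kwargs))
        res.addErrback(log_failure)
        return res

    return wrapped


@interactive_auth_handler
async def on_POST(request):
    return 200, {}


def main(reactor):
    return on_POST(request=None).addCallback(print)


if __name__ == "__main__":
    task.react(main)
```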


--- a/synapse/rest/client/v2_alpha/account.py
+++ b/synapse/rest/client/v2_alpha/account.py
@@ -18,8 +18,6 @@ import logging
 from six.moves import http_client

-from twisted.internet import defer
-
 from synapse.api.constants import LoginType
 from synapse.api.errors import Codes, SynapseError, ThreepidValidationError
 from synapse.config.emailconfig import ThreepidBehaviour
@@ -67,8 +65,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
             template_text=template_text,
         )

-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         if self.config.threepid_behaviour_email == ThreepidBehaviour.OFF:
             if self.config.local_threepid_handling_disabled_due_to_email_config:
                 logger.warning(
@@ -95,7 +92,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
                 Codes.THREEPID_DENIED,
             )

-        existing_user_id = yield self.hs.get_datastore().get_user_id_by_threepid(
+        existing_user_id = await self.hs.get_datastore().get_user_id_by_threepid(
             "email", email
         )
@@ -106,7 +103,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
             assert self.hs.config.account_threepid_delegate_email

             # Have the configured identity server handle the request
-            ret = yield self.identity_handler.requestEmailToken(
+            ret = await self.identity_handler.requestEmailToken(
                 self.hs.config.account_threepid_delegate_email,
                 email,
                 client_secret,
@@ -115,7 +112,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
             )
         else:
             # Send password reset emails from Synapse
-            sid = yield self.identity_handler.send_threepid_validation(
+            sid = await self.identity_handler.send_threepid_validation(
                 email,
                 client_secret,
                 send_attempt,
@@ -153,8 +150,7 @@ class PasswordResetSubmitTokenServlet(RestServlet):
                 [self.config.email_password_reset_template_failure_html],
             )

-    @defer.inlineCallbacks
-    def on_GET(self, request, medium):
+    async def on_GET(self, request, medium):
         # We currently only handle threepid token submissions for email
         if medium != "email":
             raise SynapseError(
@@ -176,7 +172,7 @@ class PasswordResetSubmitTokenServlet(RestServlet):
         # Attempt to validate a 3PID session
         try:
             # Mark the session as valid
-            next_link = yield self.store.validate_threepid_session(
+            next_link = await self.store.validate_threepid_session(
                 sid, client_secret, token, self.clock.time_msec()
             )
@@ -218,8 +214,7 @@ class PasswordRestServlet(RestServlet):
         self._set_password_handler = hs.get_set_password_handler()

     @interactive_auth_handler
-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         body = parse_json_object_from_request(request)

         # there are two possibilities here. Either the user does not have an
@@ -233,14 +228,14 @@ class PasswordRestServlet(RestServlet):
         # In the second case, we require a password to confirm their identity.

         if self.auth.has_access_token(request):
-            requester = yield self.auth.get_user_by_req(request)
-            params = yield self.auth_handler.validate_user_via_ui_auth(
+            requester = await self.auth.get_user_by_req(request)
+            params = await self.auth_handler.validate_user_via_ui_auth(
                 requester, body, self.hs.get_ip_from_request(request)
             )
             user_id = requester.user.to_string()
         else:
             requester = None
-            result, params, _ = yield self.auth_handler.check_auth(
+            result, params, _ = await self.auth_handler.check_auth(
                 [[LoginType.EMAIL_IDENTITY]], body, self.hs.get_ip_from_request(request)
             )
@@ -254,7 +249,7 @@ class PasswordRestServlet(RestServlet):
                 # (See add_threepid in synapse/handlers/auth.py)
                 threepid["address"] = threepid["address"].lower()

             # if using email, we must know about the email they're authing with!
-            threepid_user_id = yield self.datastore.get_user_id_by_threepid(
+            threepid_user_id = await self.datastore.get_user_id_by_threepid(
                 threepid["medium"], threepid["address"]
             )
             if not threepid_user_id:
@@ -267,7 +262,7 @@ class PasswordRestServlet(RestServlet):
         assert_params_in_dict(params, ["new_password"])
         new_password = params["new_password"]

-        yield self._set_password_handler.set_password(user_id, new_password, requester)
+        await self._set_password_handler.set_password(user_id, new_password, requester)

         return 200, {}
@@ -286,8 +281,7 @@ class DeactivateAccountRestServlet(RestServlet):
         self._deactivate_account_handler = hs.get_deactivate_account_handler()

     @interactive_auth_handler
-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         body = parse_json_object_from_request(request)
         erase = body.get("erase", False)
         if not isinstance(erase, bool):
@@ -297,19 +291,19 @@ class DeactivateAccountRestServlet(RestServlet):
                 Codes.BAD_JSON,
             )

-        requester = yield self.auth.get_user_by_req(request)
+        requester = await self.auth.get_user_by_req(request)

         # allow ASes to dectivate their own users
         if requester.app_service:
-            yield self._deactivate_account_handler.deactivate_account(
+            await self._deactivate_account_handler.deactivate_account(
                 requester.user.to_string(), erase
             )
             return 200, {}

-        yield self.auth_handler.validate_user_via_ui_auth(
+        await self.auth_handler.validate_user_via_ui_auth(
             requester, body, self.hs.get_ip_from_request(request)
         )
-        result = yield self._deactivate_account_handler.deactivate_account(
+        result = await self._deactivate_account_handler.deactivate_account(
             requester.user.to_string(), erase, id_server=body.get("id_server")
         )
         if result:
@@ -346,8 +340,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
             template_text=template_text,
         )

-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         if self.config.threepid_behaviour_email == ThreepidBehaviour.OFF:
             if self.config.local_threepid_handling_disabled_due_to_email_config:
                 logger.warning(
@@ -371,7 +364,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
                 Codes.THREEPID_DENIED,
             )

-        existing_user_id = yield self.store.get_user_id_by_threepid(
+        existing_user_id = await self.store.get_user_id_by_threepid(
             "email", body["email"]
         )
@@ -382,7 +375,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
             assert self.hs.config.account_threepid_delegate_email

             # Have the configured identity server handle the request
-            ret = yield self.identity_handler.requestEmailToken(
+            ret = await self.identity_handler.requestEmailToken(
                 self.hs.config.account_threepid_delegate_email,
                 email,
                 client_secret,
@@ -391,7 +384,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
             )
         else:
             # Send threepid validation emails from Synapse
-            sid = yield self.identity_handler.send_threepid_validation(
+            sid = await self.identity_handler.send_threepid_validation(
                 email,
                 client_secret,
                 send_attempt,
@@ -414,8 +407,7 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
         self.store = self.hs.get_datastore()
         self.identity_handler = hs.get_handlers().identity_handler

-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         body = parse_json_object_from_request(request)
         assert_params_in_dict(
             body, ["client_secret", "country", "phone_number", "send_attempt"]
@@ -435,7 +427,7 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
                 Codes.THREEPID_DENIED,
             )

-        existing_user_id = yield self.store.get_user_id_by_threepid("msisdn", msisdn)
+        existing_user_id = await self.store.get_user_id_by_threepid("msisdn", msisdn)

         if existing_user_id is not None:
             raise SynapseError(400, "MSISDN is already in use", Codes.THREEPID_IN_USE)
@@ -450,7 +442,7 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
                 "Adding phone numbers to user account is not supported by this homeserver",
             )

-        ret = yield self.identity_handler.requestMsisdnToken(
+        ret = await self.identity_handler.requestMsisdnToken(
             self.hs.config.account_threepid_delegate_msisdn,
             country,
             phone_number,
@@ -484,8 +476,7 @@ class AddThreepidEmailSubmitTokenServlet(RestServlet):
                 [self.config.email_add_threepid_template_failure_html],
             )

-    @defer.inlineCallbacks
-    def on_GET(self, request):
+    async def on_GET(self, request):
         if self.config.threepid_behaviour_email == ThreepidBehaviour.OFF:
             if self.config.local_threepid_handling_disabled_due_to_email_config:
                 logger.warning(
@@ -508,7 +499,7 @@ class AddThreepidEmailSubmitTokenServlet(RestServlet):
         # Attempt to validate a 3PID session
         try:
             # Mark the session as valid
-            next_link = yield self.store.validate_threepid_session(
+            next_link = await self.store.validate_threepid_session(
                 sid, client_secret, token, self.clock.time_msec()
             )
@@ -558,8 +549,7 @@ class AddThreepidMsisdnSubmitTokenServlet(RestServlet):
         self.store = hs.get_datastore()
         self.identity_handler = hs.get_handlers().identity_handler

-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         if not self.config.account_threepid_delegate_msisdn:
             raise SynapseError(
                 400,
@@ -571,7 +561,7 @@ class AddThreepidMsisdnSubmitTokenServlet(RestServlet):
         assert_params_in_dict(body, ["client_secret", "sid", "token"])

         # Proxy submit_token request to msisdn threepid delegate
-        response = yield self.identity_handler.proxy_msisdn_submit_token(
+        response = await self.identity_handler.proxy_msisdn_submit_token(
             self.config.account_threepid_delegate_msisdn,
             body["client_secret"],
             body["sid"],
@@ -591,17 +581,15 @@ class ThreepidRestServlet(RestServlet):
         self.auth_handler = hs.get_auth_handler()
         self.datastore = self.hs.get_datastore()

-    @defer.inlineCallbacks
-    def on_GET(self, request):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_GET(self, request):
+        requester = await self.auth.get_user_by_req(request)

-        threepids = yield self.datastore.user_get_threepids(requester.user.to_string())
+        threepids = await self.datastore.user_get_threepids(requester.user.to_string())

         return 200, {"threepids": threepids}

-    @defer.inlineCallbacks
-    def on_POST(self, request):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_POST(self, request):
+        requester = await self.auth.get_user_by_req(request)
         user_id = requester.user.to_string()

         body = parse_json_object_from_request(request)
@@ -615,11 +603,11 @@ class ThreepidRestServlet(RestServlet):
         client_secret = threepid_creds["client_secret"]
         sid = threepid_creds["sid"]

-        validation_session = yield self.identity_handler.validate_threepid_session(
+        validation_session = await self.identity_handler.validate_threepid_session(
             client_secret, sid
         )
         if validation_session:
-            yield self.auth_handler.add_threepid(
+            await self.auth_handler.add_threepid(
                 user_id,
                 validation_session["medium"],
                 validation_session["address"],
@@ -642,9 +630,9 @@ class ThreepidAddRestServlet(RestServlet):
         self.auth = hs.get_auth()
         self.auth_handler = hs.get_auth_handler()

-    @defer.inlineCallbacks
-    def on_POST(self, request):
-        requester = yield self.auth.get_user_by_req(request)
+    @interactive_auth_handler
+    async def on_POST(self, request):
+        requester = await self.auth.get_user_by_req(request)
         user_id = requester.user.to_string()

         body = parse_json_object_from_request(request)
@@ -652,11 +640,15 @@ class ThreepidAddRestServlet(RestServlet):
         client_secret = body["client_secret"]
         sid = body["sid"]

-        validation_session = yield self.identity_handler.validate_threepid_session(
+        await self.auth_handler.validate_user_via_ui_auth(
+            requester, body, self.hs.get_ip_from_request(request)
+        )
+
+        validation_session = await self.identity_handler.validate_threepid_session(
             client_secret, sid
         )
         if validation_session:
-            yield self.auth_handler.add_threepid(
+            await self.auth_handler.add_threepid(
                 user_id,
                 validation_session["medium"],
                 validation_session["address"],
@@ -678,8 +670,7 @@ class ThreepidBindRestServlet(RestServlet):
         self.identity_handler = hs.get_handlers().identity_handler
         self.auth = hs.get_auth()

-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         body = parse_json_object_from_request(request)

         assert_params_in_dict(body, ["id_server", "sid", "client_secret"])
@@ -688,10 +679,10 @@ class ThreepidBindRestServlet(RestServlet):
         client_secret = body["client_secret"]
         id_access_token = body.get("id_access_token")  # optional

-        requester = yield self.auth.get_user_by_req(request)
+        requester = await self.auth.get_user_by_req(request)
         user_id = requester.user.to_string()

-        yield self.identity_handler.bind_threepid(
+        await self.identity_handler.bind_threepid(
             client_secret, sid, user_id, id_server, id_access_token
         )
@@ -708,12 +699,11 @@ class ThreepidUnbindRestServlet(RestServlet):
         self.auth = hs.get_auth()
         self.datastore = self.hs.get_datastore()

-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         """Unbind the given 3pid from a specific identity server, or identity servers that are
         known to have this 3pid bound
         """
-        requester = yield self.auth.get_user_by_req(request)
+        requester = await self.auth.get_user_by_req(request)
         body = parse_json_object_from_request(request)
         assert_params_in_dict(body, ["medium", "address"])
@@ -723,7 +713,7 @@ class ThreepidUnbindRestServlet(RestServlet):
         # Attempt to unbind the threepid from an identity server. If id_server is None, try to
         # unbind from all identity servers this threepid has been added to in the past
-        result = yield self.identity_handler.try_unbind_threepid(
+        result = await self.identity_handler.try_unbind_threepid(
             requester.user.to_string(),
             {"address": address, "medium": medium, "id_server": id_server},
         )
@@ -738,16 +728,15 @@ class ThreepidDeleteRestServlet(RestServlet):
         self.auth = hs.get_auth()
         self.auth_handler = hs.get_auth_handler()

-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         body = parse_json_object_from_request(request)
         assert_params_in_dict(body, ["medium", "address"])

-        requester = yield self.auth.get_user_by_req(request)
+        requester = await self.auth.get_user_by_req(request)
         user_id = requester.user.to_string()

         try:
-            ret = yield self.auth_handler.delete_threepid(
+            ret = await self.auth_handler.delete_threepid(
                 user_id, body["medium"], body["address"], body.get("id_server")
             )
         except Exception:
@@ -772,9 +761,8 @@ class WhoamiRestServlet(RestServlet):
         super(WhoamiRestServlet, self).__init__()
         self.auth = hs.get_auth()

-    @defer.inlineCallbacks
-    def on_GET(self, request):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_GET(self, request):
+        requester = await self.auth.get_user_by_req(request)

         return 200, {"user_id": requester.user.to_string()}


--- a/synapse/rest/client/v2_alpha/account_data.py
+++ b/synapse/rest/client/v2_alpha/account_data.py
@@ -15,8 +15,6 @@
 import logging

-from twisted.internet import defer
-
 from synapse.api.errors import AuthError, NotFoundError, SynapseError
 from synapse.http.servlet import RestServlet, parse_json_object_from_request
@@ -41,15 +39,14 @@ class AccountDataServlet(RestServlet):
         self.store = hs.get_datastore()
         self.notifier = hs.get_notifier()

-    @defer.inlineCallbacks
-    def on_PUT(self, request, user_id, account_data_type):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_PUT(self, request, user_id, account_data_type):
+        requester = await self.auth.get_user_by_req(request)
         if user_id != requester.user.to_string():
             raise AuthError(403, "Cannot add account data for other users.")

         body = parse_json_object_from_request(request)

-        max_id = yield self.store.add_account_data_for_user(
+        max_id = await self.store.add_account_data_for_user(
             user_id, account_data_type, body
         )
@@ -57,13 +54,12 @@ class AccountDataServlet(RestServlet):
         return 200, {}

-    @defer.inlineCallbacks
-    def on_GET(self, request, user_id, account_data_type):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_GET(self, request, user_id, account_data_type):
+        requester = await self.auth.get_user_by_req(request)
         if user_id != requester.user.to_string():
             raise AuthError(403, "Cannot get account data for other users.")

-        event = yield self.store.get_global_account_data_by_type_for_user(
+        event = await self.store.get_global_account_data_by_type_for_user(
             account_data_type, user_id
         )
@@ -91,9 +87,8 @@ class RoomAccountDataServlet(RestServlet):
         self.store = hs.get_datastore()
         self.notifier = hs.get_notifier()

-    @defer.inlineCallbacks
-    def on_PUT(self, request, user_id, room_id, account_data_type):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_PUT(self, request, user_id, room_id, account_data_type):
+        requester = await self.auth.get_user_by_req(request)
         if user_id != requester.user.to_string():
             raise AuthError(403, "Cannot add account data for other users.")
@@ -106,7 +101,7 @@ class RoomAccountDataServlet(RestServlet):
                 " Use /rooms/!roomId:server.name/read_markers",
             )

-        max_id = yield self.store.add_account_data_to_room(
+        max_id = await self.store.add_account_data_to_room(
             user_id, room_id, account_data_type, body
         )
@@ -114,13 +109,12 @@ class RoomAccountDataServlet(RestServlet):
         return 200, {}

-    @defer.inlineCallbacks
-    def on_GET(self, request, user_id, room_id, account_data_type):
-        requester = yield self.auth.get_user_by_req(request)
+    async def on_GET(self, request, user_id, room_id, account_data_type):
+        requester = await self.auth.get_user_by_req(request)
         if user_id != requester.user.to_string():
             raise AuthError(403, "Cannot get account data for other users.")

-        event = yield self.store.get_account_data_for_room_and_type(
+        event = await self.store.get_account_data_for_room_and_type(
             user_id, room_id, account_data_type
         )


--- a/synapse/rest/client/v2_alpha/account_validity.py
+++ b/synapse/rest/client/v2_alpha/account_validity.py
@@ -15,8 +15,6 @@
 import logging

-from twisted.internet import defer
-
 from synapse.api.errors import AuthError, SynapseError
 from synapse.http.server import finish_request
 from synapse.http.servlet import RestServlet
@@ -45,13 +43,12 @@ class AccountValidityRenewServlet(RestServlet):
         self.success_html = hs.config.account_validity.account_renewed_html_content
         self.failure_html = hs.config.account_validity.invalid_token_html_content

-    @defer.inlineCallbacks
-    def on_GET(self, request):
+    async def on_GET(self, request):
         if b"token" not in request.args:
             raise SynapseError(400, "Missing renewal token")
         renewal_token = request.args[b"token"][0]

-        token_valid = yield self.account_activity_handler.renew_account(
+        token_valid = await self.account_activity_handler.renew_account(
             renewal_token.decode("utf8")
         )
@@ -67,7 +64,6 @@ class AccountValidityRenewServlet(RestServlet):
         request.setHeader(b"Content-Length", b"%d" % (len(response),))
         request.write(response.encode("utf8"))
         finish_request(request)
-        defer.returnValue(None)


 class AccountValiditySendMailServlet(RestServlet):
@@ -85,18 +81,17 @@ class AccountValiditySendMailServlet(RestServlet):
         self.auth = hs.get_auth()
         self.account_validity = self.hs.config.account_validity

-    @defer.inlineCallbacks
-    def on_POST(self, request):
+    async def on_POST(self, request):
         if not self.account_validity.renew_by_email_enabled:
             raise AuthError(
                 403, "Account renewal via email is disabled on this server."
             )

-        requester = yield self.auth.get_user_by_req(request, allow_expired=True)
+        requester = await self.auth.get_user_by_req(request, allow_expired=True)
         user_id = requester.user.to_string()
-        yield self.account_activity_handler.send_renewal_email_to_user(user_id)
+        await self.account_activity_handler.send_renewal_email_to_user(user_id)

-        defer.returnValue((200, {}))
+        return 200, {}


 def register_servlets(hs, http_server):
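One detail worth calling out in this file: `defer.returnValue((200, {}))` becomes a plain `return 200, {}`. Under Python 2 a generator could not `return` a value, so `inlineCallbacks` code had to signal its result via `defer.returnValue`, and the idiom lingered after the move to Python 3; a native coroutine simply returns. A minimal sketch:

```python
from twisted.internet import defer


@defer.inlineCallbacks
def send_renewal_old(handler, user_id):
    # Legacy style: the generator's result is signalled via returnValue().
    yield handler.send_renewal_email_to_user(user_id)
    defer.returnValue((200, {}))


async def send_renewal(handler, user_id):
    # Coroutine style: the result is simply returned.
    await handler.send_renewal_email_to_user(user_id)
    return 200, {}
```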


--- a/synapse/rest/client/v2_alpha/auth.py
+++ b/synapse/rest/client/v2_alpha/auth.py
@@ -15,8 +15,6 @@
 import logging

-from twisted.internet import defer
-
 from synapse.api.constants import LoginType
 from synapse.api.errors import SynapseError
 from synapse.api.urls import CLIENT_API_PREFIX
@@ -171,8 +169,7 @@ class AuthRestServlet(RestServlet):
         else:
             raise SynapseError(404, "Unknown auth stage type")

-    @defer.inlineCallbacks
-    def on_POST(self, request, stagetype):
+    async def on_POST(self, request, stagetype):
         session = parse_string(request, "session")
         if not session:
@@ -186,7 +183,7 @@ class AuthRestServlet(RestServlet):
             authdict = {"response": response, "session": session}

-            success = yield self.auth_handler.add_oob_auth(
+            success = await self.auth_handler.add_oob_auth(
                 LoginType.RECAPTCHA, authdict, self.hs.get_ip_from_request(request)
             )
@@ -215,7 +212,7 @@ class AuthRestServlet(RestServlet):
             session = request.args["session"][0]
             authdict = {"session": session}

-            success = yield self.auth_handler.add_oob_auth(
+            success = await self.auth_handler.add_oob_auth(
                 LoginType.TERMS, authdict, self.hs.get_ip_from_request(request)
             )

Some files were not shown because too many files have changed in this diff Show More