Synapse 1.51.0 (2022-01-25)
===========================
 
 No significant changes since 1.51.0rc2.
 
 Synapse 1.51.0 deprecates `webclient` listeners and non-HTTP(S) `web_client_location`s. Support for these will be removed in Synapse 1.53.0, at which point Synapse will not be capable of directly serving a web client for Matrix.
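
For deployments still using the deprecated options, the migration can be sketched roughly as follows. The option names come from this changelog; the exact structure of `homeserver.yaml` should be checked against the Synapse configuration documentation:

```yaml
listeners:
  - port: 8008
    type: http
    resources:
      # Deprecated: a "webclient" resource in this list will stop
      # working in Synapse 1.53.0.
      - names: [client, federation]

# Supported: point clients at a separately hosted web client using an
# HTTP(S) URL. Non-HTTP(S) values (e.g. local file paths) are deprecated.
web_client_location: https://app.element.io/
```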
 
 Synapse 1.51.0rc2 (2022-01-24)
 ==============================
 
 Bugfixes
 --------
 
 - Fix a bug introduced in Synapse 1.40.0 that caused Synapse to fail to process incoming federation traffic after handling a large number of events in a v1 room. ([\#11806](https://github.com/matrix-org/synapse/issues/11806))
 
 Synapse 1.51.0rc1 (2022-01-21)
 ==============================
 
 Features
 --------
 
 - Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts. ([\#11561](https://github.com/matrix-org/synapse/issues/11561), [\#11749](https://github.com/matrix-org/synapse/issues/11749), [\#11757](https://github.com/matrix-org/synapse/issues/11757))
 - Include whether the requesting user has participated in a thread when generating a summary for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440). ([\#11577](https://github.com/matrix-org/synapse/issues/11577))
 - Return an `M_FORBIDDEN` error code instead of `M_UNKNOWN` when a spam checker module prevents a user from creating a room. ([\#11672](https://github.com/matrix-org/synapse/issues/11672))
 - Add a flag to the `synapse_review_recent_signups` script to ignore and filter appservice users. ([\#11675](https://github.com/matrix-org/synapse/issues/11675), [\#11770](https://github.com/matrix-org/synapse/issues/11770))
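
As a sketch, the new `track_puppeted_user_ips` flag would be enabled in `homeserver.yaml` roughly like this (the option name is taken from the entry above; its default and exact placement should be confirmed against the configuration documentation):

```yaml
# Record client IP addresses against puppeted users, and include those
# users in monthly active user counts (assumed to default to off).
track_puppeted_user_ips: true
```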
 
 Bugfixes
 --------
 
 - Fix a long-standing issue which could cause Synapse to incorrectly accept data in the unsigned field of events
   received over federation. ([\#11530](https://github.com/matrix-org/synapse/issues/11530))
 - Fix a long-standing bug where Synapse wouldn't cache a response indicating that a remote user has no devices. ([\#11587](https://github.com/matrix-org/synapse/issues/11587))
 - Fix an error that occurred when trying to get the federation status of a destination server that was working normally. This admin API was newly introduced in Synapse v1.49.0. ([\#11593](https://github.com/matrix-org/synapse/issues/11593))
 - Fix bundled aggregations not being included in the `/sync` response, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675). ([\#11612](https://github.com/matrix-org/synapse/issues/11612), [\#11659](https://github.com/matrix-org/synapse/issues/11659), [\#11791](https://github.com/matrix-org/synapse/issues/11791))
 - Fix the `/_matrix/client/v1/room/{roomId}/hierarchy` endpoint returning incorrect fields, a bug that has been present since Synapse 1.49.0. ([\#11667](https://github.com/matrix-org/synapse/issues/11667))
 - Fix preview of some GIF URLs (like tenor.com). Contributed by Philippe Daouadi. ([\#11669](https://github.com/matrix-org/synapse/issues/11669))
 - Fix a bug where only the first 50 rooms from a space were returned from the `/hierarchy` API. This has existed since the introduction of the API in Synapse v1.41.0. ([\#11695](https://github.com/matrix-org/synapse/issues/11695))
 - Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan. ([\#11710](https://github.com/matrix-org/synapse/issues/11710), [\#11745](https://github.com/matrix-org/synapse/issues/11745))
 - Make the 'List Rooms' Admin API sort stable. Contributed by Daniël Sonck. ([\#11737](https://github.com/matrix-org/synapse/issues/11737))
 - Fix a long-standing bug where space hierarchy over federation would only work correctly some of the time. ([\#11775](https://github.com/matrix-org/synapse/issues/11775))
 - Fix a bug introduced in Synapse v1.46.0 that prevented `on_logged_out` module callbacks from being correctly awaited by Synapse. ([\#11786](https://github.com/matrix-org/synapse/issues/11786))
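
The `on_logged_out` fix concerns correctly awaiting module callbacks that may be either synchronous or asynchronous. A standalone illustration of that pattern follows; the helper name `maybe_awaitable` mirrors a Synapse utility, but this sketch is not Synapse's actual code:

```python
import asyncio
import inspect

async def maybe_awaitable(value):
    # If the callback returned a coroutine or other awaitable, await it;
    # plain return values pass through unchanged.
    if inspect.isawaitable(value):
        return await value
    return value

log = []

def sync_callback(user_id):
    log.append(f"sync:{user_id}")

async def async_callback(user_id):
    await asyncio.sleep(0)
    log.append(f"async:{user_id}")

async def on_logged_out(user_id, callbacks):
    # Correct pattern: every callback result goes through maybe_awaitable,
    # so async callbacks actually run to completion instead of producing
    # an un-awaited coroutine.
    for cb in callbacks:
        await maybe_awaitable(cb(user_id))

asyncio.run(on_logged_out("@alice:example.org", [sync_callback, async_callback]))
print(log)  # ['sync:@alice:example.org', 'async:@alice:example.org']
```

Without the `maybe_awaitable` wrapper, the async callback's coroutine would be created but never run, which is the class of bug fixed here.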
 
 Improved Documentation
 ----------------------
 
 - Warn against using a Let's Encrypt certificate for TLS/DTLS TURN server client connections, and suggest using a ZeroSSL certificate instead. This works around client-side connectivity errors caused by WebRTC libraries that reject Let's Encrypt certificates. Contributed by @AndrewFerr. ([\#11686](https://github.com/matrix-org/synapse/issues/11686))
 - Document the new `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable in the contributing guide. ([\#11715](https://github.com/matrix-org/synapse/issues/11715))
 - Document that the minimum supported PostgreSQL version is now 10. ([\#11725](https://github.com/matrix-org/synapse/issues/11725))
 - Fix typo in demo docs: differnt. ([\#11735](https://github.com/matrix-org/synapse/issues/11735))
 - Update room spec URL in config files. ([\#11739](https://github.com/matrix-org/synapse/issues/11739))
 - Mention `python3-venv` and `libpq-dev` dependencies in the contribution guide. ([\#11740](https://github.com/matrix-org/synapse/issues/11740))
 - Update documentation for configuring login with Facebook. ([\#11755](https://github.com/matrix-org/synapse/issues/11755))
 - Update installation instructions to note that Python 3.6 is no longer supported. ([\#11781](https://github.com/matrix-org/synapse/issues/11781))
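
The `SYNAPSE_TEST_PERSIST_SQLITE_DB` variable is set in the environment before running the test suite; a minimal, hypothetical invocation (the real test command and the variable's exact semantics are described in the contributing guide):

```shell
# Keep the SQLite test database on disk instead of discarding it, so it
# can be inspected after a test run (hypothetical usage; check the
# contributing guide for the actual command).
export SYNAPSE_TEST_PERSIST_SQLITE_DB=1
echo "persist flag: $SYNAPSE_TEST_PERSIST_SQLITE_DB"
```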
 
 Deprecations and Removals
 -------------------------
 
 - Remove the unstable `/send_relation` endpoint. ([\#11682](https://github.com/matrix-org/synapse/issues/11682))
 - Remove `python_twisted_reactor_pending_calls` Prometheus metric. ([\#11724](https://github.com/matrix-org/synapse/issues/11724))
 - Remove the `password_hash` field from the response dictionaries of the [Users Admin API](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html). ([\#11576](https://github.com/matrix-org/synapse/issues/11576))
 - **Deprecate support for `webclient` listeners and non-HTTP(S) `web_client_location` configuration. ([\#11774](https://github.com/matrix-org/synapse/issues/11774), [\#11783](https://github.com/matrix-org/synapse/issues/11783))**
 
 Internal Changes
 ----------------
 
 - Run `pyupgrade --py37-plus --keep-percent-format` on Synapse. ([\#11685](https://github.com/matrix-org/synapse/issues/11685))
 - Use BuildKit's cache feature to speed up Docker builds. ([\#11691](https://github.com/matrix-org/synapse/issues/11691))
 - Use `auto_attribs` and native type hints for attrs classes. ([\#11692](https://github.com/matrix-org/synapse/issues/11692), [\#11768](https://github.com/matrix-org/synapse/issues/11768))
 - Remove debug logging for #4422, which has been closed since Synapse 0.99. ([\#11693](https://github.com/matrix-org/synapse/issues/11693))
 - Remove fallback code for Python 2. ([\#11699](https://github.com/matrix-org/synapse/issues/11699))
 - Add a test for [an edge case](https://github.com/matrix-org/synapse/pull/11532#discussion_r769104461) in the `/sync` logic. ([\#11701](https://github.com/matrix-org/synapse/issues/11701))
 - Add the option to write SQLite test dbs to disk when running tests. ([\#11702](https://github.com/matrix-org/synapse/issues/11702))
 - Improve Complement test output for GitHub Actions. ([\#11707](https://github.com/matrix-org/synapse/issues/11707))
 - Fix docstring on `add_account_data_for_user`. ([\#11716](https://github.com/matrix-org/synapse/issues/11716))
 - Follow a Complement environment variable name change and update `.gitignore`. ([\#11718](https://github.com/matrix-org/synapse/issues/11718))
 - Simplify calculation of Prometheus metrics for garbage collection. ([\#11723](https://github.com/matrix-org/synapse/issues/11723))
 - Improve accuracy of `python_twisted_reactor_tick_time` Prometheus metric. ([\#11724](https://github.com/matrix-org/synapse/issues/11724), [\#11771](https://github.com/matrix-org/synapse/issues/11771))
 - Minor efficiency improvements when inserting many values into the database. ([\#11742](https://github.com/matrix-org/synapse/issues/11742))
 - Invite PR authors to give themselves credit in the changelog. ([\#11744](https://github.com/matrix-org/synapse/issues/11744))
 - Add optional debugging to investigate [issue 8631](https://github.com/matrix-org/synapse/issues/8631). ([\#11760](https://github.com/matrix-org/synapse/issues/11760))
 - Remove `log_function` utility function and its uses. ([\#11761](https://github.com/matrix-org/synapse/issues/11761))
 - Add a unit test that checks both `client` and `webclient` resources will function when simultaneously enabled. ([\#11765](https://github.com/matrix-org/synapse/issues/11765))
 - Allow overriding the Complement commit using `COMPLEMENT_REF`. ([\#11766](https://github.com/matrix-org/synapse/issues/11766))
 - Add some comments and type annotations for `_update_outliers_txn`. ([\#11776](https://github.com/matrix-org/synapse/issues/11776))
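
The many-values insert improvement ([\#11742]) is about batching. A generic illustration of the technique using the standard library's `sqlite3` module (this is not Synapse's actual storage code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_id TEXT PRIMARY KEY, room_id TEXT)")

# Hypothetical rows; Synapse's real schema and values differ.
rows = [(f"$event{i}", "!room:example.org") for i in range(1000)]

# Batched insert: a single executemany() call lets the driver reuse the
# prepared statement instead of re-parsing the SQL once per row.
with conn:
    conn.executemany(
        "INSERT INTO events (event_id, room_id) VALUES (?, ?)", rows
    )

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 1000
```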

This commit (b500fcbc0c, merging tag 'v1.51.0') was authored by David Robertson on 2022-01-25 12:35:11 +00:00 and touched 135 files, with 2593 additions and 1647 deletions.

Synapse 1.50.2 (2022-01-24) Synapse 1.50.2 (2022-01-24)
=========================== ===========================
Bugfixes Bugfixes
-------- --------
- Fix a bug introduced in Synapse 1.40.0 that caused Synapse to fail to process incoming federation traffic after handling a large amount of events in a v1 room. ([\#11806](https://github.com/matrix-org/synapse/issues/11806)) - Backport the sole fix from v1.51.0rc2. This fixes a bug introduced in Synapse 1.40.0 that caused Synapse to fail to process incoming federation traffic after handling a large amount of events in a v1 room. ([\#11806](https://github.com/matrix-org/synapse/issues/11806))
Synapse 1.50.1 (2022-01-18) Synapse 1.50.1 (2022-01-18)

View File

@ -92,22 +92,6 @@ new PromConsole.Graph({
}) })
</script> </script>
<h3>Pending calls per tick</h3>
<div id="reactor_pending_calls"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_pending_calls"),
expr: "rate(python_twisted_reactor_pending_calls_sum[30s]) / rate(python_twisted_reactor_pending_calls_count[30s])",
name: "[[job]]-[[index]]",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yTitle: "Pending Calls"
})
</script>
<h1>Storage</h1>
<h3>Queries</h3>

debian/changelog

@@ -1,3 +1,21 @@
matrix-synapse-py3 (1.51.0) stable; urgency=medium
* New synapse release 1.51.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 25 Jan 2022 11:28:51 +0000
matrix-synapse-py3 (1.51.0~rc2) stable; urgency=medium
* New synapse release 1.51.0~rc2.
-- Synapse Packaging team <packages@matrix.org> Mon, 24 Jan 2022 12:25:00 +0000
matrix-synapse-py3 (1.51.0~rc1) stable; urgency=medium
* New synapse release 1.51.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Fri, 21 Jan 2022 10:46:02 +0000
matrix-synapse-py3 (1.50.2) stable; urgency=medium
* New synapse release 1.50.2.


@@ -22,5 +22,5 @@ Logs and sqlitedb will be stored in demo/808{0,1,2}.{log,db}
Also note that when joining a public room on a different HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name.


@@ -1,14 +1,17 @@
# Dockerfile to build the matrixdotorg/synapse docker images.
#
# Note that it uses features which are only available in BuildKit - see
# https://docs.docker.com/go/buildkit/ for more information.
#
# To build the image, run `docker build` command from the root of the
# synapse repository:
#
#    DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile .
#
# There is an optional PYTHON_VERSION build argument which sets the
# version of python to build against: for example:
#
#    DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.9 .
#
ARG PYTHON_VERSION=3.8
@@ -19,7 +22,16 @@ ARG PYTHON_VERSION=3.8
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
# install the OS build deps
#
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
# Here we use it to set up a cache for apt, to improve rebuild speeds on
# slow connections.
#
RUN \
  --mount=type=cache,target=/var/cache/apt,sharing=locked \
  --mount=type=cache,target=/var/lib/apt,sharing=locked \
  apt-get update && apt-get install -y \
    build-essential \
    libffi-dev \
    libjpeg-dev \
@@ -44,7 +56,8 @@ COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
# used while you develop on the source
#
# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
RUN --mount=type=cache,target=/root/.cache/pip \
  pip install --prefix="/install" --no-warn-script-location \
    /synapse[all]
# Copy over the rest of the project
@@ -66,7 +79,10 @@ LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/syna
LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
LABEL org.opencontainers.image.licenses='Apache-2.0'
RUN \
  --mount=type=cache,target=/var/cache/apt,sharing=locked \
  --mount=type=cache,target=/var/lib/apt,sharing=locked \
  apt-get update && apt-get install -y \
    curl \
    gosu \
    libjpeg62-turbo \


@@ -15,9 +15,10 @@ server admin: [Admin API](../usage/administration/admin_api)
It returns a JSON body like the following:
```jsonc
{
    "name": "@user:example.com",
    "displayname": "User", // can be null if not set
    "threepids": [
        {
            "medium": "email",
@@ -32,11 +33,11 @@ It returns a JSON body like the following:
            "validated_at": 1586458409743
        }
    ],
    "avatar_url": "<avatar_url>", // can be null if not set
    "is_guest": 0,
    "admin": 0,
    "deactivated": 0,
    "shadow_banned": 0,
    "password_hash": "$2b$12$p9B4GkqYdRTPGD",
    "creation_ts": 1560432506,
    "appservice_id": null,
    "consent_server_notice_sent": null,


@@ -20,7 +20,9 @@ recommended for development. More information about WSL can be found at
<https://docs.microsoft.com/en-us/windows/wsl/install>. Running Synapse natively
on Windows is not officially supported.
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://www.python.org/downloads/). Your Python also needs support for [virtual environments](https://docs.python.org/3/library/venv.html). This is usually built-in, but some Linux distributions like Debian and Ubuntu split it out into its own package. Running `sudo apt install python3-venv` should be enough.
Synapse can connect to PostgreSQL via the [psycopg2](https://pypi.org/project/psycopg2/) Python library. Building this library from source requires access to PostgreSQL's C header files. On Debian or Ubuntu Linux, these can be installed with `sudo apt install libpq-dev`.
The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).
@@ -169,6 +171,27 @@ To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`:
SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
```
By default, tests will use an in-memory SQLite database for test data. For additional
help with debugging, one can use an on-disk SQLite database file instead, in order to
review database state during and after running tests. This can be done by setting
the `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable. Doing so will cause the
database state to be stored in a file named `test.db` under the trial process'
working directory. Typically, this ends up being `_trial_temp/test.db`. For example:
```sh
SYNAPSE_TEST_PERSIST_SQLITE_DB=1 trial tests
```
The database file can then be inspected with:
```sh
sqlite3 _trial_temp/test.db
```
Note that the database file is cleared at the beginning of each test run. Thus it
will always only contain the data generated by the *last run test*. Though generally
when debugging, one is only running a single test anyway.
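Beyond the `sqlite3` CLI shown above, the persisted database can also be inspected from Python's standard library; a minimal sketch (the `_trial_temp/test.db` path is the one described above, and `list_tables` is a hypothetical helper, not part of Synapse):

```python
import sqlite3


def list_tables(db_path: str) -> list:
    """Return the names of all tables in an SQLite database file."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]


# e.g. list_tables("_trial_temp/test.db") after a SYNAPSE_TEST_PERSIST_SQLITE_DB run
```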
### Running tests under PostgreSQL
Invoking `trial` as above will use an in-memory SQLite database. This is great for


@@ -35,7 +35,12 @@ When Synapse is asked to preview a URL it does the following:
5. If the media is HTML:
   1. Decodes the HTML via the stored file.
   2. Generates an Open Graph response from the HTML.
   3. If a JSON oEmbed URL was found in the HTML via autodiscovery:
      1. Downloads the URL and stores it into a file via the media storage
         provider and saves the local media metadata.
      2. Converts the oEmbed response to an Open Graph response.
      3. Overrides any Open Graph data from the HTML with data from oEmbed.
   4. If an image exists in the Open Graph response:
      1. Downloads the URL and stores it into a file via the media storage
         provider and saves the local media metadata.
      2. Generates thumbnails.
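The precedence between HTML-derived and oEmbed-derived Open Graph data described above can be sketched as a small merge step (a hypothetical helper using plain dicts, not Synapse's actual implementation):

```python
from typing import Optional


def merge_open_graph(html_og: dict, oembed_og: Optional[dict]) -> dict:
    """Combine Open Graph data extracted from the HTML with data derived from
    an autodiscovered oEmbed response; oEmbed values win on conflict."""
    merged = dict(html_og)
    if oembed_og:
        merged.update(oembed_og)  # oEmbed overrides the HTML-derived values
    return merged
```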


@@ -390,9 +390,6 @@ oidc_providers:
### Facebook
Like Github, Facebook provide a custom OAuth2 API rather than an OIDC-compliant
one so requires a little more configuration.
0. You will need a Facebook developer account. You can register for one
   [here](https://developers.facebook.com/async/registration/).
1. On the [apps](https://developers.facebook.com/apps/) page of the developer
@@ -412,24 +409,28 @@ Synapse config:
idp_name: Facebook
idp_brand: "facebook"  # optional: styling hint for clients
discover: false
issuer: "https://www.facebook.com"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes: ["openid", "email"]
authorization_endpoint: "https://facebook.com/dialog/oauth"
token_endpoint: "https://graph.facebook.com/v9.0/oauth/access_token"
jwks_uri: "https://www.facebook.com/.well-known/oauth/openid/jwks/"
userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture"
user_mapping_provider:
  config:
    subject_claim: "id"
    display_name_template: "{{ user.name }}"
    email_template: "{{ '{{ user.email }}' }}"
```
Relevant documents:
* [Manually Build a Login Flow](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow)
* [Using Facebook's Graph API](https://developers.facebook.com/docs/graph-api/using-graph-api/)
* [Reference to the User endpoint](https://developers.facebook.com/docs/graph-api/reference/user)
Facebook do have an [OIDC discovery endpoint](https://www.facebook.com/.well-known/openid-configuration),
but it has a `response_types_supported` which excludes "code" (which we rely on, and
is even mentioned in their [documentation](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow#login)),
so we have to disable discovery and configure the URIs manually.
### Gitea


@@ -74,13 +74,7 @@ server_name: "SERVERNAME"
#
pid_file: DATADIR/homeserver.pid
# The absolute URL to the web client which / will redirect to.
#
#web_client_location: https://riot.example.com/
@ -164,7 +158,7 @@ presence:
# The default room version for newly created rooms.
#
# Known room versions are listed here:
# https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions
#
# For example, for room version 1, default_room_version should be set
# to "1".
@ -310,8 +304,6 @@ presence:
#   static: static resources under synapse/static (/_matrix/static). (Mostly
#   useful for 'fallback authentication'.)
#
#   webclient: A web client. Requires web_client_location to be set.
#
listeners:
  # TLS-enabled listener: for when matrix traffic is sent directly to synapse.
  #
@ -1503,6 +1495,21 @@ room_prejoin_state:
#additional_event_types:
#  - org.example.custom.event.type
# We record the IP address of clients used to access the API for various
# reasons, including displaying it to the user in the "Where you're signed in"
# dialog.
#
# By default, when puppeting another user via the admin API, the client IP
# address is recorded against the user who created the access token (ie, the
# admin user), and *not* the puppeted user.
#
# Uncomment the following to also record the IP address against the puppeted
# user. (This also means that the puppeted user will count as an "active" user
# for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
# above.)
#
#track_puppeted_user_ips: true
# A list of application service config files to use
#
@ -1870,10 +1877,13 @@ saml2_config:
# Defaults to false. Avoid this in production.
#
# user_profile_method: Whether to fetch the user profile from the userinfo
# endpoint, or to rely on the data returned in the id_token from the
# token_endpoint.
#
# Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Defaults to 'auto', which uses the userinfo endpoint if 'openid' is
# not included in 'scopes'. Set to 'userinfo_endpoint' to always use the
# userinfo endpoint.
#
# allow_existing_users: set to 'true' to allow a user logging in via OIDC to
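The `'auto'` behaviour documented above can be expressed as a small predicate (a hypothetical helper for illustration, not Synapse's API):

```python
def use_userinfo_endpoint(user_profile_method: str, scopes: list) -> bool:
    """Whether to fetch the profile from the userinfo endpoint, following the
    'auto' / 'userinfo_endpoint' behaviour described above."""
    if user_profile_method == "userinfo_endpoint":
        return True
    # 'auto': the id_token can be relied upon only when 'openid' was requested,
    # so fall back to the userinfo endpoint when it is absent from scopes.
    return "openid" not in scopes
```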


@@ -194,7 +194,7 @@ When following this route please make sure that the [Platform-specific prerequis
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.7 or later, up to Python 3.9.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
To install the Synapse homeserver run:


@@ -137,6 +137,10 @@ This will install and start a systemd service called `coturn`.
# TLS private key file
pkey=/path/to/privkey.pem
# Ensure the configuration lines that disable TLS/DTLS are commented-out or removed
#no-tls
#no-dtls
```
In this case, replace the `turn:` schemes in the `turn_uris` settings below
@@ -145,6 +149,14 @@ This will install and start a systemd service called `coturn`.
We recommend that you only try to set up TLS/DTLS once you have set up a
basic installation and got it working.
NB: If your TLS certificate was provided by Let's Encrypt, TLS/DTLS will
not work with any Matrix client that uses Chromium's WebRTC library. This
currently includes Element Android & iOS; for more details, see their
[respective](https://github.com/vector-im/element-android/issues/1533)
[issues](https://github.com/vector-im/element-ios/issues/2712) as well as the underlying
[WebRTC issue](https://bugs.chromium.org/p/webrtc/issues/detail?id=11710).
Consider using a ZeroSSL certificate for your TURN server as a working alternative.
1. Ensure your firewall allows traffic into the TURN server on the ports
   you've configured it to listen on (By default: 3478 and 5349 for TURN
   traffic (remember to allow both TCP and UDP traffic), and ports 49152-65535
@@ -250,6 +262,10 @@ Here are a few things to try:
* Check that you have opened your firewall to allow UDP traffic to the UDP
  relay ports (49152-65535 by default).
* Try disabling `coturn`'s TLS/DTLS listeners and enable only its (unencrypted)
  TCP/UDP listeners. (This will only leave signaling traffic unencrypted;
  voice & video WebRTC traffic is always encrypted.)
* Some WebRTC implementations (notably, that of Google Chrome) appear to get
  confused by TURN servers which are reachable over IPv6 (this appears to be
  an unexpected side-effect of its handling of multiple IP addresses as


@@ -85,6 +85,17 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.51.0
## Deprecation of `webclient` listeners and non-HTTP(S) `web_client_location`
Listeners of type `webclient` are deprecated and scheduled to be removed in
Synapse v1.53.0.
Similarly, a non-HTTP(S) `web_client_location` configuration is deprecated and
will become a configuration error in Synapse v1.53.0.
# Upgrading to v1.50.0
## Dropping support for old Python and Postgres versions


@@ -8,7 +8,8 @@
# By default the script will fetch the latest Complement master branch and
# run tests with that. This can be overridden to use a custom Complement
# checkout by setting the COMPLEMENT_DIR environment variable to the
# filepath of a local Complement checkout or by setting the COMPLEMENT_REF
# environment variable to pull a different branch or commit.
#
# By default Synapse is run in monolith mode. This can be overridden by
# setting the WORKERS environment variable.
@@ -23,16 +24,20 @@
# Exit if a line returns a non-zero exit code
set -e
# enable buildkit for the docker builds
export DOCKER_BUILDKIT=1
# Change to the repository root
cd "$(dirname $0)/.."
# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
  COMPLEMENT_REF=${COMPLEMENT_REF:-master}
  echo "COMPLEMENT_DIR not set. Fetching Complement checkout from ${COMPLEMENT_REF}..."
  wget -Nq https://github.com/matrix-org/complement/archive/${COMPLEMENT_REF}.tar.gz
  tar -xzf ${COMPLEMENT_REF}.tar.gz
  COMPLEMENT_DIR=complement-${COMPLEMENT_REF}
  echo "Checkout available at 'complement-${COMPLEMENT_REF}'"
fi
# Build the base Synapse image from the local checkout
@@ -47,7 +52,7 @@ if [[ -n "$WORKERS" ]]; then
  COMPLEMENT_DOCKERFILE=SynapseWorkers.Dockerfile
  # And provide some more configuration to complement.
  export COMPLEMENT_CA=true
  export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=25
else
  export COMPLEMENT_BASE_IMAGE=complement-synapse
  COMPLEMENT_DOCKERFILE=Synapse.Dockerfile
@@ -65,4 +70,5 @@ if [[ -n "$1" ]]; then
fi
# Run the tests!
echo "Images built; running complement"
go test -v -tags synapse_blacklist,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...


@@ -47,7 +47,7 @@ try:
except ImportError:
    pass
__version__ = "1.51.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
    # We import here so that we don't have to install a bunch of deps when


@@ -46,7 +46,9 @@ class UserInfo:
    ips: List[str] = attr.Factory(list)
def get_recent_users(
    txn: LoggingTransaction, since_ms: int, exclude_app_service: bool
) -> List[UserInfo]:
    """Fetches recently registered users and some info on them."""
    sql = """
@@ -56,6 +58,9 @@ def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
        AND deactivated = 0
    """
    if exclude_app_service:
        sql += " AND appservice_id IS NULL"
    txn.execute(sql, (since_ms / 1000,))
    user_infos = [UserInfo(user_id, creation_ts) for user_id, creation_ts in txn]
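The conditional query construction above can be sketched standalone (a hypothetical helper; the exact column list is assumed for illustration):

```python
def build_recent_users_query(exclude_app_service: bool) -> str:
    """Build the recent-users SQL, optionally filtering out appservice users."""
    sql = (
        "SELECT user_id, creation_ts FROM users "
        "WHERE creation_ts > ? AND deactivated = 0"
    )
    if exclude_app_service:
        # Appservice-created users carry a non-NULL appservice_id.
        sql += " AND appservice_id IS NULL"
    return sql
```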
@@ -113,7 +118,7 @@ def main() -> None:
        "-e",
        "--exclude-emails",
        action="store_true",
        help="Exclude users that have validated email addresses.",
    )
    parser.add_argument(
        "-u",
@@ -121,6 +126,12 @@ def main() -> None:
        action="store_true",
        help="Only print user IDs that match.",
    )
    parser.add_argument(
        "-a",
        "--exclude-app-service",
        help="Exclude appservice users.",
        action="store_true",
    )
    config = ReviewConfig()
@@ -133,6 +144,7 @@ def main() -> None:
    since_ms = time.time() * 1000 - Config.parse_duration(config_args.since)
    exclude_users_with_email = config_args.exclude_emails
    exclude_users_with_appservice = config_args.exclude_app_service
    include_context = not config_args.only_users
    for database_config in config.database.databases:
@@ -143,7 +155,7 @@ def main() -> None:
        with make_conn(database_config, engine, "review_recent_signups") as db_conn:
            # This generates a type of Cursor, not LoggingTransaction.
            user_infos = get_recent_users(db_conn.cursor(), since_ms, exclude_users_with_appservice)  # type: ignore[arg-type]
            for user_info in user_infos:
                if exclude_users_with_email and user_info.emails:


@@ -71,6 +71,7 @@ class Auth:
        self._auth_blocking = AuthBlocking(self.hs)
        self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
        self._track_puppeted_user_ips = hs.config.api.track_puppeted_user_ips
        self._macaroon_secret_key = hs.config.key.macaroon_secret_key
        self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
@@ -246,6 +247,18 @@ class Auth:
                user_agent=user_agent,
                device_id=device_id,
            )
            # Track also the puppeted user client IP if enabled and the user is puppeting
            if (
                user_info.user_id != user_info.token_owner
                and self._track_puppeted_user_ips
            ):
                await self.store.insert_client_ip(
                    user_id=user_info.user_id,
                    access_token=access_token,
                    ip=ip_addr,
                    user_agent=user_agent,
                    device_id=device_id,
                )
        if is_guest and not allow_guest:
            raise AuthError(
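The recording rule implemented above (always the token owner; additionally the puppeted user when `track_puppeted_user_ips` is enabled) can be sketched as a pure function (a hypothetical helper, not Synapse's API):

```python
def ips_recorded_against(
    user_id: str, token_owner: str, track_puppeted_user_ips: bool
) -> list:
    """Return the users a client IP is recorded against for one request."""
    users = [token_owner]  # the IP is always recorded against the token owner
    if user_id != token_owner and track_puppeted_user_ips:
        users.append(user_id)  # also record against the puppeted user
    return users
```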


@@ -46,41 +46,41 @@ class RoomDisposition:
    UNSTABLE = "unstable"
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomVersion:
    """An object which describes the unique attributes of a room version."""
    identifier: str  # the identifier for this version
    disposition: str  # one of the RoomDispositions
    event_format: int  # one of the EventFormatVersions
    state_res: int  # one of the StateResolutionVersions
    enforce_key_validity: bool
    # Before MSC2432, m.room.aliases had special auth rules and redaction rules
    special_case_aliases_auth: bool
    # Strictly enforce canonicaljson, do not allow:
    # * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
    # * Floats
    # * NaN, Infinity, -Infinity
    strict_canonicaljson: bool
    # MSC2209: Check 'notifications' key while verifying
    # m.room.power_levels auth rules.
    limit_notifications_power_levels: bool
    # MSC2174/MSC2176: Apply updated redaction rules algorithm.
    msc2176_redaction_rules: bool
    # MSC3083: Support the 'restricted' join_rule.
    msc3083_join_rules: bool
    # MSC3375: Support for the proper redaction rules for MSC3083. This mustn't
    # be enabled if MSC3083 is not.
    msc3375_redaction_rules: bool
    # MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
    # m.room.membership event with membership 'knock'.
    msc2403_knocking: bool
    # MSC2716: Adds m.room.power_levels -> content.historical field to control
    # whether "insertion", "chunk", "marker" events can be sent
    msc2716_historical: bool
    # MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
    msc2716_redactions: bool
class RoomVersions:
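The `auto_attribs=True` switch replaces `field = attr.ib(type=...)` declarations with plain type annotations; the stdlib `dataclasses` module uses the same annotation-driven style, sketched here as a dependency-free analogue (not Synapse code):

```python
from dataclasses import dataclass


# With attrs, `@attr.s(frozen=True, auto_attribs=True)` reads the same way:
# each annotated name becomes an immutable attribute.
@dataclass(frozen=True)
class RoomVersionSketch:
    identifier: str  # the identifier for this version
    enforce_key_validity: bool
    msc2403_knocking: bool
```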


@@ -60,7 +60,7 @@ from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.events.third_party_rules import load_legacy_third_party_event_rules
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.logging.context import PreserveLoggingContext
from synapse.metrics import install_gc_manager, register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.types import ISynapseReactor
@@ -159,6 +159,7 @@ def start_reactor(
        change_resource_limit(soft_file_limit)
        if gc_thresholds:
            gc.set_threshold(*gc_thresholds)
            install_gc_manager()
        run_command()
        # make sure that we run the reactor with the sentinel log context,


@@ -131,9 +131,18 @@ class SynapseHomeServer(HomeServer):
            resources.update(self._module_web_resources)
            self._module_web_resources_consumed = True
        # Try to find something useful to serve at '/':
        #
        # 1. Redirect to the web client if it is an HTTP(S) URL.
        # 2. Redirect to the web client served via Synapse.
        # 3. Redirect to the static "Synapse is running" page.
        # 4. Do not redirect and use a blank resource.
        if self.config.server.web_client_location_is_redirect:
            root_resource: Resource = RootOptionsRedirectResource(
                self.config.server.web_client_location
            )
        elif WEB_CLIENT_PREFIX in resources:
            root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
        elif STATIC_PREFIX in resources:
            root_resource = RootOptionsRedirectResource(STATIC_PREFIX)
        else:
@ -262,15 +271,15 @@ class SynapseHomeServer(HomeServer):
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self) resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
if name == "webclient": if name == "webclient":
# webclient listeners are deprecated as of Synapse v1.51.0, remove it
# in > v1.53.0.
webclient_loc = self.config.server.web_client_location webclient_loc = self.config.server.web_client_location
if webclient_loc is None: if webclient_loc is None:
logger.warning( logger.warning(
"Not enabling webclient resource, as web_client_location is unset." "Not enabling webclient resource, as web_client_location is unset."
) )
elif webclient_loc.startswith("http://") or webclient_loc.startswith( elif self.config.server.web_client_location_is_redirect:
"https://"
):
resources[WEB_CLIENT_PREFIX] = RootRedirect(webclient_loc) resources[WEB_CLIENT_PREFIX] = RootRedirect(webclient_loc)
else: else:
logger.warning( logger.warning(

View File

```diff
@@ -29,6 +29,7 @@ class ApiConfig(Config):
     def read_config(self, config: JsonDict, **kwargs):
         validate_config(_MAIN_SCHEMA, config, ())
         self.room_prejoin_state = list(self._get_prejoin_state_types(config))
+        self.track_puppeted_user_ips = config.get("track_puppeted_user_ips", False)

     def generate_config_section(cls, **kwargs) -> str:
         formatted_default_state_types = "\n".join(
@@ -59,6 +60,21 @@ class ApiConfig(Config):
         #
         #additional_event_types:
         #  - org.example.custom.event.type
+
+        # We record the IP address of clients used to access the API for various
+        # reasons, including displaying it to the user in the "Where you're signed in"
+        # dialog.
+        #
+        # By default, when puppeting another user via the admin API, the client IP
+        # address is recorded against the user who created the access token (ie, the
+        # admin user), and *not* the puppeted user.
+        #
+        # Uncomment the following to also record the IP address against the puppeted
+        # user. (This also means that the puppeted user will count as an "active" user
+        # for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
+        # above.)
+        #
+        #track_puppeted_user_ips: true
         """ % {
             "formatted_default_state_types": formatted_default_state_types
         }
@@ -138,5 +154,8 @@ _MAIN_SCHEMA = {
     "properties": {
         "room_prejoin_state": _ROOM_PREJOIN_STATE_CONFIG_SCHEMA,
         "room_invite_state_types": _ROOM_INVITE_STATE_TYPES_SCHEMA,
+        "track_puppeted_user_ips": {
+            "type": "boolean",
+        },
     },
 }
```
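The new `track_puppeted_user_ips` option follows the usual pattern for boolean config flags: read from the parsed YAML dict with a safe default, then constrained by the JSON schema. A tiny sketch of the read-with-default behaviour (the config dicts here are made up):

```python
# A parsed homeserver.yaml fragment where the flag is explicitly enabled
# (hypothetical data for illustration).
config = {"track_puppeted_user_ips": True}
track_puppeted_user_ips = config.get("track_puppeted_user_ips", False)
assert track_puppeted_user_ips is True

# When the key is absent, the flag quietly falls back to the default of False,
# preserving the old behaviour for existing deployments.
assert {}.get("track_puppeted_user_ips", False) is False
```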

```diff
@@ -55,19 +55,19 @@
 ---------------------------------------------------------------------------------------"""


-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class EmailSubjectConfig:
-    message_from_person_in_room = attr.ib(type=str)
-    message_from_person = attr.ib(type=str)
-    messages_from_person = attr.ib(type=str)
-    messages_in_room = attr.ib(type=str)
-    messages_in_room_and_others = attr.ib(type=str)
-    messages_from_person_and_others = attr.ib(type=str)
-    invite_from_person = attr.ib(type=str)
-    invite_from_person_to_room = attr.ib(type=str)
-    invite_from_person_to_space = attr.ib(type=str)
-    password_reset = attr.ib(type=str)
-    email_validation = attr.ib(type=str)
+    message_from_person_in_room: str
+    message_from_person: str
+    messages_from_person: str
+    messages_in_room: str
+    messages_in_room_and_others: str
+    messages_from_person_and_others: str
+    invite_from_person: str
+    invite_from_person_to_room: str
+    invite_from_person_to_space: str
+    password_reset: str
+    email_validation: str


 class EmailConfig(Config):
```

```diff
@@ -148,10 +148,13 @@ class OIDCConfig(Config):
         #           Defaults to false. Avoid this in production.
         #
         #       user_profile_method: Whether to fetch the user profile from the userinfo
-        #           endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
+        #           endpoint, or to rely on the data returned in the id_token from the
+        #           token_endpoint.
+        #
+        #           Valid values are: 'auto' or 'userinfo_endpoint'.
         #
-        #           Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
-        #           included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
+        #           Defaults to 'auto', which uses the userinfo endpoint if 'openid' is
+        #           not included in 'scopes'. Set to 'userinfo_endpoint' to always use the
         #           userinfo endpoint.
         #
         #       allow_existing_users: set to 'true' to allow a user logging in via OIDC to
```

```diff
@@ -200,8 +200,8 @@ class HttpListenerConfig:
     """Object describing the http-specific parts of the config of a listener"""

     x_forwarded: bool = False
-    resources: List[HttpResourceConfig] = attr.ib(factory=list)
-    additional_resources: Dict[str, dict] = attr.ib(factory=dict)
+    resources: List[HttpResourceConfig] = attr.Factory(list)
+    additional_resources: Dict[str, dict] = attr.Factory(dict)
     tag: Optional[str] = None
@@ -259,7 +259,6 @@ class ServerConfig(Config):
             raise ConfigError(str(e))

         self.pid_file = self.abspath(config.get("pid_file"))
-        self.web_client_location = config.get("web_client_location", None)
         self.soft_file_limit = config.get("soft_file_limit", 0)
         self.daemonize = config.get("daemonize")
         self.print_pidfile = config.get("print_pidfile")
@@ -506,7 +505,16 @@ class ServerConfig(Config):
                 l2.append(listener)
         self.listeners = l2

-        if not self.web_client_location:
-            _warn_if_webclient_configured(self.listeners)
+        self.web_client_location = config.get("web_client_location", None)
+        self.web_client_location_is_redirect = self.web_client_location and (
+            self.web_client_location.startswith("http://")
+            or self.web_client_location.startswith("https://")
+        )
+        # A non-HTTP(S) web client location is deprecated.
+        if self.web_client_location and not self.web_client_location_is_redirect:
+            logger.warning(NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING)
+
+        # Warn if webclient is configured for a worker.
+        _warn_if_webclient_configured(self.listeners)

         self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
@@ -793,13 +801,7 @@ class ServerConfig(Config):
        #
        pid_file: %(pid_file)s

-        # The absolute URL to the web client which /_matrix/client will redirect
-        # to if 'webclient' is configured under the 'listeners' configuration.
-        #
-        # This option can be also set to the filesystem path to the web client
-        # which will be served at /_matrix/client/ if 'webclient' is configured
-        # under the 'listeners' configuration, however this is a security risk:
-        # https://github.com/matrix-org/synapse#security-note
+        # The absolute URL to the web client which / will redirect to.
        #
        #web_client_location: https://riot.example.com/
@@ -883,7 +885,7 @@ class ServerConfig(Config):
        # The default room version for newly created rooms.
        #
        # Known room versions are listed here:
-        # https://matrix.org/docs/spec/#complete-list-of-room-versions
+        # https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions
        #
        # For example, for room version 1, default_room_version should be set
        # to "1".
@@ -1011,8 +1013,6 @@ class ServerConfig(Config):
        #   static: static resources under synapse/static (/_matrix/static). (Mostly
        #         useful for 'fallback authentication'.)
        #
-        #   webclient: A web client. Requires web_client_location to be set.
-        #
        listeners:
          # TLS-enabled listener: for when matrix traffic is sent directly to synapse.
          #
@@ -1349,9 +1349,15 @@ def parse_listener_def(listener: Any) -> ListenerConfig:
     return ListenerConfig(port, bind_addresses, listener_type, tls, http_config)


+NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING = """
+Synapse no longer supports serving a web client. To remove this warning,
+configure 'web_client_location' with an HTTP(S) URL.
+"""
+
 NO_MORE_WEB_CLIENT_WARNING = """
-Synapse no longer includes a web client. To enable a web client, configure
-web_client_location. To remove this warning, remove 'webclient' from the 'listeners'
+Synapse no longer includes a web client. To redirect the root resource to a web client, configure
+'web_client_location'. To remove this warning, remove 'webclient' from the 'listeners'
 configuration.
 """
```
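The new `web_client_location_is_redirect` flag boils down to a prefix check on the configured location: only HTTP(S) URLs are treated as redirect targets, and anything else (such as a filesystem path) is deprecated. A standalone sketch of that check (the function name is ours, not Synapse's):

```python
def is_redirect_location(web_client_location):
    # Mirrors the check added to ServerConfig: truthy value required, and the
    # location must be an absolute HTTP or HTTPS URL.
    return bool(web_client_location) and (
        web_client_location.startswith("http://")
        or web_client_location.startswith("https://")
    )

assert is_redirect_location("https://element.example.com/")
assert not is_redirect_location("/var/www/webclient")  # deprecated filesystem path
assert not is_redirect_location(None)                  # option unset
```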

```diff
@@ -51,12 +51,12 @@ def _instance_to_list_converter(obj: Union[str, List[str]]) -> List[str]:
     return obj


-@attr.s
+@attr.s(auto_attribs=True)
 class InstanceLocationConfig:
     """The host and port to talk to an instance via HTTP replication."""

-    host = attr.ib(type=str)
-    port = attr.ib(type=int)
+    host: str
+    port: int


 @attr.s
@@ -77,34 +77,28 @@ class WriterLocations:
         can only be a single instance.
     """

-    events = attr.ib(
+    events: List[str] = attr.ib(
         default=["master"],
-        type=List[str],
         converter=_instance_to_list_converter,
     )
-    typing = attr.ib(
+    typing: List[str] = attr.ib(
         default=["master"],
-        type=List[str],
         converter=_instance_to_list_converter,
     )
-    to_device = attr.ib(
+    to_device: List[str] = attr.ib(
         default=["master"],
-        type=List[str],
         converter=_instance_to_list_converter,
     )
-    account_data = attr.ib(
+    account_data: List[str] = attr.ib(
         default=["master"],
-        type=List[str],
         converter=_instance_to_list_converter,
     )
-    receipts = attr.ib(
+    receipts: List[str] = attr.ib(
         default=["master"],
-        type=List[str],
         converter=_instance_to_list_converter,
     )
-    presence = attr.ib(
+    presence: List[str] = attr.ib(
         default=["master"],
-        type=List[str],
         converter=_instance_to_list_converter,
     )
```

```diff
@@ -58,7 +58,7 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)


-@attr.s(slots=True, cmp=False)
+@attr.s(slots=True, frozen=True, cmp=False, auto_attribs=True)
 class VerifyJsonRequest:
     """
     A request to verify a JSON object.
@@ -78,10 +78,10 @@ class VerifyJsonRequest:
         key_ids: The set of key_ids to that could be used to verify the JSON object
     """

-    server_name = attr.ib(type=str)
-    get_json_object = attr.ib(type=Callable[[], JsonDict])
-    minimum_valid_until_ts = attr.ib(type=int)
-    key_ids = attr.ib(type=List[str])
+    server_name: str
+    get_json_object: Callable[[], JsonDict]
+    minimum_valid_until_ts: int
+    key_ids: List[str]

     @staticmethod
     def from_json_object(
@@ -124,7 +124,7 @@ class KeyLookupError(ValueError):
     pass


-@attr.s(slots=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class _FetchKeyRequest:
     """A request for keys for a given server.
@@ -138,9 +138,9 @@ class _FetchKeyRequest:
         key_ids: The IDs of the keys to attempt to fetch
     """

-    server_name = attr.ib(type=str)
-    minimum_valid_until_ts = attr.ib(type=int)
-    key_ids = attr.ib(type=List[str])
+    server_name: str
+    minimum_valid_until_ts: int
+    key_ids: List[str]


 class Keyring:
```

```diff
@@ -28,7 +28,7 @@ if TYPE_CHECKING:
     from synapse.storage.databases.main import DataStore


-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class EventContext:
     """
     Holds information relevant to persisting an event
@@ -103,15 +103,15 @@ class EventContext:
         accessed via get_prev_state_ids.
     """

-    rejected = attr.ib(default=False, type=Union[bool, str])
-    _state_group = attr.ib(default=None, type=Optional[int])
-    state_group_before_event = attr.ib(default=None, type=Optional[int])
-    prev_group = attr.ib(default=None, type=Optional[int])
-    delta_ids = attr.ib(default=None, type=Optional[StateMap[str]])
-    app_service = attr.ib(default=None, type=Optional[ApplicationService])
-    _current_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
-    _prev_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
+    rejected: Union[bool, str] = False
+    _state_group: Optional[int] = None
+    state_group_before_event: Optional[int] = None
+    prev_group: Optional[int] = None
+    delta_ids: Optional[StateMap[str]] = None
+    app_service: Optional[ApplicationService] = None
+    _current_state_ids: Optional[StateMap[str]] = None
+    _prev_state_ids: Optional[StateMap[str]] = None

     @staticmethod
     def with_state(
```

```diff
@@ -14,17 +14,7 @@
 # limitations under the License.
 import collections.abc
 import re
-from typing import (
-    TYPE_CHECKING,
-    Any,
-    Callable,
-    Dict,
-    Iterable,
-    List,
-    Mapping,
-    Optional,
-    Union,
-)
+from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Union

 from frozendict import frozendict
@@ -32,14 +22,10 @@ from synapse.api.constants import EventContentFields, EventTypes, RelationTypes
 from synapse.api.errors import Codes, SynapseError
 from synapse.api.room_versions import RoomVersion
 from synapse.types import JsonDict
-from synapse.util.async_helpers import yieldable_gather_results
 from synapse.util.frozenutils import unfreeze

 from . import EventBase

-if TYPE_CHECKING:
-    from synapse.server import HomeServer
-
 # Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
 # (?<!stuff) matches if the current position in the string is not preceded
 # by a match for 'stuff'.
@@ -385,17 +371,12 @@ class EventClientSerializer:
     clients.
     """

-    def __init__(self, hs: "HomeServer"):
-        self.store = hs.get_datastore()
-        self._msc1849_enabled = hs.config.experimental.msc1849_enabled
-        self._msc3440_enabled = hs.config.experimental.msc3440_enabled
-
-    async def serialize_event(
+    def serialize_event(
         self,
         event: Union[JsonDict, EventBase],
         time_now: int,
         *,
-        bundle_aggregations: bool = False,
+        bundle_aggregations: Optional[Dict[str, JsonDict]] = None,
         **kwargs: Any,
     ) -> JsonDict:
         """Serializes a single event.
@@ -418,66 +399,41 @@ class EventClientSerializer:
         serialized_event = serialize_event(event, time_now, **kwargs)

         # Check if there are any bundled aggregations to include with the event.
-        #
-        # Do not bundle aggregations if any of the following at true:
-        #
-        # * Support is disabled via the configuration or the caller.
-        # * The event is a state event.
-        # * The event has been redacted.
-        if (
-            self._msc1849_enabled
-            and bundle_aggregations
-            and not event.is_state()
-            and not event.internal_metadata.is_redacted()
-        ):
-            await self._injected_bundled_aggregations(event, time_now, serialized_event)
+        if bundle_aggregations:
+            event_aggregations = bundle_aggregations.get(event.event_id)
+            if event_aggregations:
+                self._inject_bundled_aggregations(
+                    event,
+                    time_now,
+                    bundle_aggregations[event.event_id],
+                    serialized_event,
+                )

         return serialized_event

-    async def _injected_bundled_aggregations(
-        self, event: EventBase, time_now: int, serialized_event: JsonDict
+    def _inject_bundled_aggregations(
+        self,
+        event: EventBase,
+        time_now: int,
+        aggregations: JsonDict,
+        serialized_event: JsonDict,
     ) -> None:
         """Potentially injects bundled aggregations into the unsigned portion of the serialized event.

         Args:
             event: The event being serialized.
             time_now: The current time in milliseconds
+            aggregations: The bundled aggregation to serialize.
             serialized_event: The serialized event which may be modified.
         """
-        # Do not bundle aggregations for an event which represents an edit or an
-        # annotation. It does not make sense for them to have related events.
-        relates_to = event.content.get("m.relates_to")
-        if isinstance(relates_to, (dict, frozendict)):
-            relation_type = relates_to.get("rel_type")
-            if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE):
-                return
+        # Make a copy in-case the object is cached.
+        aggregations = aggregations.copy()

-        event_id = event.event_id
-        room_id = event.room_id
-
-        # The bundled aggregations to include.
-        aggregations = {}
-
-        annotations = await self.store.get_aggregation_groups_for_event(
-            event_id, room_id
-        )
-        if annotations.chunk:
-            aggregations[RelationTypes.ANNOTATION] = annotations.to_dict()
-
-        references = await self.store.get_relations_for_event(
-            event_id, room_id, RelationTypes.REFERENCE, direction="f"
-        )
-        if references.chunk:
-            aggregations[RelationTypes.REFERENCE] = references.to_dict()
-
-        edit = None
-        if event.type == EventTypes.Message:
-            edit = await self.store.get_applicable_edit(event_id, room_id)
-
-        if edit:
+        if RelationTypes.REPLACE in aggregations:
             # If there is an edit replace the content, preserving existing
             # relations.
+            edit = aggregations[RelationTypes.REPLACE]

             # Ensure we take copies of the edit content, otherwise we risk modifying
             # the original event.
@@ -502,27 +458,19 @@ class EventClientSerializer:
             }

         # If this event is the start of a thread, include a summary of the replies.
-        if self._msc3440_enabled:
-            (
-                thread_count,
-                latest_thread_event,
-            ) = await self.store.get_thread_summary(event_id, room_id)
-            if latest_thread_event:
-                aggregations[RelationTypes.THREAD] = {
-                    # Don't bundle aggregations as this could recurse forever.
-                    "latest_event": await self.serialize_event(
-                        latest_thread_event, time_now, bundle_aggregations=False
-                    ),
-                    "count": thread_count,
-                }
-
-        # If any bundled aggregations were found, include them.
-        if aggregations:
-            serialized_event["unsigned"].setdefault("m.relations", {}).update(
-                aggregations
-            )
+        if RelationTypes.THREAD in aggregations:
+            # Serialize the latest thread event.
+            latest_thread_event = aggregations[RelationTypes.THREAD]["latest_event"]
+
+            # Don't bundle aggregations as this could recurse forever.
+            aggregations[RelationTypes.THREAD]["latest_event"] = self.serialize_event(
+                latest_thread_event, time_now, bundle_aggregations=None
+            )
+
+        # Include the bundled aggregations in the event.
+        serialized_event["unsigned"].setdefault("m.relations", {}).update(aggregations)

-    async def serialize_events(
+    def serialize_events(
         self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any
     ) -> List[JsonDict]:
         """Serializes multiple events.
@@ -535,9 +483,9 @@ class EventClientSerializer:
         Returns:
             The list of serialized events
         """
-        return await yieldable_gather_results(
-            self.serialize_event, events, time_now=time_now, **kwargs
-        )
+        return [
+            self.serialize_event(event, time_now=time_now, **kwargs) for event in events
+        ]


 def copy_power_levels_contents(
```
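The serializer change above replaces per-event database fetches with a precomputed map of event ID to bundled aggregations, which also lets `serialize_event` become synchronous. A toy sketch of the new calling pattern (the names and dict shapes here are illustrative, not Synapse's actual API):

```python
def serialize_event(event, bundle_aggregations=None):
    # Serialization is now a pure transform: any aggregations were already
    # looked up by the caller and passed in as a map of event ID -> dict.
    serialized = {"event_id": event["event_id"], "unsigned": {}}
    if bundle_aggregations:
        aggregations = bundle_aggregations.get(event["event_id"])
        if aggregations:
            serialized["unsigned"].setdefault("m.relations", {}).update(aggregations)
    return serialized

# The caller precomputes aggregations once for the whole batch, so serializing
# N events no longer issues N sets of database queries.
events = [{"event_id": "$a"}, {"event_id": "$b"}]
aggregation_map = {"$a": {"m.thread": {"count": 2}}}
serialized = [serialize_event(e, bundle_aggregations=aggregation_map) for e in events]
assert serialized[0]["unsigned"]["m.relations"] == {"m.thread": {"count": 2}}
assert serialized[1]["unsigned"] == {}
```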

```diff
@@ -230,6 +230,10 @@ def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventB
     # origin, etc etc)
     assert_params_in_dict(pdu_json, ("type", "depth"))

+    # Strip any unauthorized values from "unsigned" if they exist
+    if "unsigned" in pdu_json:
+        _strip_unsigned_values(pdu_json)
+
     depth = pdu_json["depth"]
     if not isinstance(depth, int):
         raise SynapseError(400, "Depth %r not an intger" % (depth,), Codes.BAD_JSON)
@@ -245,3 +249,24 @@ def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventB
     event = make_event_from_dict(pdu_json, room_version)
     return event


+def _strip_unsigned_values(pdu_dict: JsonDict) -> None:
+    """
+    Strip any unsigned values unless specifically allowed, as defined by the whitelist.
+
+    pdu: the json dict to strip values from. Note that the dict is mutated by this
+    function
+    """
+    unsigned = pdu_dict["unsigned"]
+
+    if not isinstance(unsigned, dict):
+        pdu_dict["unsigned"] = {}
+
+    if pdu_dict["type"] == "m.room.member":
+        whitelist = ["knock_room_state", "invite_room_state", "age"]
+    else:
+        whitelist = ["age"]
+
+    filtered_unsigned = {k: v for k, v in unsigned.items() if k in whitelist}
+    pdu_dict["unsigned"] = filtered_unsigned
```
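The `unsigned`-stripping fix above (the \#11530 bugfix from the changelog) can be exercised standalone. This sketch follows the same whitelist rules, with an early return added when `unsigned` is not a dict (our own defensive choice for the sketch):

```python
def strip_unsigned_values(pdu_dict):
    # Drop everything from "unsigned" except a small per-event-type whitelist,
    # so remote servers cannot smuggle arbitrary data in the unsigned field.
    unsigned = pdu_dict["unsigned"]
    if not isinstance(unsigned, dict):
        pdu_dict["unsigned"] = {}
        return
    if pdu_dict["type"] == "m.room.member":
        whitelist = ["knock_room_state", "invite_room_state", "age"]
    else:
        whitelist = ["age"]
    pdu_dict["unsigned"] = {k: v for k, v in unsigned.items() if k in whitelist}

# Non-member events keep only "age"; anything else is discarded.
pdu = {"type": "m.room.message", "unsigned": {"age": 5, "injected_key": True}}
strip_unsigned_values(pdu)
assert pdu["unsigned"] == {"age": 5}

# Member events additionally keep the stripped room-state keys.
member = {"type": "m.room.member", "unsigned": {"invite_room_state": [], "other": 1}}
strip_unsigned_values(member)
assert member["unsigned"] == {"invite_room_state": []}
```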

```diff
@@ -56,7 +56,6 @@ from synapse.api.room_versions import (
 from synapse.events import EventBase, builder
 from synapse.federation.federation_base import FederationBase, event_from_pdu_json
 from synapse.federation.transport.client import SendJoinResponse
-from synapse.logging.utils import log_function
 from synapse.types import JsonDict, get_domain_from_id
 from synapse.util.async_helpers import concurrently_execute
 from synapse.util.caches.expiringcache import ExpiringCache
@@ -119,7 +118,8 @@ class FederationClient(FederationBase):
         # It is a map of (room ID, suggested-only) -> the response of
         # get_room_hierarchy.
         self._get_room_hierarchy_cache: ExpiringCache[
-            Tuple[str, bool], Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]
+            Tuple[str, bool],
+            Tuple[JsonDict, Sequence[JsonDict], Sequence[JsonDict], Sequence[str]],
         ] = ExpiringCache(
             cache_name="get_room_hierarchy_cache",
             clock=self._clock,
@@ -144,7 +144,6 @@ class FederationClient(FederationBase):
         if destination_dict:
             self.pdu_destination_tried[event_id] = destination_dict

-    @log_function
     async def make_query(
         self,
         destination: str,
@@ -178,7 +177,6 @@ class FederationClient(FederationBase):
             ignore_backoff=ignore_backoff,
         )

-    @log_function
     async def query_client_keys(
         self, destination: str, content: JsonDict, timeout: int
     ) -> JsonDict:
@@ -196,7 +194,6 @@ class FederationClient(FederationBase):
             destination, content, timeout
         )

-    @log_function
     async def query_user_devices(
         self, destination: str, user_id: str, timeout: int = 30000
     ) -> JsonDict:
@@ -208,7 +205,6 @@ class FederationClient(FederationBase):
             destination, user_id, timeout
         )

-    @log_function
     async def claim_client_keys(
         self, destination: str, content: JsonDict, timeout: int
     ) -> JsonDict:
@@ -1338,7 +1334,7 @@ class FederationClient(FederationBase):
         destinations: Iterable[str],
         room_id: str,
         suggested_only: bool,
-    ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]:
+    ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[JsonDict], Sequence[str]]:
         """
         Call other servers to get a hierarchy of the given room.
@@ -1353,7 +1349,8 @@ class FederationClient(FederationBase):
         Returns:
             A tuple of:
-                The room as a JSON dictionary.
+                The room as a JSON dictionary, without a "children_state" key.
+                A list of `m.space.child` state events.
                 A list of children rooms, as JSON dictionaries.
                 A list of inaccessible children room IDs.
@@ -1368,7 +1365,7 @@ class FederationClient(FederationBase):
         async def send_request(
             destination: str,
-        ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]:
+        ) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[JsonDict], Sequence[str]]:
             try:
                 res = await self.transport_layer.get_room_hierarchy(
                     destination=destination,
@@ -1397,7 +1394,7 @@ class FederationClient(FederationBase):
                 raise InvalidResponseError("'room' must be a dict")

             # Validate children_state of the room.
-            children_state = room.get("children_state", [])
+            children_state = room.pop("children_state", [])
             if not isinstance(children_state, Sequence):
                 raise InvalidResponseError("'room.children_state' must be a list")
             if any(not isinstance(e, dict) for e in children_state):
@@ -1426,7 +1423,7 @@ class FederationClient(FederationBase):
                     "Invalid room ID in 'inaccessible_children' list"
                 )

-            return room, children, inaccessible_children
+            return room, children_state, children, inaccessible_children

         try:
             result = await self._try_destination_list(
@@ -1474,8 +1471,6 @@ class FederationClient(FederationBase):
             if event.room_id == room_id:
                 children_events.append(event.data)
                 children_room_ids.add(event.state_key)
-        # And add them under the requested room.
-        requested_room["children_state"] = children_events

         # Find the children rooms.
         children = []
@@ -1485,7 +1480,7 @@ class FederationClient(FederationBase):
             # It isn't clear from the response whether some of the rooms are
             # not accessible.
-            result = (requested_room, children, ())
+            result = (requested_room, children_events, children, ())

             # Cache the result to avoid fetching data over federation every time.
             self._get_room_hierarchy_cache[(room_id, suggested_only)] = result
```
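The room hierarchy change moves `children_state` out of the returned room summary and into its own element of the result tuple, so callers no longer mutate the room dict. A small sketch of that reshaping (the data and function name are illustrative):

```python
def split_children_state(room):
    # Pop "children_state" off the room summary and return it separately,
    # validating it is a list as the federation response parser does.
    children_state = room.pop("children_state", [])
    if not isinstance(children_state, list):
        raise ValueError("'room.children_state' must be a list")
    return room, children_state

room = {
    "room_id": "!space:example.org",
    "children_state": [{"type": "m.space.child", "state_key": "!child:example.org"}],
}
room, children_state = split_children_state(room)
assert "children_state" not in room
assert children_state[0]["state_key"] == "!child:example.org"
```

Keeping the state events out of the cached room dict means the cached summary and the `m.space.child` events can be consumed (and serialized) independently.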

```diff
@@ -58,7 +58,6 @@ from synapse.logging.context import (
     run_in_background,
 )
 from synapse.logging.opentracing import log_kv, start_active_span_from_edu, trace
-from synapse.logging.utils import log_function
 from synapse.metrics.background_process_metrics import wrap_as_background_process
 from synapse.replication.http.federation import (
     ReplicationFederationSendEduRestServlet,
@@ -859,7 +858,6 @@ class FederationServer(FederationBase):
         res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
         return 200, res

-    @log_function
     async def on_query_client_keys(
         self, origin: str, content: Dict[str, str]
     ) -> Tuple[int, Dict[str, Any]]:
@@ -940,7 +938,6 @@ class FederationServer(FederationBase):
         return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}

-    @log_function
     async def on_openid_userinfo(self, token: str) -> Optional[str]:
         ts_now_ms = self._clock.time_msec()
         return await self.store.get_user_id_for_open_id_token(token, ts_now_ms)
```

```diff
@@ -23,7 +23,6 @@ import logging
 from typing import Optional, Tuple

 from synapse.federation.units import Transaction
-from synapse.logging.utils import log_function
 from synapse.storage.databases.main import DataStore
 from synapse.types import JsonDict
@@ -36,7 +35,6 @@ class TransactionActions:
     def __init__(self, datastore: DataStore):
         self.store = datastore

-    @log_function
     async def have_responded(
         self, origin: str, transaction: Transaction
     ) -> Optional[Tuple[int, JsonDict]]:
@@ -53,7 +51,6 @@ class TransactionActions:
         return await self.store.get_received_txn_response(transaction_id, origin)

-    @log_function
     async def set_response(
         self, origin: str, transaction: Transaction, code: int, response: JsonDict
     ) -> None:
```


```diff
@@ -607,18 +607,18 @@ class PerDestinationQueue:
         self._pending_pdus = []


-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class _TransactionQueueManager:
     """A helper async context manager for pulling stuff off the queues and
     tracking what was last successfully sent, etc.
     """

-    queue = attr.ib(type=PerDestinationQueue)
+    queue: PerDestinationQueue

-    _device_stream_id = attr.ib(type=Optional[int], default=None)
-    _device_list_id = attr.ib(type=Optional[int], default=None)
-    _last_stream_ordering = attr.ib(type=Optional[int], default=None)
-    _pdus = attr.ib(type=List[EventBase], factory=list)
+    _device_stream_id: Optional[int] = None
+    _device_list_id: Optional[int] = None
+    _last_stream_ordering: Optional[int] = None
+    _pdus: List[EventBase] = attr.Factory(list)

     async def __aenter__(self) -> Tuple[List[EventBase], List[Edu]]:
         # First we calculate the EDUs we want to send, if any.
```

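The attrs migration above replaces explicit `attr.ib(type=...)` declarations with `auto_attribs=True`, which derives fields from PEP 526 annotations instead. A minimal sketch of the resulting style (the class and field names here are illustrative, not Synapse's):

```python
from typing import List, Optional

import attr


# Hypothetical class mirroring the annotation-based attrs style in the diff.
@attr.s(slots=True, auto_attribs=True)
class QueueState:
    destination: str
    # Private attributes keep their leading underscore but are exposed as
    # init parameters without it, exactly as with attr.ib().
    _stream_id: Optional[int] = None
    _pdus: List[str] = attr.Factory(list)


q = QueueState("example.org")
```

Behaviour is unchanged: `attr.Factory(list)` takes the place of `factory=list`, and `slots=True` still generates `__slots__`.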

```diff
@@ -35,6 +35,7 @@ if TYPE_CHECKING:
     import synapse.server

 logger = logging.getLogger(__name__)
+issue_8631_logger = logging.getLogger("synapse.8631_debug")

 last_pdu_ts_metric = Gauge(
     "synapse_federation_last_sent_pdu_time",
@@ -124,6 +125,17 @@ class TransactionManager:
             len(pdus),
             len(edus),
         )

+        if issue_8631_logger.isEnabledFor(logging.DEBUG):
+            DEVICE_UPDATE_EDUS = {"m.device_list_update", "m.signing_key_update"}
+            device_list_updates = [
+                edu.content for edu in edus if edu.edu_type in DEVICE_UPDATE_EDUS
+            ]
+            if device_list_updates:
+                issue_8631_logger.debug(
+                    "about to send txn [%s] including device list updates: %s",
+                    transaction.transaction_id,
+                    device_list_updates,
+                )

         # Actually send the transaction
```

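Note how the new debug logging is gated on `isEnabledFor(logging.DEBUG)`: the list comprehension over the EDUs only runs when the dedicated logger is actually enabled, so the extra work costs nothing in normal operation. A self-contained sketch of the pattern (logger name and EDU shape are illustrative):

```python
import logging

# Name mirrors the dedicated debug logger in the diff; purely illustrative.
debug_logger = logging.getLogger("example.8631_debug")


def send_edus(edus):
    # Build the (potentially large) debug payload only when DEBUG is on,
    # so the comprehension and formatting are skipped in normal operation.
    if debug_logger.isEnabledFor(logging.DEBUG):
        updates = [e["content"] for e in edus if e["edu_type"] == "m.device_list_update"]
        if updates:
            debug_logger.debug("sending device list updates: %s", updates)
    return len(edus)


n = send_edus([{"edu_type": "m.device_list_update", "content": {}}])
```

An operator can then switch this one logger to DEBUG in the logging config without drowning the main log.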

```diff
@@ -44,7 +44,6 @@ from synapse.api.urls import (
 from synapse.events import EventBase, make_event_from_dict
 from synapse.federation.units import Transaction
 from synapse.http.matrixfederationclient import ByteParser
-from synapse.logging.utils import log_function
 from synapse.types import JsonDict

 logger = logging.getLogger(__name__)
@@ -62,7 +61,6 @@ class TransportLayerClient:
         self.server_name = hs.hostname
         self.client = hs.get_federation_http_client()

-    @log_function
     async def get_room_state_ids(
         self, destination: str, room_id: str, event_id: str
     ) -> JsonDict:
@@ -88,7 +86,6 @@ class TransportLayerClient:
             try_trailing_slash_on_400=True,
         )

-    @log_function
     async def get_event(
         self, destination: str, event_id: str, timeout: Optional[int] = None
     ) -> JsonDict:
@@ -111,7 +108,6 @@ class TransportLayerClient:
             destination, path=path, timeout=timeout, try_trailing_slash_on_400=True
         )

-    @log_function
     async def backfill(
         self, destination: str, room_id: str, event_tuples: Collection[str], limit: int
     ) -> Optional[JsonDict]:
@@ -149,7 +145,6 @@ class TransportLayerClient:
             destination, path=path, args=args, try_trailing_slash_on_400=True
         )

-    @log_function
     async def timestamp_to_event(
         self, destination: str, room_id: str, timestamp: int, direction: str
     ) -> Union[JsonDict, List]:
@@ -185,7 +180,6 @@ class TransportLayerClient:
         return remote_response

-    @log_function
     async def send_transaction(
         self,
         transaction: Transaction,
@@ -234,7 +228,6 @@ class TransportLayerClient:
             try_trailing_slash_on_400=True,
         )

-    @log_function
     async def make_query(
         self,
         destination: str,
@@ -254,7 +247,6 @@ class TransportLayerClient:
             ignore_backoff=ignore_backoff,
         )

-    @log_function
     async def make_membership_event(
         self,
         destination: str,
@@ -317,7 +309,6 @@ class TransportLayerClient:
             ignore_backoff=ignore_backoff,
         )

-    @log_function
     async def send_join_v1(
         self,
         room_version: RoomVersion,
@@ -336,7 +327,6 @@ class TransportLayerClient:
             max_response_size=MAX_RESPONSE_SIZE_SEND_JOIN,
         )

-    @log_function
     async def send_join_v2(
         self,
         room_version: RoomVersion,
@@ -355,7 +345,6 @@ class TransportLayerClient:
             max_response_size=MAX_RESPONSE_SIZE_SEND_JOIN,
         )

-    @log_function
     async def send_leave_v1(
         self, destination: str, room_id: str, event_id: str, content: JsonDict
     ) -> Tuple[int, JsonDict]:
@@ -372,7 +361,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def send_leave_v2(
         self, destination: str, room_id: str, event_id: str, content: JsonDict
     ) -> JsonDict:
@@ -389,7 +377,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def send_knock_v1(
         self,
         destination: str,
@@ -423,7 +410,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=content
         )

-    @log_function
     async def send_invite_v1(
         self, destination: str, room_id: str, event_id: str, content: JsonDict
     ) -> Tuple[int, JsonDict]:
@@ -433,7 +419,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=content, ignore_backoff=True
         )

-    @log_function
     async def send_invite_v2(
         self, destination: str, room_id: str, event_id: str, content: JsonDict
     ) -> JsonDict:
@@ -443,7 +428,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=content, ignore_backoff=True
         )

-    @log_function
     async def get_public_rooms(
         self,
         remote_server: str,
@@ -516,7 +500,6 @@ class TransportLayerClient:
         return response

-    @log_function
     async def exchange_third_party_invite(
         self, destination: str, room_id: str, event_dict: JsonDict
     ) -> JsonDict:
@@ -526,7 +509,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=event_dict
         )

-    @log_function
     async def get_event_auth(
         self, destination: str, room_id: str, event_id: str
     ) -> JsonDict:
@@ -534,7 +516,6 @@ class TransportLayerClient:
         return await self.client.get_json(destination=destination, path=path)

-    @log_function
     async def query_client_keys(
         self, destination: str, query_content: JsonDict, timeout: int
     ) -> JsonDict:
@@ -576,7 +557,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=query_content, timeout=timeout
         )

-    @log_function
     async def query_user_devices(
         self, destination: str, user_id: str, timeout: int
     ) -> JsonDict:
@@ -616,7 +596,6 @@ class TransportLayerClient:
             destination=destination, path=path, timeout=timeout
         )

-    @log_function
     async def claim_client_keys(
         self, destination: str, query_content: JsonDict, timeout: int
     ) -> JsonDict:
@@ -655,7 +634,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=query_content, timeout=timeout
         )

-    @log_function
     async def get_missing_events(
         self,
         destination: str,
@@ -680,7 +658,6 @@ class TransportLayerClient:
             timeout=timeout,
         )

-    @log_function
     async def get_group_profile(
         self, destination: str, group_id: str, requester_user_id: str
     ) -> JsonDict:
@@ -694,7 +671,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def update_group_profile(
         self, destination: str, group_id: str, requester_user_id: str, content: JsonDict
     ) -> JsonDict:
@@ -716,7 +692,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def get_group_summary(
         self, destination: str, group_id: str, requester_user_id: str
     ) -> JsonDict:
@@ -730,7 +705,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def get_rooms_in_group(
         self, destination: str, group_id: str, requester_user_id: str
     ) -> JsonDict:
@@ -798,7 +772,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def get_users_in_group(
         self, destination: str, group_id: str, requester_user_id: str
     ) -> JsonDict:
@@ -812,7 +785,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def get_invited_users_in_group(
         self, destination: str, group_id: str, requester_user_id: str
     ) -> JsonDict:
@@ -826,7 +798,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def accept_group_invite(
         self, destination: str, group_id: str, user_id: str, content: JsonDict
     ) -> JsonDict:
@@ -837,7 +808,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=content, ignore_backoff=True
         )

-    @log_function
     def join_group(
         self, destination: str, group_id: str, user_id: str, content: JsonDict
     ) -> Awaitable[JsonDict]:
@@ -848,7 +818,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=content, ignore_backoff=True
         )

-    @log_function
     async def invite_to_group(
         self,
         destination: str,
@@ -868,7 +837,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def invite_to_group_notification(
         self, destination: str, group_id: str, user_id: str, content: JsonDict
     ) -> JsonDict:
@@ -882,7 +850,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=content, ignore_backoff=True
         )

-    @log_function
     async def remove_user_from_group(
         self,
         destination: str,
@@ -902,7 +869,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def remove_user_from_group_notification(
         self, destination: str, group_id: str, user_id: str, content: JsonDict
     ) -> JsonDict:
@@ -916,7 +882,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=content, ignore_backoff=True
         )

-    @log_function
     async def renew_group_attestation(
         self, destination: str, group_id: str, user_id: str, content: JsonDict
     ) -> JsonDict:
@@ -930,7 +895,6 @@ class TransportLayerClient:
             destination=destination, path=path, data=content, ignore_backoff=True
         )

-    @log_function
     async def update_group_summary_room(
         self,
         destination: str,
@@ -959,7 +923,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def delete_group_summary_room(
         self,
         destination: str,
@@ -986,7 +949,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def get_group_categories(
         self, destination: str, group_id: str, requester_user_id: str
     ) -> JsonDict:
@@ -1000,7 +962,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def get_group_category(
         self, destination: str, group_id: str, requester_user_id: str, category_id: str
     ) -> JsonDict:
@@ -1014,7 +975,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def update_group_category(
         self,
         destination: str,
@@ -1034,7 +994,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def delete_group_category(
         self, destination: str, group_id: str, requester_user_id: str, category_id: str
     ) -> JsonDict:
@@ -1048,7 +1007,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def get_group_roles(
         self, destination: str, group_id: str, requester_user_id: str
     ) -> JsonDict:
@@ -1062,7 +1020,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def get_group_role(
         self, destination: str, group_id: str, requester_user_id: str, role_id: str
     ) -> JsonDict:
@@ -1076,7 +1033,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def update_group_role(
         self,
         destination: str,
@@ -1096,7 +1052,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def delete_group_role(
         self, destination: str, group_id: str, requester_user_id: str, role_id: str
     ) -> JsonDict:
@@ -1110,7 +1065,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def update_group_summary_user(
         self,
         destination: str,
@@ -1136,7 +1090,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def set_group_join_policy(
         self, destination: str, group_id: str, requester_user_id: str, content: JsonDict
     ) -> JsonDict:
@@ -1151,7 +1104,6 @@ class TransportLayerClient:
             ignore_backoff=True,
         )

-    @log_function
     async def delete_group_summary_user(
         self,
         destination: str,
```


```diff
@@ -36,6 +36,7 @@ from synapse.util.ratelimitutils import FederationRateLimiter
 from synapse.util.versionstring import get_version_string

 logger = logging.getLogger(__name__)
+issue_8631_logger = logging.getLogger("synapse.8631_debug")


 class BaseFederationServerServlet(BaseFederationServlet):
@@ -95,6 +96,20 @@ class FederationSendServlet(BaseFederationServerServlet):
                 len(transaction_data.get("edus", [])),
             )

+            if issue_8631_logger.isEnabledFor(logging.DEBUG):
+                DEVICE_UPDATE_EDUS = {"m.device_list_update", "m.signing_key_update"}
+                device_list_updates = [
+                    edu.content
+                    for edu in transaction_data.get("edus", [])
+                    if edu.edu_type in DEVICE_UPDATE_EDUS
+                ]
+                if device_list_updates:
+                    issue_8631_logger.debug(
+                        "received transaction [%s] including device list updates: %s",
+                        transaction_id,
+                        device_list_updates,
+                    )
+
         except Exception as e:
             logger.exception(e)
             return 400, {"error": "Invalid transaction"}
```


```diff
@@ -77,7 +77,7 @@ class AccountDataHandler:
     async def add_account_data_for_user(
         self, user_id: str, account_data_type: str, content: JsonDict
     ) -> int:
-        """Add some account_data to a room for a user.
+        """Add some global account_data for a user.

         Args:
             user_id: The user to add a tag for.
```


```diff
@@ -55,8 +55,33 @@ class AdminHandler:
     async def get_user(self, user: UserID) -> Optional[JsonDict]:
         """Function to get user details"""
-        ret = await self.store.get_user_by_id(user.to_string())
-        if ret:
+        user_info_dict = await self.store.get_user_by_id(user.to_string())
+        if user_info_dict is None:
+            return None
+
+        # Restrict returned information to a known set of fields. This prevents additional
+        # fields added to get_user_by_id from modifying Synapse's external API surface.
+        user_info_to_return = {
+            "name",
+            "admin",
+            "deactivated",
+            "shadow_banned",
+            "creation_ts",
+            "appservice_id",
+            "consent_server_notice_sent",
+            "consent_version",
+            "user_type",
+            "is_guest",
+        }
+
+        # Restrict returned keys to a known set.
+        user_info_dict = {
+            key: value
+            for key, value in user_info_dict.items()
+            if key in user_info_to_return
+        }
+
+        # Add additional user metadata
         profile = await self.store.get_profileinfo(user.localpart)
         threepids = await self.store.user_get_threepids(user.to_string())
         external_ids = [
@@ -65,11 +90,12 @@ class AdminHandler:
                 user.to_string()
             )
         ]
-        ret["displayname"] = profile.display_name
-        ret["avatar_url"] = profile.avatar_url
-        ret["threepids"] = threepids
-        ret["external_ids"] = external_ids
-        return ret
+        user_info_dict["displayname"] = profile.display_name
+        user_info_dict["avatar_url"] = profile.avatar_url
+        user_info_dict["threepids"] = threepids
+        user_info_dict["external_ids"] = external_ids
+
+        return user_info_dict

     async def export_user_data(self, user_id: str, writer: "ExfiltrationWriter") -> Any:
         """Write all data we have on the user to the given writer.
```

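The admin-handler change above filters the raw storage row through an allow-list of keys before it reaches the external API, so new database columns never leak out by accident. The pattern reduces to a small dict comprehension (the field names below are an illustrative subset, not the full list from the diff):

```python
# Illustrative subset of the allow-list used in the diff.
ALLOWED_KEYS = {"name", "admin", "deactivated"}


def restrict_keys(row: dict, allowed: set) -> dict:
    # Drop storage columns that should never reach the external API.
    return {key: value for key, value in row.items() if key in allowed}


public = restrict_keys(
    {"name": "@alice:example.org", "admin": 1, "password_hash": "secret"},
    ALLOWED_KEYS,
)
```

Filtering at the API boundary, rather than trusting the storage layer's shape, keeps the response schema stable as the schema underneath evolves.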

```diff
@@ -168,25 +168,25 @@ def login_id_phone_to_thirdparty(identifier: JsonDict) -> Dict[str, str]:
     }


-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class SsoLoginExtraAttributes:
     """Data we track about SAML2 sessions"""

     # time the session was created, in milliseconds
-    creation_time = attr.ib(type=int)
-    extra_attributes = attr.ib(type=JsonDict)
+    creation_time: int
+    extra_attributes: JsonDict


-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class LoginTokenAttributes:
     """Data we store in a short-term login token"""

-    user_id = attr.ib(type=str)
+    user_id: str

-    auth_provider_id = attr.ib(type=str)
+    auth_provider_id: str
     """The SSO Identity Provider that the user authenticated with, to get this token."""

-    auth_provider_session_id = attr.ib(type=Optional[str])
+    auth_provider_session_id: Optional[str]
     """The session ID advertised by the SSO Identity Provider."""
@@ -2281,7 +2281,7 @@ class PasswordAuthProvider:
         # call all of the on_logged_out callbacks
         for callback in self.on_logged_out_callbacks:
             try:
-                callback(user_id, device_id, access_token)
+                await callback(user_id, device_id, access_token)
             except Exception as e:
                 logger.warning("Failed to run module API callback %s: %s", callback, e)
                 continue
```

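The `on_logged_out` fix above is subtle: calling an async callback without `await` merely creates a coroutine, so its exception surfaces later as an unawaited-coroutine warning instead of being caught by the surrounding `try`. Awaiting inside the `try` routes failures into the `except` as intended. A self-contained sketch (names are illustrative):

```python
import asyncio


async def failing_callback(user_id):
    # Stand-in for a misbehaving module callback.
    raise RuntimeError("boom")


async def run_callbacks(callbacks, user_id):
    errors = []
    for cb in callbacks:
        try:
            # Awaiting inside the try means exceptions raised by the
            # coroutine are caught here rather than lost; a bare cb(user_id)
            # would only create the coroutine and never surface the error.
            await cb(user_id)
        except Exception as e:
            errors.append(e)
            continue
    return errors


errors = asyncio.run(run_callbacks([failing_callback], "@alice:example.org"))
```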

```diff
@@ -948,8 +948,16 @@ class DeviceListUpdater:
             devices = []
             ignore_devices = True
         else:
+            prev_stream_id = await self.store.get_device_list_last_stream_id_for_remote(
+                user_id
+            )
             cached_devices = await self.store.get_cached_devices_for_user(user_id)
-            if cached_devices == {d["device_id"]: d for d in devices}:
+
+            # To ensure that a user with no devices is cached, we skip the resync only
+            # if we have a stream_id from previously writing a cache entry.
+            if prev_stream_id is not None and cached_devices == {
+                d["device_id"]: d for d in devices
+            }:
                 logging.info(
                     "Skipping device list resync for %s, as our cache matches already",
                     user_id,
```

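The device-list fix above guards the "cache already matches" shortcut with a previous stream id: for a user with no devices, an empty cache compares equal to an empty device list even when nothing was ever written, so the resync must not be skipped until a prior write is proven. The condition reduces to a small predicate (function and parameter names here are illustrative):

```python
from typing import List, Optional


def should_skip_resync(
    prev_stream_id: Optional[int], cached: dict, incoming: List[dict]
) -> bool:
    # A user with no devices yields an empty cache that still needs to be
    # written once; only skip when a stream_id proves a prior cache write.
    return prev_stream_id is not None and cached == {
        d["device_id"]: d for d in incoming
    }
```

Without the `prev_stream_id` check, `{} == {}` would skip the write forever and the "no devices" fact would never be cached.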

```diff
@@ -1321,14 +1321,14 @@ def _one_time_keys_match(old_key_json: str, new_key: JsonDict) -> bool:
     return old_key == new_key_copy


-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class SignatureListItem:
     """An item in the signature list as used by upload_signatures_for_device_keys."""

-    signing_key_id = attr.ib(type=str)
-    target_user_id = attr.ib(type=str)
-    target_device_id = attr.ib(type=str)
-    signature = attr.ib(type=JsonDict)
+    signing_key_id: str
+    target_user_id: str
+    target_device_id: str
+    signature: JsonDict


 class SigningKeyEduUpdater:
```


```diff
@@ -20,7 +20,6 @@ from synapse.api.constants import EduTypes, EventTypes, Membership
 from synapse.api.errors import AuthError, SynapseError
 from synapse.events import EventBase
 from synapse.handlers.presence import format_user_presence_state
-from synapse.logging.utils import log_function
 from synapse.streams.config import PaginationConfig
 from synapse.types import JsonDict, UserID
 from synapse.visibility import filter_events_for_client
@@ -43,7 +42,6 @@ class EventStreamHandler:
         self._server_notices_sender = hs.get_server_notices_sender()
         self._event_serializer = hs.get_event_client_serializer()

-    @log_function
     async def get_stream(
         self,
         auth_user_id: str,
@@ -119,7 +117,7 @@ class EventStreamHandler:

             events.extend(to_add)

-            chunks = await self._event_serializer.serialize_events(
+            chunks = self._event_serializer.serialize_events(
                 events,
                 time_now,
                 as_client_event=as_client_event,
```


```diff
@@ -51,7 +51,6 @@ from synapse.logging.context import (
     preserve_fn,
     run_in_background,
 )
-from synapse.logging.utils import log_function
 from synapse.replication.http.federation import (
     ReplicationCleanRoomRestServlet,
     ReplicationStoreRoomOnOutlierMembershipRestServlet,
@@ -556,7 +555,6 @@ class FederationHandler:
             run_in_background(self._handle_queued_pdus, room_queue)

-    @log_function
     async def do_knock(
         self,
         target_hosts: List[str],
@@ -928,7 +926,6 @@ class FederationHandler:

         return event

-    @log_function
     async def on_make_knock_request(
         self, origin: str, room_id: str, user_id: str
     ) -> EventBase:
@@ -1039,7 +1036,6 @@ class FederationHandler:
         else:
             return []

-    @log_function
     async def on_backfill_request(
         self, origin: str, room_id: str, pdu_list: List[str], limit: int
     ) -> List[EventBase]:
@@ -1056,7 +1052,6 @@ class FederationHandler:

         return events

-    @log_function
     async def get_persisted_pdu(
         self, origin: str, event_id: str
     ) -> Optional[EventBase]:
@@ -1118,7 +1113,6 @@ class FederationHandler:

         return missing_events

-    @log_function
     async def exchange_third_party_invite(
         self, sender_user_id: str, target_user_id: str, room_id: str, signed: JsonDict
     ) -> None:
```


```diff
@@ -56,7 +56,6 @@ from synapse.events import EventBase
 from synapse.events.snapshot import EventContext
 from synapse.federation.federation_client import InvalidResponseError
 from synapse.logging.context import nested_logging_context, run_in_background
-from synapse.logging.utils import log_function
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
 from synapse.replication.http.federation import (
@@ -275,7 +274,6 @@ class FederationEventHandler:

         await self._process_received_pdu(origin, pdu, state=None)

-    @log_function
     async def on_send_membership_event(
         self, origin: str, event: EventBase
     ) -> Tuple[EventBase, EventContext]:
@@ -472,7 +470,6 @@ class FederationEventHandler:

         return await self.persist_events_and_notify(room_id, [(event, context)])

-    @log_function
     async def backfill(
         self, dest: str, room_id: str, limit: int, extremities: Collection[str]
     ) -> None:
```


```diff
@@ -170,7 +170,7 @@ class InitialSyncHandler:
                 d["inviter"] = event.sender

                 invite_event = await self.store.get_event(event.event_id)
-                d["invite"] = await self._event_serializer.serialize_event(
+                d["invite"] = self._event_serializer.serialize_event(
                     invite_event,
                     time_now,
                     as_client_event=as_client_event,
@@ -222,7 +222,7 @@ class InitialSyncHandler:
                 d["messages"] = {
                     "chunk": (
-                        await self._event_serializer.serialize_events(
+                        self._event_serializer.serialize_events(
                             messages,
                             time_now=time_now,
                             as_client_event=as_client_event,
@@ -232,7 +232,7 @@ class InitialSyncHandler:
                     "end": await end_token.to_string(self.store),
                 }

-                d["state"] = await self._event_serializer.serialize_events(
+                d["state"] = self._event_serializer.serialize_events(
                     current_state.values(),
                     time_now=time_now,
                     as_client_event=as_client_event,
@@ -376,16 +376,14 @@ class InitialSyncHandler:
             "messages": {
                 "chunk": (
                     # Don't bundle aggregations as this is a deprecated API.
-                    await self._event_serializer.serialize_events(messages, time_now)
+                    self._event_serializer.serialize_events(messages, time_now)
                 ),
                 "start": await start_token.to_string(self.store),
                 "end": await end_token.to_string(self.store),
             },
             "state": (
                 # Don't bundle aggregations as this is a deprecated API.
-                await self._event_serializer.serialize_events(
-                    room_state.values(), time_now
-                )
+                self._event_serializer.serialize_events(room_state.values(), time_now)
             ),
             "presence": [],
             "receipts": [],
@@ -404,7 +402,7 @@ class InitialSyncHandler:
         # TODO: These concurrently
         time_now = self.clock.time_msec()
         # Don't bundle aggregations as this is a deprecated API.
-        state = await self._event_serializer.serialize_events(
+        state = self._event_serializer.serialize_events(
            current_state.values(), time_now
        )
@@ -480,7 +478,7 @@ class InitialSyncHandler:
             "messages": {
                 "chunk": (
                     # Don't bundle aggregations as this is a deprecated API.
-                    await self._event_serializer.serialize_events(messages, time_now)
+                    self._event_serializer.serialize_events(messages, time_now)
                 ),
                 "start": await start_token.to_string(self.store),
                 "end": await end_token.to_string(self.store),
```


```diff
@@ -246,7 +246,7 @@ class MessageHandler:
         room_state = room_state_events[membership_event_id]

         now = self.clock.time_msec()
-        events = await self._event_serializer.serialize_events(room_state.values(), now)
+        events = self._event_serializer.serialize_events(room_state.values(), now)
         return events

     async def get_joined_members(self, requester: Requester, room_id: str) -> dict:
```


```diff
@@ -537,14 +537,16 @@ class PaginationHandler:
             state_dict = await self.store.get_events(list(state_ids.values()))
             state = state_dict.values()

+        aggregations = await self.store.get_bundled_aggregations(events, user_id)
+
         time_now = self.clock.time_msec()

         chunk = {
             "chunk": (
-                await self._event_serializer.serialize_events(
+                self._event_serializer.serialize_events(
                     events,
                     time_now,
-                    bundle_aggregations=True,
+                    bundle_aggregations=aggregations,
                     as_client_event=as_client_event,
                 )
             ),
@@ -553,7 +555,7 @@ class PaginationHandler:
         }

         if state:
-            chunk["state"] = await self._event_serializer.serialize_events(
+            chunk["state"] = self._event_serializer.serialize_events(
                 state, time_now, as_client_event=as_client_event
             )
```
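The pagination hunk above replaces a boolean `bundle_aggregations=True` with a precomputed map from event ID to that event's bundled aggregations, fetched once before serialization. A minimal sketch of that shape (the function names and aggregation layout here are illustrative, not Synapse's exact internal API):

```python
from typing import Dict, List


def get_bundled_aggregations(
    event_ids: List[str], reactions: Dict[str, Dict[str, int]]
) -> Dict[str, dict]:
    """Return a map of event ID -> bundled aggregations, omitting events with none."""
    out: Dict[str, dict] = {}
    for event_id in event_ids:
        counts = reactions.get(event_id)
        if counts:
            # Annotations are grouped into a "chunk" of {key, count} entries.
            out[event_id] = {
                "m.annotation": {
                    "chunk": [{"key": k, "count": c} for k, c in counts.items()]
                }
            }
    return out


def serialize_event(event_id: str, aggregations: Dict[str, dict]) -> dict:
    """Attach any bundled aggregations under unsigned["m.relations"]."""
    serialized: dict = {"event_id": event_id, "unsigned": {}}
    bundled = aggregations.get(event_id)
    if bundled:
        serialized["unsigned"]["m.relations"] = bundled
    return serialized
```

Computing the map up front means the serializer itself can stay synchronous, which is what lets the surrounding `await`s be dropped.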


@ -55,7 +55,6 @@ from synapse.api.presence import UserPresenceState
from synapse.appservice import ApplicationService from synapse.appservice import ApplicationService
from synapse.events.presence_router import PresenceRouter from synapse.events.presence_router import PresenceRouter
from synapse.logging.context import run_in_background from synapse.logging.context import run_in_background
from synapse.logging.utils import log_function
from synapse.metrics import LaterGauge from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import run_as_background_process from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.presence import ( from synapse.replication.http.presence import (
@ -1542,7 +1541,6 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
self.clock = hs.get_clock() self.clock = hs.get_clock()
self.store = hs.get_datastore() self.store = hs.get_datastore()
@log_function
async def get_new_events( async def get_new_events(
self, self,
user: UserID, user: UserID,


```diff
@@ -393,7 +393,9 @@ class RoomCreationHandler:
         user_id = requester.user.to_string()

         if not await self.spam_checker.user_may_create_room(user_id):
-            raise SynapseError(403, "You are not permitted to create rooms")
+            raise SynapseError(
+                403, "You are not permitted to create rooms", Codes.FORBIDDEN
+            )

         creation_content: JsonDict = {
             "room_version": new_room_version.identifier,
@@ -685,7 +687,9 @@ class RoomCreationHandler:
                 invite_3pid_list,
             )
         ):
-            raise SynapseError(403, "You are not permitted to create rooms")
+            raise SynapseError(
+                403, "You are not permitted to create rooms", Codes.FORBIDDEN
+            )

         if ratelimit:
             await self.request_ratelimiter.ratelimit(requester)
@@ -1177,6 +1181,22 @@ class RoomContextHandler:
         # `filtered` rather than the event we retrieved from the datastore.
         results["event"] = filtered[0]

+        # Fetch the aggregations.
+        aggregations = await self.store.get_bundled_aggregations(
+            [results["event"]], user.to_string()
+        )
+        aggregations.update(
+            await self.store.get_bundled_aggregations(
+                results["events_before"], user.to_string()
+            )
+        )
+        aggregations.update(
+            await self.store.get_bundled_aggregations(
+                results["events_after"], user.to_string()
+            )
+        )
+        results["aggregations"] = aggregations
+
         if results["events_after"]:
             last_event_id = results["events_after"][-1].event_id
         else:
```
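The `Codes.FORBIDDEN` change above implements the changelog item "Return an `M_FORBIDDEN` error code instead of `M_UNKNOWN` when a spam checker module prevents a user from creating a room". A simplified stand-in (not Synapse's actual `SynapseError` class) showing how an errcode rides along with the HTTP error:

```python
class MatrixError(Exception):
    """Simplified HTTP error carrying a Matrix errcode alongside the status."""

    def __init__(self, code: int, msg: str, errcode: str = "M_UNKNOWN"):
        super().__init__(msg)
        self.code = code
        self.errcode = errcode

    def to_json(self) -> dict:
        # Clients branch on errcode; the human-readable message is informational.
        return {"errcode": self.errcode, "error": str(self)}


err = MatrixError(403, "You are not permitted to create rooms", "M_FORBIDDEN")
```

Without the third argument the errcode falls back to the generic `M_UNKNOWN`, which is exactly what the patch corrects.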


```diff
@@ -153,6 +153,9 @@ class RoomSummaryHandler:
         rooms_result: List[JsonDict] = []
         events_result: List[JsonDict] = []

+        if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE:
+            max_rooms_per_space = MAX_ROOMS_PER_SPACE
+
         while room_queue and len(rooms_result) < MAX_ROOMS:
             queue_entry = room_queue.popleft()
             room_id = queue_entry.room_id
@@ -167,7 +170,7 @@ class RoomSummaryHandler:
             # The client-specified max_rooms_per_space limit doesn't apply to the
             # room_id specified in the request, so we ignore it if this is the
             # first room we are processing.
-            max_children = max_rooms_per_space if processed_rooms else None
+            max_children = max_rooms_per_space if processed_rooms else MAX_ROOMS

             if is_in_room:
                 room_entry = await self._summarize_local_room(
@@ -209,7 +212,7 @@ class RoomSummaryHandler:
                     # Before returning to the client, remove the allowed_room_ids
                     # and allowed_spaces keys.
                     room.pop("allowed_room_ids", None)
-                    room.pop("allowed_spaces", None)
+                    room.pop("allowed_spaces", None)  # historical

                     rooms_result.append(room)
                     events.extend(room_entry.children_state_events)
@@ -395,7 +398,7 @@ class RoomSummaryHandler:
                 None,
                 room_id,
                 suggested_only,
-                # TODO Handle max children.
+                # Do not limit the maximum children.
                 max_children=None,
             )

@@ -525,6 +528,10 @@ class RoomSummaryHandler:
         rooms_result: List[JsonDict] = []
         events_result: List[JsonDict] = []

+        # Set a limit on the number of rooms to return.
+        if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE:
+            max_rooms_per_space = MAX_ROOMS_PER_SPACE
+
         while room_queue and len(rooms_result) < MAX_ROOMS:
             room_id = room_queue.popleft()
             if room_id in processed_rooms:
@@ -583,7 +590,9 @@ class RoomSummaryHandler:
         # Iterate through each child and potentially add it, but not its children,
         # to the response.
-        for child_room in root_room_entry.children_state_events:
+        for child_room in itertools.islice(
+            root_room_entry.children_state_events, MAX_ROOMS_PER_SPACE
+        ):
             room_id = child_room.get("state_key")
             assert isinstance(room_id, str)
             # If the room is unknown, skip it.
@@ -633,8 +642,8 @@ class RoomSummaryHandler:
             suggested_only: True if only suggested children should be returned.
                 Otherwise, all children are returned.
             max_children:
-                The maximum number of children rooms to include. This is capped
-                to a server-set limit.
+                The maximum number of children rooms to include. A value of None
+                means no limit.

         Returns:
             A room entry if the room should be returned. None, otherwise.
@@ -656,8 +665,13 @@ class RoomSummaryHandler:
             # we only care about suggested children
             child_events = filter(_is_suggested_child_event, child_events)

-        if max_children is None or max_children > MAX_ROOMS_PER_SPACE:
-            max_children = MAX_ROOMS_PER_SPACE
+        # TODO max_children is legacy code for the /spaces endpoint.
+        if max_children is not None:
+            child_iter: Iterable[EventBase] = itertools.islice(
+                child_events, max_children
+            )
+        else:
+            child_iter = child_events

         stripped_events: List[JsonDict] = [
             {
@@ -668,7 +682,7 @@ class RoomSummaryHandler:
                 "sender": e.sender,
                 "origin_server_ts": e.origin_server_ts,
             }
-            for e in itertools.islice(child_events, max_children)
+            for e in child_iter
         ]
         return _RoomEntry(room_id, room_entry, stripped_events)

@@ -766,6 +780,7 @@ class RoomSummaryHandler:
         try:
             (
                 room_response,
+                children_state_events,
                 children,
                 inaccessible_children,
             ) = await self._federation_client.get_room_hierarchy(
@@ -790,7 +805,7 @@ class RoomSummaryHandler:
         }

         return (
-            _RoomEntry(room_id, room_response, room_response.pop("children_state", ())),
+            _RoomEntry(room_id, room_response, children_state_events),
             children_by_room_id,
             set(inaccessible_children),
         )
@@ -988,12 +1003,14 @@ class RoomSummaryHandler:
             "canonical_alias": stats["canonical_alias"],
             "num_joined_members": stats["joined_members"],
             "avatar_url": stats["avatar"],
+            # plural join_rules is a documentation error but kept for historical
+            # purposes. Should match /publicRooms.
             "join_rules": stats["join_rules"],
+            "join_rule": stats["join_rules"],
             "world_readable": (
                 stats["history_visibility"] == HistoryVisibility.WORLD_READABLE
             ),
             "guest_can_join": stats["guest_access"] == "can_join",
+            "creation_ts": create_event.origin_server_ts,
             "room_type": create_event.content.get(EventContentFields.ROOM_TYPE),
         }
```
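The room-summary hunks above cap child rooms with `itertools.islice`, which works lazily: when a limit is set, at most that many items are ever pulled from the underlying iterable, and `None` means unbounded. A small sketch of the same pattern (helper name is illustrative):

```python
import itertools
from typing import Iterable, List, Optional


def cap_children(children: Iterable[str], max_children: Optional[int]) -> List[str]:
    """Return at most max_children items; None means no limit."""
    if max_children is not None:
        # islice never materializes the whole input, so an oversized or
        # even infinite child list is still safe to cap.
        it: Iterable[str] = itertools.islice(children, max_children)
    else:
        it = children
    return list(it)
```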


```diff
@@ -420,10 +420,10 @@ class SearchHandler:
             time_now = self.clock.time_msec()

             for context in contexts.values():
-                context["events_before"] = await self._event_serializer.serialize_events(
+                context["events_before"] = self._event_serializer.serialize_events(
                     context["events_before"], time_now
                 )
-                context["events_after"] = await self._event_serializer.serialize_events(
+                context["events_after"] = self._event_serializer.serialize_events(
                     context["events_after"], time_now
                 )

@@ -441,9 +441,7 @@ class SearchHandler:
             results.append(
                 {
                     "rank": rank_map[e.event_id],
-                    "result": (
-                        await self._event_serializer.serialize_event(e, time_now)
-                    ),
+                    "result": self._event_serializer.serialize_event(e, time_now),
                     "context": contexts.get(e.event_id, {}),
                 }
             )
@@ -457,7 +455,7 @@ class SearchHandler:
         if state_results:
             s = {}
             for room_id, state_events in state_results.items():
-                s[room_id] = await self._event_serializer.serialize_events(
+                s[room_id] = self._event_serializer.serialize_events(
                     state_events, time_now
                 )
```


```diff
@@ -126,45 +126,45 @@ class SsoIdentityProvider(Protocol):
         raise NotImplementedError()


-@attr.s
+@attr.s(auto_attribs=True)
 class UserAttributes:
     # the localpart of the mxid that the mapper has assigned to the user.
     # if `None`, the mapper has not picked a userid, and the user should be prompted to
     # enter one.
-    localpart = attr.ib(type=Optional[str])
-    display_name = attr.ib(type=Optional[str], default=None)
-    emails = attr.ib(type=Collection[str], default=attr.Factory(list))
+    localpart: Optional[str]
+    display_name: Optional[str] = None
+    emails: Collection[str] = attr.Factory(list)


-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class UsernameMappingSession:
     """Data we track about SSO sessions"""

     # A unique identifier for this SSO provider, e.g. "oidc" or "saml".
-    auth_provider_id = attr.ib(type=str)
+    auth_provider_id: str
     # user ID on the IdP server
-    remote_user_id = attr.ib(type=str)
+    remote_user_id: str
     # attributes returned by the ID mapper
-    display_name = attr.ib(type=Optional[str])
-    emails = attr.ib(type=Collection[str])
+    display_name: Optional[str]
+    emails: Collection[str]
     # An optional dictionary of extra attributes to be provided to the client in the
     # login response.
-    extra_login_attributes = attr.ib(type=Optional[JsonDict])
+    extra_login_attributes: Optional[JsonDict]
     # where to redirect the client back to
-    client_redirect_url = attr.ib(type=str)
+    client_redirect_url: str
     # expiry time for the session, in milliseconds
-    expiry_time_ms = attr.ib(type=int)
+    expiry_time_ms: int

     # choices made by the user
-    chosen_localpart = attr.ib(type=Optional[str], default=None)
-    use_display_name = attr.ib(type=bool, default=True)
-    emails_to_use = attr.ib(type=Collection[str], default=())
-    terms_accepted_version = attr.ib(type=Optional[str], default=None)
+    chosen_localpart: Optional[str] = None
+    use_display_name: bool = True
+    emails_to_use: Collection[str] = ()
+    terms_accepted_version: Optional[str] = None

     # the HTTP cookie used to track the mapping session id
```
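The `attr.s(auto_attribs=True)` conversions in this and the following files are behavior-preserving: annotated class attributes replace explicit `attr.ib(type=...)` calls, and plain assignments become defaults. A minimal sketch of the equivalence (class names here are illustrative; this assumes the third-party `attrs` package is installed):

```python
from typing import Optional

import attr


# Old style: explicit attr.ib() with a `type` argument.
@attr.s(slots=True)
class OldStyle:
    localpart = attr.ib(type=Optional[str])
    display_name = attr.ib(type=Optional[str], default=None)


# New style: auto_attribs collects annotated attributes in declaration order.
@attr.s(slots=True, auto_attribs=True)
class NewStyle:
    localpart: Optional[str]
    display_name: Optional[str] = None
```

Both classes generate the same `__init__` signature, so call sites need no changes.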


```diff
@@ -60,10 +60,6 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)

-# Debug logger for https://github.com/matrix-org/synapse/issues/4422
-issue4422_logger = logging.getLogger("synapse.handler.sync.4422_debug")
-
 # Counts the number of times we returned a non-empty sync. `type` is one of
 # "initial_sync", "full_state_sync" or "incremental_sync", `lazy_loaded` is
 # "true" or "false" depending on if the request asked for lazy loaded members or
@@ -102,6 +98,9 @@ class TimelineBatch:
     prev_batch: StreamToken
     events: List[EventBase]
     limited: bool
+    # A mapping of event ID to the bundled aggregations for the above events.
+    # This is only calculated if limited is true.
+    bundled_aggregations: Optional[Dict[str, Dict[str, Any]]] = None

     def __bool__(self) -> bool:
         """Make the result appear empty if there are no updates. This is used
@@ -634,10 +633,19 @@ class SyncHandler:

         prev_batch_token = now_token.copy_and_replace("room_key", room_key)

+        # Don't bother to bundle aggregations if the timeline is unlimited,
+        # as clients will have all the necessary information.
+        bundled_aggregations = None
+        if limited or newly_joined_room:
+            bundled_aggregations = await self.store.get_bundled_aggregations(
+                recents, sync_config.user.to_string()
+            )
+
         return TimelineBatch(
             events=recents,
             prev_batch=prev_batch_token,
             limited=limited or newly_joined_room,
+            bundled_aggregations=bundled_aggregations,
         )

     async def get_state_after_event(
@@ -1161,13 +1169,8 @@ class SyncHandler:

         num_events = 0

-        # debug for https://github.com/matrix-org/synapse/issues/4422
+        # debug for https://github.com/matrix-org/synapse/issues/9424
         for joined_room in sync_result_builder.joined:
-            room_id = joined_room.room_id
-            if room_id in newly_joined_rooms:
-                issue4422_logger.debug(
-                    "Sync result for newly joined room %s: %r", room_id, joined_room
-                )
             num_events += len(joined_room.timeline.events)

         log_kv(
@@ -1740,18 +1743,6 @@ class SyncHandler:
                     old_mem_ev_id, allow_none=True
                 )

-            # debug for #4422
-            if has_join:
-                prev_membership = None
-                if old_mem_ev:
-                    prev_membership = old_mem_ev.membership
-                issue4422_logger.debug(
-                    "Previous membership for room %s with join: %s (event %s)",
-                    room_id,
-                    prev_membership,
-                    old_mem_ev_id,
-                )
-
             if not old_mem_ev or old_mem_ev.membership != Membership.JOIN:
                 newly_joined_rooms.append(room_id)

@@ -1893,13 +1884,6 @@ class SyncHandler:
                     upto_token=since_token,
                 )

-                if newly_joined:
-                    # debugging for https://github.com/matrix-org/synapse/issues/4422
-                    issue4422_logger.debug(
-                        "RoomSyncResultBuilder events for newly joined room %s: %r",
-                        room_id,
-                        entry.events,
-                    )
                 room_entries.append(entry)

         return _RoomChanges(
@@ -2077,14 +2061,6 @@ class SyncHandler:
             # `_load_filtered_recents` can't find any events the user should see
             # (e.g. due to having ignored the sender of the last 50 events).

-            if newly_joined:
-                # debug for https://github.com/matrix-org/synapse/issues/4422
-                issue4422_logger.debug(
-                    "Timeline events after filtering in newly-joined room %s: %r",
-                    room_id,
-                    batch,
-                )
-
             # When we join the room (or the client requests full_state), we should
             # send down any existing tags. Usually the user won't have tags in a
             # newly joined room, unless either a) they've joined before or b) the
```


```diff
@@ -32,9 +32,9 @@ class ProxyConnectError(ConnectError):
     pass


-@attr.s
+@attr.s(auto_attribs=True)
 class ProxyCredentials:
-    username_password = attr.ib(type=bytes)
+    username_password: bytes

     def as_proxy_authorization_value(self) -> bytes:
         """
```


```diff
@@ -123,37 +123,37 @@ class ByteParser(ByteWriteable, Generic[T], abc.ABC):
     pass


-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class MatrixFederationRequest:
-    method = attr.ib(type=str)
+    method: str
     """HTTP method
     """

-    path = attr.ib(type=str)
+    path: str
     """HTTP path
     """

-    destination = attr.ib(type=str)
+    destination: str
     """The remote server to send the HTTP request to.
     """

-    json = attr.ib(default=None, type=Optional[JsonDict])
+    json: Optional[JsonDict] = None
     """JSON to send in the body.
     """

-    json_callback = attr.ib(default=None, type=Optional[Callable[[], JsonDict]])
+    json_callback: Optional[Callable[[], JsonDict]] = None
     """A callback to generate the JSON.
     """

-    query = attr.ib(default=None, type=Optional[dict])
+    query: Optional[dict] = None
     """Query arguments.
     """

-    txn_id = attr.ib(default=None, type=Optional[str])
+    txn_id: Optional[str] = None
     """Unique ID for this request (for logging)
     """

-    uri = attr.ib(init=False, type=bytes)
+    uri: bytes = attr.ib(init=False)
     """The URI of this request
     """
```


```diff
@@ -534,9 +534,9 @@ class XForwardedForRequest(SynapseRequest):


 @implementer(IAddress)
-@attr.s(frozen=True, slots=True)
+@attr.s(frozen=True, slots=True, auto_attribs=True)
 class _XForwardedForAddress:
-    host = attr.ib(type=str)
+    host: str


 class SynapseSite(Site):
```


```diff
@@ -39,7 +39,7 @@ from twisted.python.failure import Failure
 logger = logging.getLogger(__name__)


-@attr.s
+@attr.s(slots=True, auto_attribs=True)
 @implementer(IPushProducer)
 class LogProducer:
     """
@@ -54,10 +54,10 @@ class LogProducer:

     # This is essentially ITCPTransport, but that is missing certain fields
     # (connected and registerProducer) which are part of the implementation.
-    transport = attr.ib(type=Connection)
-    _format = attr.ib(type=Callable[[logging.LogRecord], str])
-    _buffer = attr.ib(type=deque)
-    _paused = attr.ib(default=False, type=bool, init=False)
+    transport: Connection
+    _format: Callable[[logging.LogRecord], str]
+    _buffer: Deque[logging.LogRecord]
+    _paused: bool = attr.ib(default=False, init=False)

     def pauseProducing(self):
         self._paused = True
```


```diff
@@ -193,7 +193,7 @@ class ContextResourceUsage:
         return res


-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class ContextRequest:
     """
     A bundle of attributes from the SynapseRequest object.
@@ -205,15 +205,15 @@ class ContextRequest:
         their children.
     """

-    request_id = attr.ib(type=str)
-    ip_address = attr.ib(type=str)
-    site_tag = attr.ib(type=str)
-    requester = attr.ib(type=Optional[str])
-    authenticated_entity = attr.ib(type=Optional[str])
-    method = attr.ib(type=str)
-    url = attr.ib(type=str)
-    protocol = attr.ib(type=str)
-    user_agent = attr.ib(type=str)
+    request_id: str
+    ip_address: str
+    site_tag: str
+    requester: Optional[str]
+    authenticated_entity: Optional[str]
+    method: str
+    url: str
+    protocol: str
+    user_agent: str


 LoggingContextOrSentinel = Union["LoggingContext", "_Sentinel"]
```


```diff
@@ -247,11 +247,11 @@ try:
         class BaseReporter:  # type: ignore[no-redef]
             pass

-    @attr.s(slots=True, frozen=True)
+    @attr.s(slots=True, frozen=True, auto_attribs=True)
     class _WrappedRustReporter(BaseReporter):
         """Wrap the reporter to ensure `report_span` never throws."""

-        _reporter = attr.ib(type=Reporter, default=attr.Factory(Reporter))
+        _reporter: Reporter = attr.Factory(Reporter)

         def set_process(self, *args, **kwargs):
             return self._reporter.set_process(*args, **kwargs)
```


```diff
@@ -1,76 +0,0 @@
-# Copyright 2014-2016 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-from functools import wraps
-from inspect import getcallargs
-from typing import Callable, TypeVar, cast
-
-_TIME_FUNC_ID = 0
-
-
-def _log_debug_as_f(f, msg, msg_args):
-    name = f.__module__
-    logger = logging.getLogger(name)
-
-    if logger.isEnabledFor(logging.DEBUG):
-        lineno = f.__code__.co_firstlineno
-        pathname = f.__code__.co_filename
-
-        record = logger.makeRecord(
-            name=name,
-            level=logging.DEBUG,
-            fn=pathname,
-            lno=lineno,
-            msg=msg,
-            args=msg_args,
-            exc_info=None,
-        )
-
-        logger.handle(record)
-
-
-F = TypeVar("F", bound=Callable)
-
-
-def log_function(f: F) -> F:
-    """Function decorator that logs every call to that function."""
-    func_name = f.__name__
-
-    @wraps(f)
-    def wrapped(*args, **kwargs):
-        name = f.__module__
-        logger = logging.getLogger(name)
-        level = logging.DEBUG
-
-        if logger.isEnabledFor(level):
-            bound_args = getcallargs(f, *args, **kwargs)
-
-            def format(value):
-                r = str(value)
-                if len(r) > 50:
-                    r = r[:50] + "..."
-                return r
-
-            func_args = ["%s=%s" % (k, format(v)) for k, v in bound_args.items()]
-
-            msg_args = {"func_name": func_name, "args": ", ".join(func_args)}
-
-            _log_debug_as_f(f, "Invoked '%(func_name)s' with args: %(args)s", msg_args)
-
-        return f(*args, **kwargs)
-
-    wrapped.__name__ = func_name
-    return cast(F, wrapped)
```
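The file deleted above defined the `@log_function` decorator whose uses are removed throughout this release. A condensed, self-contained version of the same idea, useful for seeing what it did (it is transparent to the wrapped function, only emitting a DEBUG record when that level is enabled):

```python
import logging
from functools import wraps
from inspect import getcallargs


def log_function(f):
    """Condensed version of the removed decorator: log each call at DEBUG."""

    @wraps(f)
    def wrapped(*args, **kwargs):
        logger = logging.getLogger(f.__module__)
        if logger.isEnabledFor(logging.DEBUG):
            bound = getcallargs(f, *args, **kwargs)
            # Truncate long argument reprs to 50 characters, as the original did.
            fmt = {
                k: s[:50] + "..." if len(s := str(v)) > 50 else s
                for k, v in bound.items()
            }
            logger.debug("Invoked '%s' with args: %s", f.__name__, fmt)
        return f(*args, **kwargs)

    return wrapped


@log_function
def add(a, b):
    return a + b
```

Because logging happened unconditionally on every call, Synapse replaced these decorators with tracing/`log_kv` instrumentation where needed rather than keeping the blanket overhead.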


```diff
@@ -12,16 +12,12 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import functools
-import gc
 import itertools
 import logging
 import os
 import platform
 import threading
-import time
 from typing import (
-    Any,
     Callable,
     Dict,
     Generic,
@@ -34,35 +30,31 @@ from typing import (
     Type,
     TypeVar,
     Union,
-    cast,
 )

 import attr
 from prometheus_client import CollectorRegistry, Counter, Gauge, Histogram, Metric
 from prometheus_client.core import (
     REGISTRY,
-    CounterMetricFamily,
     GaugeHistogramMetricFamily,
     GaugeMetricFamily,
 )

-from twisted.internet import reactor
-from twisted.internet.base import ReactorBase
 from twisted.python.threadpool import ThreadPool

-import synapse
+import synapse.metrics._reactor_metrics
 from synapse.metrics._exposition import (
     MetricsResource,
     generate_latest,
     start_http_server,
 )
+from synapse.metrics._gc import MIN_TIME_BETWEEN_GCS, install_gc_manager
 from synapse.util.versionstring import get_version_string

 logger = logging.getLogger(__name__)

 METRICS_PREFIX = "/_synapse/metrics"

-running_on_pypy = platform.python_implementation() == "PyPy"
 all_gauges: "Dict[str, Union[LaterGauge, InFlightGauge]]" = {}

 HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
@@ -76,19 +68,17 @@ class RegistryProxy:
                 yield metric


-@attr.s(slots=True, hash=True)
+@attr.s(slots=True, hash=True, auto_attribs=True)
 class LaterGauge:

-    name = attr.ib(type=str)
-    desc = attr.ib(type=str)
-    labels = attr.ib(hash=False, type=Optional[Iterable[str]])
+    name: str
+    desc: str
+    labels: Optional[Iterable[str]] = attr.ib(hash=False)
     # callback: should either return a value (if there are no labels for this metric),
     # or dict mapping from a label tuple to a value
-    caller = attr.ib(
-        type=Callable[
-            [], Union[Mapping[Tuple[str, ...], Union[int, float]], Union[int, float]]
-        ]
-    )
+    caller: Callable[
+        [], Union[Mapping[Tuple[str, ...], Union[int, float]], Union[int, float]]
+    ]

     def collect(self) -> Iterable[Metric]:
@@ -157,7 +147,9 @@ class InFlightGauge(Generic[MetricsEntry]):
         # Create a class which have the sub_metrics values as attributes, which
         # default to 0 on initialization. Used to pass to registered callbacks.
         self._metrics_class: Type[MetricsEntry] = attr.make_class(
-            "_MetricsEntry", attrs={x: attr.ib(0) for x in sub_metrics}, slots=True
+            "_MetricsEntry",
+            attrs={x: attr.ib(default=0) for x in sub_metrics},
+            slots=True,
         )

         # Counts number of in flight blocks for a given set of label values
@@ -369,136 +361,6 @@ class CPUMetrics:

 REGISTRY.register(CPUMetrics())

-#
-# Python GC metrics
-#
-
-gc_unreachable = Gauge("python_gc_unreachable_total", "Unreachable GC objects", ["gen"])
-gc_time = Histogram(
-    "python_gc_time",
-    "Time taken to GC (sec)",
-    ["gen"],
-    buckets=[
-        0.0025,
-        0.005,
-        0.01,
-        0.025,
-        0.05,
-        0.10,
-        0.25,
-        0.50,
-        1.00,
-        2.50,
-        5.00,
-        7.50,
-        15.00,
-        30.00,
-        45.00,
-        60.00,
-    ],
-)
-
-
-class GCCounts:
-    def collect(self) -> Iterable[Metric]:
-        cm = GaugeMetricFamily("python_gc_counts", "GC object counts", labels=["gen"])
-        for n, m in enumerate(gc.get_count()):
-            cm.add_metric([str(n)], m)
-
-        yield cm
-
-
-if not running_on_pypy:
-    REGISTRY.register(GCCounts())
-
-
-#
-# PyPy GC / memory metrics
-#
-
-
-class PyPyGCStats:
-    def collect(self) -> Iterable[Metric]:
-
-        # @stats is a pretty-printer object with __str__() returning a nice table,
-        # plus some fields that contain data from that table.
-        # unfortunately, fields are pretty-printed themselves (i. e. '4.5MB').
-        stats = gc.get_stats(memory_pressure=False)  # type: ignore
-        # @s contains same fields as @stats, but as actual integers.
-        s = stats._s  # type: ignore
-
-        # also note that field naming is completely braindead
-        # and only vaguely correlates with the pretty-printed table.
-        # >>>> gc.get_stats(False)
-        # Total memory consumed:
-        #     GC used:            8.7MB (peak: 39.0MB)   # s.total_gc_memory, s.peak_memory
-        #        in arenas:            3.0MB             # s.total_arena_memory
-        #        rawmalloced:          1.7MB             # s.total_rawmalloced_memory
-        #        nursery:              4.0MB             # s.nursery_size
-        #     raw assembler used:     31.0kB             # s.jit_backend_used
-        #     -----------------------------
-        #     Total:                   8.8MB             # stats.memory_used_sum
-        #
-        # Total memory allocated:
-        #     GC allocated:           38.7MB (peak: 41.1MB)  # s.total_allocated_memory, s.peak_allocated_memory
-        #        in arenas:           30.9MB             # s.peak_arena_memory
-        #        rawmalloced:          4.1MB             # s.peak_rawmalloced_memory
-        #        nursery:              4.0MB             # s.nursery_size
-        #     raw assembler allocated: 1.0MB             # s.jit_backend_allocated
-        #     -----------------------------
-        #     Total:                  39.7MB             # stats.memory_allocated_sum
-        #
-        # Total time spent in GC:      0.073             # s.total_gc_time
-
-        pypy_gc_time = CounterMetricFamily(
-            "pypy_gc_time_seconds_total",
-            "Total time spent in PyPy GC",
-            labels=[],
-        )
-        pypy_gc_time.add_metric([], s.total_gc_time / 1000)
-        yield pypy_gc_time
-
-        pypy_mem = GaugeMetricFamily(
-            "pypy_memory_bytes",
-            "Memory tracked by PyPy allocator",
-            labels=["state", "class", "kind"],
-        )
-        # memory used by JIT assembler
-        pypy_mem.add_metric(["used", "", "jit"], s.jit_backend_used)
-        pypy_mem.add_metric(["allocated", "", "jit"], s.jit_backend_allocated)
```
# memory used by GCed objects
pypy_mem.add_metric(["used", "", "arenas"], s.total_arena_memory)
pypy_mem.add_metric(["allocated", "", "arenas"], s.peak_arena_memory)
pypy_mem.add_metric(["used", "", "rawmalloced"], s.total_rawmalloced_memory)
pypy_mem.add_metric(["allocated", "", "rawmalloced"], s.peak_rawmalloced_memory)
pypy_mem.add_metric(["used", "", "nursery"], s.nursery_size)
pypy_mem.add_metric(["allocated", "", "nursery"], s.nursery_size)
# totals
pypy_mem.add_metric(["used", "totals", "gc"], s.total_gc_memory)
pypy_mem.add_metric(["allocated", "totals", "gc"], s.total_allocated_memory)
pypy_mem.add_metric(["used", "totals", "gc_peak"], s.peak_memory)
pypy_mem.add_metric(["allocated", "totals", "gc_peak"], s.peak_allocated_memory)
yield pypy_mem
if running_on_pypy:
REGISTRY.register(PyPyGCStats())
#
# Twisted reactor metrics
#
tick_time = Histogram(
"python_twisted_reactor_tick_time",
"Tick time of the Twisted reactor (sec)",
buckets=[0.001, 0.002, 0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1, 2, 5],
)
pending_calls_metric = Histogram(
"python_twisted_reactor_pending_calls",
"Pending calls",
buckets=[1, 2, 5, 10, 25, 50, 100, 250, 500, 1000],
)
 #
 # Federation Metrics
@@ -551,8 +413,6 @@ build_info.labels(
     " ".join([platform.system(), platform.release()]),
 ).set(1)
-last_ticked = time.time()

 # 3PID send info
 threepid_send_requests = Histogram(
     "synapse_threepid_send_requests_with_tries",
@@ -600,116 +460,6 @@ def register_threadpool(name: str, threadpool: ThreadPool) -> None:
 )
class ReactorLastSeenMetric:
def collect(self) -> Iterable[Metric]:
cm = GaugeMetricFamily(
"python_twisted_reactor_last_seen",
"Seconds since the Twisted reactor was last seen",
)
cm.add_metric([], time.time() - last_ticked)
yield cm
REGISTRY.register(ReactorLastSeenMetric())
# The minimum time in seconds between GCs for each generation, regardless of the current GC
# thresholds and counts.
MIN_TIME_BETWEEN_GCS = (1.0, 10.0, 30.0)
# The time (in seconds since the epoch) of the last time we did a GC for each generation.
_last_gc = [0.0, 0.0, 0.0]
F = TypeVar("F", bound=Callable[..., Any])
def runUntilCurrentTimer(reactor: ReactorBase, func: F) -> F:
@functools.wraps(func)
def f(*args: Any, **kwargs: Any) -> Any:
now = reactor.seconds()
num_pending = 0
# _newTimedCalls is one long list of *all* pending calls. Below loop
# is based off of impl of reactor.runUntilCurrent
for delayed_call in reactor._newTimedCalls:
if delayed_call.time > now:
break
if delayed_call.delayed_time > 0:
continue
num_pending += 1
num_pending += len(reactor.threadCallQueue)
start = time.time()
ret = func(*args, **kwargs)
end = time.time()
# record the amount of wallclock time spent running pending calls.
# This is a proxy for the actual amount of time between reactor polls,
# since about 25% of time is actually spent running things triggered by
# I/O events, but that is harder to capture without rewriting half the
# reactor.
tick_time.observe(end - start)
pending_calls_metric.observe(num_pending)
# Update the time we last ticked, for the metric to test whether
# Synapse's reactor has frozen
global last_ticked
last_ticked = end
if running_on_pypy:
return ret
# Check if we need to do a manual GC (since its been disabled), and do
# one if necessary. Note we go in reverse order as e.g. a gen 1 GC may
# promote an object into gen 2, and we don't want to handle the same
# object multiple times.
threshold = gc.get_threshold()
counts = gc.get_count()
for i in (2, 1, 0):
# We check if we need to do one based on a straightforward
# comparison between the threshold and count. We also do an extra
# check to make sure that we don't a GC too often.
if threshold[i] < counts[i] and MIN_TIME_BETWEEN_GCS[i] < end - _last_gc[i]:
if i == 0:
logger.debug("Collecting gc %d", i)
else:
logger.info("Collecting gc %d", i)
start = time.time()
unreachable = gc.collect(i)
end = time.time()
_last_gc[i] = end
gc_time.labels(i).observe(end - start)
gc_unreachable.labels(i).set(unreachable)
return ret
return cast(F, f)
try:
# Ensure the reactor has all the attributes we expect
reactor.seconds # type: ignore
reactor.runUntilCurrent # type: ignore
reactor._newTimedCalls # type: ignore
reactor.threadCallQueue # type: ignore
# runUntilCurrent is called when we have pending calls. It is called once
# per iteratation after fd polling.
reactor.runUntilCurrent = runUntilCurrentTimer(reactor, reactor.runUntilCurrent) # type: ignore
# We manually run the GC each reactor tick so that we can get some metrics
# about time spent doing GC,
if not running_on_pypy:
gc.disable()
except AttributeError:
pass
 __all__ = [
     "MetricsResource",
     "generate_latest",
@@ -717,4 +467,6 @@ __all__ = [
     "LaterGauge",
     "InFlightGauge",
     "GaugeBucketCollector",
+    "MIN_TIME_BETWEEN_GCS",
+    "install_gc_manager",
 ]

synapse/metrics/_gc.py (new file)

@@ -0,0 +1,203 @@
# Copyright 2015-2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import logging
import platform
import time
from typing import Iterable
from prometheus_client.core import (
REGISTRY,
CounterMetricFamily,
Gauge,
GaugeMetricFamily,
Histogram,
Metric,
)
from twisted.internet import task
"""Prometheus metrics for garbage collection"""
logger = logging.getLogger(__name__)
# The minimum time in seconds between GCs for each generation, regardless of the current GC
# thresholds and counts.
MIN_TIME_BETWEEN_GCS = (1.0, 10.0, 30.0)
running_on_pypy = platform.python_implementation() == "PyPy"
#
# Python GC metrics
#
gc_unreachable = Gauge("python_gc_unreachable_total", "Unreachable GC objects", ["gen"])
gc_time = Histogram(
"python_gc_time",
"Time taken to GC (sec)",
["gen"],
buckets=[
0.0025,
0.005,
0.01,
0.025,
0.05,
0.10,
0.25,
0.50,
1.00,
2.50,
5.00,
7.50,
15.00,
30.00,
45.00,
60.00,
],
)
class GCCounts:
def collect(self) -> Iterable[Metric]:
cm = GaugeMetricFamily("python_gc_counts", "GC object counts", labels=["gen"])
for n, m in enumerate(gc.get_count()):
cm.add_metric([str(n)], m)
yield cm
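`GCCounts` follows the pull-style collector pattern: the registry calls `collect()` at scrape time, so gauge values are computed on demand rather than pushed. A library-free sketch of that pattern (the `MiniRegistry` class and sample shape are illustrative, not the `prometheus_client` API):

```python
import gc
from typing import Callable, Dict, Iterable, List, Tuple

# (metric name, labels, value)
Sample = Tuple[str, Dict[str, str], float]

class MiniRegistry:
    """Pull-style registry: collectors are invoked at scrape time."""

    def __init__(self) -> None:
        self._collectors: List[Callable[[], Iterable[Sample]]] = []

    def register(self, collect: Callable[[], Iterable[Sample]]) -> None:
        self._collectors.append(collect)

    def scrape(self) -> List[Sample]:
        # Fresh values are computed each time a scrape happens.
        return [s for c in self._collectors for s in c()]

def gc_counts() -> Iterable[Sample]:
    # Like GCCounts.collect(): one labelled sample per GC generation.
    for gen, count in enumerate(gc.get_count()):
        yield ("python_gc_counts", {"gen": str(gen)}, float(count))

registry = MiniRegistry()
registry.register(gc_counts)
samples = registry.scrape()
print(samples)
```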
def install_gc_manager() -> None:
"""Disable automatic GC, and replace it with a task that runs every 100ms
This means that (a) we can limit how often GC runs; (b) we can get some metrics
about GC activity.
It does nothing on PyPy.
"""
if running_on_pypy:
return
REGISTRY.register(GCCounts())
gc.disable()
# The time (in seconds since the epoch) of the last time we did a GC for each generation.
_last_gc = [0.0, 0.0, 0.0]
def _maybe_gc() -> None:
    # Check if we need to do a manual GC (since it's been disabled), and do
# one if necessary. Note we go in reverse order as e.g. a gen 1 GC may
# promote an object into gen 2, and we don't want to handle the same
# object multiple times.
threshold = gc.get_threshold()
counts = gc.get_count()
end = time.time()
for i in (2, 1, 0):
# We check if we need to do one based on a straightforward
# comparison between the threshold and count. We also do an extra
    # check to make sure that we don't do a GC too often.
if threshold[i] < counts[i] and MIN_TIME_BETWEEN_GCS[i] < end - _last_gc[i]:
if i == 0:
logger.debug("Collecting gc %d", i)
else:
logger.info("Collecting gc %d", i)
start = time.time()
unreachable = gc.collect(i)
end = time.time()
_last_gc[i] = end
gc_time.labels(i).observe(end - start)
gc_unreachable.labels(i).set(unreachable)
gc_task = task.LoopingCall(_maybe_gc)
gc_task.start(0.1)
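`_maybe_gc` only collects a generation when the allocation count exceeds the GC threshold *and* a per-generation minimum interval has elapsed, scanning the oldest generation first. The gating decision can be sketched standalone with deterministic inputs (this mirrors, but is not, the Synapse code path):

```python
from typing import List, Sequence

# Minimum seconds between collections of generations 0, 1 and 2
# (same values as MIN_TIME_BETWEEN_GCS above).
MIN_TIME_BETWEEN_GCS = (1.0, 10.0, 30.0)

def gens_to_collect(
    counts: Sequence[int],     # as returned by gc.get_count()
    threshold: Sequence[int],  # as returned by gc.get_threshold()
    last_gc: Sequence[float],  # epoch time of last collection per generation
    now: float,
) -> List[int]:
    """Which generations should be collected, highest first.

    Highest first because e.g. a gen-1 collection can promote objects into
    gen 2, and we don't want to handle the same object twice.
    """
    return [
        gen
        for gen in (2, 1, 0)
        if threshold[gen] < counts[gen]
        and MIN_TIME_BETWEEN_GCS[gen] < now - last_gc[gen]
    ]

# Gens 0 and 1 are over threshold and past their intervals; gen 2 is not.
print(gens_to_collect((800, 20, 5), (700, 10, 10), [0.0, 0.0, 0.0], 100.0))
```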
#
# PyPy GC / memory metrics
#
class PyPyGCStats:
def collect(self) -> Iterable[Metric]:
# @stats is a pretty-printer object with __str__() returning a nice table,
# plus some fields that contain data from that table.
# unfortunately, fields are pretty-printed themselves (i. e. '4.5MB').
stats = gc.get_stats(memory_pressure=False) # type: ignore
# @s contains same fields as @stats, but as actual integers.
s = stats._s # type: ignore
# also note that field naming is completely braindead
# and only vaguely correlates with the pretty-printed table.
# >>>> gc.get_stats(False)
# Total memory consumed:
# GC used: 8.7MB (peak: 39.0MB) # s.total_gc_memory, s.peak_memory
# in arenas: 3.0MB # s.total_arena_memory
# rawmalloced: 1.7MB # s.total_rawmalloced_memory
# nursery: 4.0MB # s.nursery_size
# raw assembler used: 31.0kB # s.jit_backend_used
# -----------------------------
# Total: 8.8MB # stats.memory_used_sum
#
# Total memory allocated:
# GC allocated: 38.7MB (peak: 41.1MB) # s.total_allocated_memory, s.peak_allocated_memory
# in arenas: 30.9MB # s.peak_arena_memory
# rawmalloced: 4.1MB # s.peak_rawmalloced_memory
# nursery: 4.0MB # s.nursery_size
# raw assembler allocated: 1.0MB # s.jit_backend_allocated
# -----------------------------
# Total: 39.7MB # stats.memory_allocated_sum
#
# Total time spent in GC: 0.073 # s.total_gc_time
pypy_gc_time = CounterMetricFamily(
"pypy_gc_time_seconds_total",
"Total time spent in PyPy GC",
labels=[],
)
pypy_gc_time.add_metric([], s.total_gc_time / 1000)
yield pypy_gc_time
pypy_mem = GaugeMetricFamily(
"pypy_memory_bytes",
"Memory tracked by PyPy allocator",
labels=["state", "class", "kind"],
)
# memory used by JIT assembler
pypy_mem.add_metric(["used", "", "jit"], s.jit_backend_used)
pypy_mem.add_metric(["allocated", "", "jit"], s.jit_backend_allocated)
# memory used by GCed objects
pypy_mem.add_metric(["used", "", "arenas"], s.total_arena_memory)
pypy_mem.add_metric(["allocated", "", "arenas"], s.peak_arena_memory)
pypy_mem.add_metric(["used", "", "rawmalloced"], s.total_rawmalloced_memory)
pypy_mem.add_metric(["allocated", "", "rawmalloced"], s.peak_rawmalloced_memory)
pypy_mem.add_metric(["used", "", "nursery"], s.nursery_size)
pypy_mem.add_metric(["allocated", "", "nursery"], s.nursery_size)
# totals
pypy_mem.add_metric(["used", "totals", "gc"], s.total_gc_memory)
pypy_mem.add_metric(["allocated", "totals", "gc"], s.total_allocated_memory)
pypy_mem.add_metric(["used", "totals", "gc_peak"], s.peak_memory)
pypy_mem.add_metric(["allocated", "totals", "gc_peak"], s.peak_allocated_memory)
yield pypy_mem
if running_on_pypy:
REGISTRY.register(PyPyGCStats())

@@ -0,0 +1,83 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import select
import time
from typing import Any, Iterable, List, Tuple
from prometheus_client import Histogram, Metric
from prometheus_client.core import REGISTRY, GaugeMetricFamily
from twisted.internet import reactor
#
# Twisted reactor metrics
#
tick_time = Histogram(
"python_twisted_reactor_tick_time",
"Tick time of the Twisted reactor (sec)",
buckets=[0.001, 0.002, 0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1, 2, 5],
)
class EpollWrapper:
"""a wrapper for an epoll object which records the time between polls"""
def __init__(self, poller: "select.epoll"): # type: ignore[name-defined]
self.last_polled = time.time()
self._poller = poller
def poll(self, *args, **kwargs) -> List[Tuple[int, int]]: # type: ignore[no-untyped-def]
# record the time since poll() was last called. This gives a good proxy for
# how long it takes to run everything in the reactor - ie, how long anything
# waiting for the next tick will have to wait.
tick_time.observe(time.time() - self.last_polled)
ret = self._poller.poll(*args, **kwargs)
self.last_polled = time.time()
return ret
def __getattr__(self, item: str) -> Any:
return getattr(self._poller, item)
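`EpollWrapper` intercepts only `poll()` and forwards every other attribute through `__getattr__`, which Python calls only when normal attribute lookup fails. The delegation pattern in isolation (a hypothetical timing wrapper, not the Synapse class):

```python
import time
from typing import Any

class TimedWrapper:
    """Wraps an object, timing calls to one method and delegating the rest."""

    def __init__(self, inner: Any) -> None:
        self._inner = inner
        self.last_elapsed = 0.0

    def work(self, *args: Any, **kwargs: Any) -> Any:
        start = time.time()
        try:
            return self._inner.work(*args, **kwargs)
        finally:
            self.last_elapsed = time.time() - start

    def __getattr__(self, item: str) -> Any:
        # Only reached for attributes *not* defined on the wrapper itself,
        # so work/last_elapsed are handled above and everything else is
        # transparently delegated.
        return getattr(self._inner, item)

class Inner:
    name = "inner"

    def work(self, x: int) -> int:
        return x * 2

w = TimedWrapper(Inner())
print(w.work(21))  # 42, and w.last_elapsed is updated
print(w.name)      # "inner", delegated via __getattr__
```

This is why the wrapper can stand in for the reactor's real `epoll` object without implementing its full interface.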
class ReactorLastSeenMetric:
def __init__(self, epoll_wrapper: EpollWrapper):
self._epoll_wrapper = epoll_wrapper
def collect(self) -> Iterable[Metric]:
cm = GaugeMetricFamily(
"python_twisted_reactor_last_seen",
"Seconds since the Twisted reactor was last seen",
)
cm.add_metric([], time.time() - self._epoll_wrapper.last_polled)
yield cm
try:
# if the reactor has a `_poller` attribute, which is an `epoll` object
# (ie, it's an EPollReactor), we wrap the `epoll` with a thing that will
# measure the time between ticks
from select import epoll # type: ignore[attr-defined]
poller = reactor._poller # type: ignore[attr-defined]
except (AttributeError, ImportError):
pass
else:
if isinstance(poller, epoll):
poller = EpollWrapper(poller)
reactor._poller = poller # type: ignore[attr-defined]
REGISTRY.register(ReactorLastSeenMetric(poller))


@@ -40,7 +40,6 @@
 from synapse.handlers.presence import format_user_presence_state
 from synapse.logging import issue9533_logger
 from synapse.logging.context import PreserveLoggingContext
 from synapse.logging.opentracing import log_kv, start_active_span
-from synapse.logging.utils import log_function
 from synapse.metrics import LaterGauge
 from synapse.streams.config import PaginationConfig
 from synapse.types import (
@ -193,15 +192,15 @@ class EventStreamResult:
return bool(self.events) return bool(self.events)
@attr.s(slots=True, frozen=True) @attr.s(slots=True, frozen=True, auto_attribs=True)
class _PendingRoomEventEntry: class _PendingRoomEventEntry:
event_pos = attr.ib(type=PersistedEventPosition) event_pos: PersistedEventPosition
extra_users = attr.ib(type=Collection[UserID]) extra_users: Collection[UserID]
room_id = attr.ib(type=str) room_id: str
type = attr.ib(type=str) type: str
state_key = attr.ib(type=Optional[str]) state_key: Optional[str]
membership = attr.ib(type=Optional[str]) membership: Optional[str]
class Notifier: class Notifier:
@@ -686,7 +685,6 @@ class Notifier:
         else:
             return False

-    @log_function
     def remove_expired_streams(self) -> None:
         time_now_ms = self.clock.time_msec()
         expired_streams = []
@@ -700,7 +698,6 @@ class Notifier:
         for expired_stream in expired_streams:
             expired_stream.remove(self)

-    @log_function
     def _register_with_keys(self, user_stream: _NotifierUserStream):
         self.user_to_user_stream[user_stream.user_id] = user_stream


@@ -23,25 +23,25 @@
 if TYPE_CHECKING:
     from synapse.server import HomeServer

-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class PusherConfig:
     """Parameters necessary to configure a pusher."""

-    id = attr.ib(type=Optional[str])
-    user_name = attr.ib(type=str)
-    access_token = attr.ib(type=Optional[int])
-    profile_tag = attr.ib(type=str)
-    kind = attr.ib(type=str)
-    app_id = attr.ib(type=str)
-    app_display_name = attr.ib(type=str)
-    device_display_name = attr.ib(type=str)
-    pushkey = attr.ib(type=str)
-    ts = attr.ib(type=int)
-    lang = attr.ib(type=Optional[str])
-    data = attr.ib(type=Optional[JsonDict])
-    last_stream_ordering = attr.ib(type=int)
-    last_success = attr.ib(type=Optional[int])
-    failing_since = attr.ib(type=Optional[int])
+    id: Optional[str]
+    user_name: str
+    access_token: Optional[int]
+    profile_tag: str
+    kind: str
+    app_id: str
+    app_display_name: str
+    device_display_name: str
+    pushkey: str
+    ts: int
+    lang: Optional[str]
+    data: Optional[JsonDict]
+    last_stream_ordering: int
+    last_success: Optional[int]
+    failing_since: Optional[int]

     def as_dict(self) -> Dict[str, Any]:
         """Information that can be retrieved about a pusher after creation."""
@@ -57,12 +57,12 @@ class PusherConfig:
     }

-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class ThrottleParams:
     """Parameters for controlling the rate of sending pushes via email."""

-    last_sent_ts = attr.ib(type=int)
-    throttle_ms = attr.ib(type=int)
+    last_sent_ts: int
+    throttle_ms: int

 class Pusher(metaclass=abc.ABCMeta):


@@ -298,7 +298,7 @@ RulesByUser = Dict[str, List[Rule]]
 StateGroup = Union[object, int]

-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class RulesForRoomData:
     """The data stored in the cache by `RulesForRoom`.
@@ -307,29 +307,29 @@ class RulesForRoomData:
     """

     # event_id -> (user_id, state)
-    member_map = attr.ib(type=MemberMap, factory=dict)
+    member_map: MemberMap = attr.Factory(dict)
     # user_id -> rules
-    rules_by_user = attr.ib(type=RulesByUser, factory=dict)
+    rules_by_user: RulesByUser = attr.Factory(dict)

     # The last state group we updated the caches for. If the state_group of
     # a new event comes along, we know that we can just return the cached
     # result.
     # On invalidation of the rules themselves (if the user changes them),
     # we invalidate everything and set state_group to `object()`
-    state_group = attr.ib(type=StateGroup, factory=object)
+    state_group: StateGroup = attr.Factory(object)

     # A sequence number to keep track of when we're allowed to update the
     # cache. We bump the sequence number when we invalidate the cache. If
     # the sequence number changes while we're calculating stuff we should
     # not update the cache with it.
-    sequence = attr.ib(type=int, default=0)
+    sequence: int = 0

     # A cache of user_ids that we *know* aren't interesting, e.g. user_ids
     # owned by AS's, or remote users, etc. (I.e. users we will never need to
     # calculate push for)
     # These never need to be invalidated as we will never set up push for
     # them.
-    uninteresting_user_set = attr.ib(type=Set[str], factory=set)
+    uninteresting_user_set: Set[str] = attr.Factory(set)

 class RulesForRoom:
@@ -553,7 +553,7 @@ class RulesForRoom:
         self.data.state_group = state_group

-@attr.attrs(slots=True, frozen=True)
+@attr.attrs(slots=True, frozen=True, auto_attribs=True)
 class _Invalidation:
     # _Invalidation is passed as an `on_invalidate` callback to bulk_get_push_rules,
     # which means that it is stored on the bulk_get_push_rules cache entry. In order
@@ -564,8 +564,8 @@ class _Invalidation:
     # attrs provides suitable __hash__ and __eq__ methods, provided we remember to
     # set `frozen=True`.
-    cache = attr.ib(type=LruCache)
-    room_id = attr.ib(type=str)
+    cache: LruCache
+    room_id: str

     def __call__(self) -> None:
         rules_data = self.cache.get(self.room_id, None, update_metrics=False)
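The conversion above swaps `attr.ib(type=..., factory=...)` for `attr.Factory(...)` as the annotated default. The factory form is what makes mutable defaults safe: each instance gets a fresh container instead of all instances sharing one. A minimal sketch, assuming `attrs` is installed (field names are illustrative):

```python
import attr
from typing import Dict, Set

@attr.s(slots=True, auto_attribs=True)
class CacheData:
    member_map: Dict[str, str] = attr.Factory(dict)
    seen: Set[str] = attr.Factory(set)
    sequence: int = 0  # immutable defaults can be plain values

a = CacheData()
b = CacheData()
a.member_map["k"] = "v"
print(b.member_map)  # each instance got its own dict, so b is unaffected
```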


@@ -178,7 +178,7 @@ class Mailer:
         await self.send_email(
             email_address,
             self.email_subjects.email_validation
-            % {"server_name": self.hs.config.server.server_name},
+            % {"server_name": self.hs.config.server.server_name, "app": self.app_name},
             template_vars,
         )
@@ -209,7 +209,7 @@ class Mailer:
         await self.send_email(
             email_address,
             self.email_subjects.email_validation
-            % {"server_name": self.hs.config.server.server_name},
+            % {"server_name": self.hs.config.server.server_name, "app": self.app_name},
             template_vars,
         )
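With `%`-formatting against a mapping, every placeholder referenced by the template must be present in the dict, or Python raises `KeyError`; the fix above supplies an `"app"` value so subject templates that use `%(app)s` no longer fail. A small sketch (the template text is illustrative):

```python
# A subject template referencing both placeholders, as a customised
# email_validation subject might.
template = "[%(app)s] Validate your email for %(server_name)s"

# With only server_name supplied, %(app)s raises KeyError.
try:
    template % {"server_name": "example.com"}
except KeyError as e:
    print("missing placeholder:", e)

# Supplying both keys succeeds.
subject = template % {"server_name": "example.com", "app": "Matrix"}
print(subject)
```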


@@ -50,12 +50,12 @@ data part are:
 """

-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class EventsStreamRow:
     """A parsed row from the events replication stream"""

-    type = attr.ib()  # str: the TypeId of one of the *EventsStreamRows
-    data = attr.ib()  # BaseEventsStreamRow
+    type: str  # the TypeId of one of the *EventsStreamRows
+    data: "BaseEventsStreamRow"

 class BaseEventsStreamRow:
@@ -79,28 +79,28 @@ class BaseEventsStreamRow:
         return cls(*data)

-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class EventsStreamEventRow(BaseEventsStreamRow):
     TypeId = "ev"

-    event_id = attr.ib(type=str)
-    room_id = attr.ib(type=str)
-    type = attr.ib(type=str)
-    state_key = attr.ib(type=Optional[str])
-    redacts = attr.ib(type=Optional[str])
-    relates_to = attr.ib(type=Optional[str])
-    membership = attr.ib(type=Optional[str])
-    rejected = attr.ib(type=bool)
+    event_id: str
+    room_id: str
+    type: str
+    state_key: Optional[str]
+    redacts: Optional[str]
+    relates_to: Optional[str]
+    membership: Optional[str]
+    rejected: bool

-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class EventsStreamCurrentStateRow(BaseEventsStreamRow):
     TypeId = "state"

-    room_id = attr.ib()  # str
-    type = attr.ib()  # str
-    state_key = attr.ib()  # str
-    event_id = attr.ib()  # str, optional
+    room_id: str
+    type: str
+    state_key: str
+    event_id: Optional[str]

 _EventRows: Tuple[Type[BaseEventsStreamRow], ...] = (
View File

@@ -123,34 +123,25 @@ class BackgroundUpdateStartJobRestServlet(RestServlet):
         job_name = body["job_name"]

         if job_name == "populate_stats_process_rooms":
-            jobs = [
-                {
-                    "update_name": "populate_stats_process_rooms",
-                    "progress_json": "{}",
-                },
-            ]
+            jobs = [("populate_stats_process_rooms", "{}", "")]
         elif job_name == "regenerate_directory":
             jobs = [
-                {
-                    "update_name": "populate_user_directory_createtables",
-                    "progress_json": "{}",
-                    "depends_on": "",
-                },
-                {
-                    "update_name": "populate_user_directory_process_rooms",
-                    "progress_json": "{}",
-                    "depends_on": "populate_user_directory_createtables",
-                },
-                {
-                    "update_name": "populate_user_directory_process_users",
-                    "progress_json": "{}",
-                    "depends_on": "populate_user_directory_process_rooms",
-                },
-                {
-                    "update_name": "populate_user_directory_cleanup",
-                    "progress_json": "{}",
-                    "depends_on": "populate_user_directory_process_users",
-                },
+                ("populate_user_directory_createtables", "{}", ""),
+                (
+                    "populate_user_directory_process_rooms",
+                    "{}",
+                    "populate_user_directory_createtables",
+                ),
+                (
+                    "populate_user_directory_process_users",
+                    "{}",
+                    "populate_user_directory_process_rooms",
+                ),
+                (
+                    "populate_user_directory_cleanup",
+                    "{}",
+                    "populate_user_directory_process_users",
+                ),
             ]
         else:
             raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid job_name")
@@ -158,6 +149,7 @@ class BackgroundUpdateStartJobRestServlet(RestServlet):
         try:
             await self._store.db_pool.simple_insert_many(
                 table="background_updates",
+                keys=("update_name", "progress_json", "depends_on"),
                 values=jobs,
                 desc=f"admin_api_run_{job_name}",
             )
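After this change, `simple_insert_many` takes the column names once via `keys=` and each row as a plain tuple in `values=`, rather than repeating the column names in a dict per row. The underlying SQL shape can be sketched with the stdlib `sqlite3` module (the table schema here is a minimal stand-in):

```python
import sqlite3

keys = ("update_name", "progress_json", "depends_on")
values = [
    ("populate_user_directory_createtables", "{}", ""),
    (
        "populate_user_directory_process_rooms",
        "{}",
        "populate_user_directory_createtables",
    ),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE background_updates"
    " (update_name TEXT, progress_json TEXT, depends_on TEXT)"
)

# One INSERT statement: column list from `keys`, one parameter tuple per row.
sql = "INSERT INTO background_updates (%s) VALUES (%s)" % (
    ", ".join(keys),
    ", ".join("?" for _ in keys),
)
conn.executemany(sql, values)

rows = conn.execute(
    "SELECT update_name, depends_on FROM background_updates"
).fetchall()
print(rows)
```

Stating the columns once means every row is validated against the same shape, and the statement can be prepared a single time for the whole batch.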


@@ -111,25 +111,37 @@ class DestinationsRestServlet(RestServlet):
     ) -> Tuple[int, JsonDict]:
         await assert_requester_is_admin(self._auth, request)

+        if not await self._store.is_destination_known(destination):
+            raise NotFoundError("Unknown destination")
+
         destination_retry_timings = await self._store.get_destination_retry_timings(
             destination
         )
-        if not destination_retry_timings:
-            raise NotFoundError("Unknown destination")

         last_successful_stream_ordering = (
             await self._store.get_destination_last_successful_stream_ordering(
                 destination
             )
         )

-        response = {
+        response: JsonDict = {
             "destination": destination,
-            "failure_ts": destination_retry_timings.failure_ts,
-            "retry_last_ts": destination_retry_timings.retry_last_ts,
-            "retry_interval": destination_retry_timings.retry_interval,
             "last_successful_stream_ordering": last_successful_stream_ordering,
         }
+
+        if destination_retry_timings:
+            response = {
+                **response,
+                "failure_ts": destination_retry_timings.failure_ts,
+                "retry_last_ts": destination_retry_timings.retry_last_ts,
+                "retry_interval": destination_retry_timings.retry_interval,
+            }
+        else:
+            response = {
+                **response,
+                "failure_ts": None,
+                "retry_last_ts": 0,
+                "retry_interval": 0,
+            }

         return HTTPStatus.OK, response
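The handler now builds a base `response` dict and merges in either the real retry timings or neutral defaults using `**` unpacking, where later keys win. The merge logic in isolation (the function and its inputs are illustrative):

```python
from typing import Any, Dict, Optional

def build_response(
    destination: str, timings: Optional[Dict[str, Any]]
) -> Dict[str, Any]:
    response: Dict[str, Any] = {"destination": destination}
    if timings:
        # Real retry timings override nothing here but would shadow any
        # duplicate keys in the base dict.
        response = {**response, **timings}
    else:
        # Defaults for a known destination we have never failed to reach.
        response = {
            **response,
            "failure_ts": None,
            "retry_last_ts": 0,
            "retry_interval": 0,
        }
    return response

print(build_response("example.com", None))
```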


@@ -466,7 +466,7 @@ class UserMediaRestServlet(RestServlet):
         )
         deleted_media, total = await self.media_repository.delete_local_media_ids(
-            ([row["media_id"] for row in media])
+            [row["media_id"] for row in media]
         )
         return HTTPStatus.OK, {"deleted_media": deleted_media, "total": total}


@@ -424,7 +424,7 @@ class RoomStateRestServlet(RestServlet):
         event_ids = await self.store.get_current_state_ids(room_id)
         events = await self.store.get_events(event_ids.values())
         now = self.clock.time_msec()
-        room_state = await self._event_serializer.serialize_events(events.values(), now)
+        room_state = self._event_serializer.serialize_events(events.values(), now)
         ret = {"state": room_state}

         return HTTPStatus.OK, ret
@@ -744,22 +744,17 @@ class RoomEventContextServlet(RestServlet):
         )

         time_now = self.clock.time_msec()
-        results["events_before"] = await self._event_serializer.serialize_events(
-            results["events_before"],
-            time_now,
-            bundle_aggregations=True,
-        )
-        results["event"] = await self._event_serializer.serialize_event(
-            results["event"],
-            time_now,
-            bundle_aggregations=True,
-        )
-        results["events_after"] = await self._event_serializer.serialize_events(
-            results["events_after"],
-            time_now,
-            bundle_aggregations=True,
-        )
-        results["state"] = await self._event_serializer.serialize_events(
+        aggregations = results.pop("aggregations", None)
+        results["events_before"] = self._event_serializer.serialize_events(
+            results["events_before"], time_now, bundle_aggregations=aggregations
+        )
+        results["event"] = self._event_serializer.serialize_event(
+            results["event"], time_now, bundle_aggregations=aggregations
+        )
+        results["events_after"] = self._event_serializer.serialize_events(
+            results["events_after"], time_now, bundle_aggregations=aggregations
+        )
+        results["state"] = self._event_serializer.serialize_events(
             results["state"], time_now
         )


@@ -173,12 +173,11 @@ class UserRestServletV2(RestServlet):
         if not self.hs.is_mine(target_user):
             raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only look up local users")

-        ret = await self.admin_handler.get_user(target_user)
-
-        if not ret:
+        user_info_dict = await self.admin_handler.get_user(target_user)
+        if not user_info_dict:
             raise NotFoundError("User not found")
-
-        return HTTPStatus.OK, ret
+        return HTTPStatus.OK, user_info_dict

     async def on_PUT(
         self, request: SynapseRequest, user_id: str
@@ -399,10 +398,10 @@ class UserRestServletV2(RestServlet):
             target_user, requester, body["avatar_url"], True
         )

-        user = await self.admin_handler.get_user(target_user)
-        assert user is not None
-
-        return 201, user
+        user_info_dict = await self.admin_handler.get_user(target_user)
+        assert user_info_dict is not None
+        return HTTPStatus.CREATED, user_info_dict

 class UserRegisterServlet(RestServlet):

View File

@@ -91,7 +91,7 @@ class EventRestServlet(RestServlet):
         time_now = self.clock.time_msec()
         if event:
-            result = await self._event_serializer.serialize_event(event, time_now)
+            result = self._event_serializer.serialize_event(event, time_now)
             return 200, result
         else:
             return 404, "Event not found."

View File

@@ -72,7 +72,7 @@ class NotificationsServlet(RestServlet):
                     "actions": pa.actions,
                     "ts": pa.received_ts,
                     "event": (
-                        await self._event_serializer.serialize_event(
+                        self._event_serializer.serialize_event(
                             notif_events[pa.event_id],
                             self.clock.time_msec(),
                             event_format=format_event_for_client_v2_without_room_id,

View File

@@ -19,28 +19,20 @@ any time to reflect changes in the MSC.
 """
 import logging
-from typing import TYPE_CHECKING, Awaitable, Optional, Tuple
+from typing import TYPE_CHECKING, Optional, Tuple
-from synapse.api.constants import EventTypes, RelationTypes
-from synapse.api.errors import ShadowBanError, SynapseError
+from synapse.api.constants import RelationTypes
+from synapse.api.errors import SynapseError
 from synapse.http.server import HttpServer
-from synapse.http.servlet import (
-    RestServlet,
-    parse_integer,
-    parse_json_object_from_request,
-    parse_string,
-)
+from synapse.http.servlet import RestServlet, parse_integer, parse_string
 from synapse.http.site import SynapseRequest
-from synapse.rest.client.transactions import HttpTransactionCache
+from synapse.rest.client._base import client_patterns
 from synapse.storage.relations import (
     AggregationPaginationToken,
     PaginationChunk,
     RelationPaginationToken,
 )
 from synapse.types import JsonDict
-from synapse.util.stringutils import random_string
-from ._base import client_patterns
 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -48,112 +40,6 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)
-class RelationSendServlet(RestServlet):
-    """Helper API for sending events that have relation data.
-    Example API shape to send a 👍 reaction to a room:
-        POST /rooms/!foo/send_relation/$bar/m.annotation/m.reaction?key=%F0%9F%91%8D
-        {}
-        {
-            "event_id": "$foobar"
-        }
-    """
-    PATTERN = (
-        "/rooms/(?P<room_id>[^/]*)/send_relation"
-        "/(?P<parent_id>[^/]*)/(?P<relation_type>[^/]*)/(?P<event_type>[^/]*)"
-    )
-    def __init__(self, hs: "HomeServer"):
-        super().__init__()
-        self.auth = hs.get_auth()
-        self.event_creation_handler = hs.get_event_creation_handler()
-        self.txns = HttpTransactionCache(hs)
-    def register(self, http_server: HttpServer) -> None:
-        http_server.register_paths(
-            "POST",
-            client_patterns(self.PATTERN + "$", releases=()),
-            self.on_PUT_or_POST,
-            self.__class__.__name__,
-        )
-        http_server.register_paths(
-            "PUT",
-            client_patterns(self.PATTERN + "/(?P<txn_id>[^/]*)$", releases=()),
-            self.on_PUT,
-            self.__class__.__name__,
-        )
-    def on_PUT(
-        self,
-        request: SynapseRequest,
-        room_id: str,
-        parent_id: str,
-        relation_type: str,
-        event_type: str,
-        txn_id: Optional[str] = None,
-    ) -> Awaitable[Tuple[int, JsonDict]]:
-        return self.txns.fetch_or_execute_request(
-            request,
-            self.on_PUT_or_POST,
-            request,
-            room_id,
-            parent_id,
-            relation_type,
-            event_type,
-            txn_id,
-        )
-    async def on_PUT_or_POST(
-        self,
-        request: SynapseRequest,
-        room_id: str,
-        parent_id: str,
-        relation_type: str,
-        event_type: str,
-        txn_id: Optional[str] = None,
-    ) -> Tuple[int, JsonDict]:
-        requester = await self.auth.get_user_by_req(request, allow_guest=True)
-        if event_type == EventTypes.Member:
-            # Add relations to a membership is meaningless, so we just deny it
-            # at the CS API rather than trying to handle it correctly.
-            raise SynapseError(400, "Cannot send member events with relations")
-        content = parse_json_object_from_request(request)
-        aggregation_key = parse_string(request, "key", encoding="utf-8")
-        content["m.relates_to"] = {
-            "event_id": parent_id,
-            "rel_type": relation_type,
-        }
-        if aggregation_key is not None:
-            content["m.relates_to"]["key"] = aggregation_key
-        event_dict = {
-            "type": event_type,
-            "content": content,
-            "room_id": room_id,
-            "sender": requester.user.to_string(),
-        }
-        try:
-            (
-                event,
-                _,
-            ) = await self.event_creation_handler.create_and_send_nonmember_event(
-                requester, event_dict=event_dict, txn_id=txn_id
-            )
-            event_id = event.event_id
-        except ShadowBanError:
-            event_id = "$" + random_string(43)
-        return 200, {"event_id": event_id}
 class RelationPaginationServlet(RestServlet):
     """API to paginate relations on an event by topological ordering, optionally
     filtered by relation type and event type.
@@ -227,13 +113,16 @@ class RelationPaginationServlet(RestServlet):
         now = self.clock.time_msec()
         # Do not bundle aggregations when retrieving the original event because
         # we want the content before relations are applied to it.
-        original_event = await self._event_serializer.serialize_event(
-            event, now, bundle_aggregations=False
+        original_event = self._event_serializer.serialize_event(
+            event, now, bundle_aggregations=None
         )
         # The relations returned for the requested event do include their
         # bundled aggregations.
-        serialized_events = await self._event_serializer.serialize_events(
-            events, now, bundle_aggregations=True
+        aggregations = await self.store.get_bundled_aggregations(
+            events, requester.user.to_string()
+        )
+        serialized_events = self._event_serializer.serialize_events(
+            events, now, bundle_aggregations=aggregations
         )
         return_value = pagination_chunk.to_dict()
@@ -422,7 +311,7 @@ class RelationAggregationGroupPaginationServlet(RestServlet):
         )
         now = self.clock.time_msec()
-        serialized_events = await self._event_serializer.serialize_events(events, now)
+        serialized_events = self._event_serializer.serialize_events(events, now)
         return_value = result.to_dict()
         return_value["chunk"] = serialized_events
@@ -431,7 +320,6 @@ class RelationAggregationGroupPaginationServlet(RestServlet):
 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
-    RelationSendServlet(hs).register(http_server)
     RelationPaginationServlet(hs).register(http_server)
     RelationAggregationPaginationServlet(hs).register(http_server)
     RelationAggregationGroupPaginationServlet(hs).register(http_server)
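The removed `RelationSendServlet` built the relation directly into the event content; clients are expected to send relation events through the regular send-event endpoint using the same `m.relates_to` shape. A standalone sketch of that content construction, extracted from the deleted handler above (function name and values are illustrative):

```python
from typing import Any, Dict, Optional

def build_relation_content(
    content: Dict[str, Any],
    parent_id: str,
    relation_type: str,
    aggregation_key: Optional[str] = None,
) -> Dict[str, Any]:
    """Attach an m.relates_to block to event content, as the removed servlet did."""
    content["m.relates_to"] = {
        "event_id": parent_id,
        "rel_type": relation_type,
    }
    # Annotations (e.g. reactions) additionally carry an aggregation key.
    if aggregation_key is not None:
        content["m.relates_to"]["key"] = aggregation_key
    return content

reaction = build_relation_content({}, "$bar", "m.annotation", "👍")
```
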

View File

@@ -642,6 +642,7 @@ class RoomEventServlet(RestServlet):
     def __init__(self, hs: "HomeServer"):
         super().__init__()
         self.clock = hs.get_clock()
+        self._store = hs.get_datastore()
         self.event_handler = hs.get_event_handler()
         self._event_serializer = hs.get_event_client_serializer()
         self.auth = hs.get_auth()
@@ -660,10 +661,15 @@ class RoomEventServlet(RestServlet):
             # https://matrix.org/docs/spec/client_server/r0.5.0#get-matrix-client-r0-rooms-roomid-event-eventid
             raise SynapseError(404, "Event not found.", errcode=Codes.NOT_FOUND)
-        time_now = self.clock.time_msec()
         if event:
-            event_dict = await self._event_serializer.serialize_event(
-                event, time_now, bundle_aggregations=True
+            # Ensure there are bundled aggregations available.
+            aggregations = await self._store.get_bundled_aggregations(
+                [event], requester.user.to_string()
+            )
+            time_now = self.clock.time_msec()
+            event_dict = self._event_serializer.serialize_event(
+                event, time_now, bundle_aggregations=aggregations
             )
             return 200, event_dict
@@ -708,16 +714,17 @@ class RoomEventContextServlet(RestServlet):
             raise SynapseError(404, "Event not found.", errcode=Codes.NOT_FOUND)
         time_now = self.clock.time_msec()
-        results["events_before"] = await self._event_serializer.serialize_events(
-            results["events_before"], time_now, bundle_aggregations=True
+        aggregations = results.pop("aggregations", None)
+        results["events_before"] = self._event_serializer.serialize_events(
+            results["events_before"], time_now, bundle_aggregations=aggregations
         )
-        results["event"] = await self._event_serializer.serialize_event(
-            results["event"], time_now, bundle_aggregations=True
+        results["event"] = self._event_serializer.serialize_event(
+            results["event"], time_now, bundle_aggregations=aggregations
         )
-        results["events_after"] = await self._event_serializer.serialize_events(
-            results["events_after"], time_now, bundle_aggregations=True
+        results["events_after"] = self._event_serializer.serialize_events(
+            results["events_after"], time_now, bundle_aggregations=aggregations
         )
-        results["state"] = await self._event_serializer.serialize_events(
+        results["state"] = self._event_serializer.serialize_events(
             results["state"], time_now
         )

View File

@@ -17,7 +17,6 @@ from collections import defaultdict
 from typing import (
     TYPE_CHECKING,
     Any,
-    Awaitable,
     Callable,
     Dict,
     Iterable,
@@ -395,7 +394,7 @@ class SyncRestServlet(RestServlet):
         """
         invited = {}
         for room in rooms:
-            invite = await self._event_serializer.serialize_event(
+            invite = self._event_serializer.serialize_event(
                 room.invite,
                 time_now,
                 token_id=token_id,
@@ -432,7 +431,7 @@ class SyncRestServlet(RestServlet):
         """
         knocked = {}
         for room in rooms:
-            knock = await self._event_serializer.serialize_event(
+            knock = self._event_serializer.serialize_event(
                 room.knock,
                 time_now,
                 token_id=token_id,
@@ -525,21 +524,14 @@ class SyncRestServlet(RestServlet):
             The room, encoded in our response format
         """
-        def serialize(events: Iterable[EventBase]) -> Awaitable[List[JsonDict]]:
+        def serialize(
+            events: Iterable[EventBase],
+            aggregations: Optional[Dict[str, Dict[str, Any]]] = None,
+        ) -> List[JsonDict]:
             return self._event_serializer.serialize_events(
                 events,
                 time_now=time_now,
-                # Don't bother to bundle aggregations if the timeline is unlimited,
-                # as clients will have all the necessary information.
-                # bundle_aggregations=room.timeline.limited,
-                #
-                # richvdh 2021-12-15: disable this temporarily as it has too high an
-                # overhead for initialsyncs. We need to figure out a way that the
-                # bundling can be done *before* the events are stored in the
-                # SyncResponseCache so that this part can be synchronous.
-                #
-                # Ensure to re-enable the test at tests/rest/client/test_relations.py::RelationsTestCase.test_bundled_aggregations.
-                bundle_aggregations=False,
+                bundle_aggregations=aggregations,
                 token_id=token_id,
                 event_format=event_formatter,
                 only_event_fields=only_fields,
@@ -561,8 +553,10 @@ class SyncRestServlet(RestServlet):
                 event.room_id,
             )
-        serialized_state = await serialize(state_events)
-        serialized_timeline = await serialize(timeline_events)
+        serialized_state = serialize(state_events)
+        serialized_timeline = serialize(
+            timeline_events, room.timeline.bundled_aggregations
+        )
         account_data = room.account_data

View File

@@ -343,7 +343,7 @@ class SpamMediaException(NotFoundError):
     """
-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class ReadableFileWrapper:
     """Wrapper that allows reading a file in chunks, yielding to the reactor,
     and writing to a callback.
@@ -354,8 +354,8 @@ class ReadableFileWrapper:
     CHUNK_SIZE = 2 ** 14
-    clock = attr.ib(type=Clock)
-    path = attr.ib(type=str)
+    clock: Clock
+    path: str
     async def write_chunks_to(self, callback: Callable[[bytes], None]) -> None:
         """Reads the file in chunks and calls the callback with each chunk."""
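Several files in this release migrate attrs classes from explicit `attr.ib(type=...)` declarations to `auto_attribs=True`, where plain type annotations define the fields. A minimal sketch of the two equivalent styles (class and field names here are illustrative, not from Synapse; requires the third-party `attrs` package):

```python
import attr

# Old style: fields declared with attr.ib(type=...).
@attr.s(slots=True)
class OldStyle:
    clock = attr.ib(type=float)
    path = attr.ib(type=str)

# New style: auto_attribs=True turns annotated class attributes into fields.
@attr.s(slots=True, auto_attribs=True)
class NewStyle:
    clock: float
    path: str

old = OldStyle(clock=1.0, path="/tmp/x")
new = NewStyle(clock=1.0, path="/tmp/x")
# Both generate the same __init__, __repr__, __eq__, and slots behaviour.
assert (old.clock, old.path) == (new.clock, new.path)
```
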

View File

@@ -33,6 +33,8 @@ logger = logging.getLogger(__name__)
 class OEmbedResult:
     # The Open Graph result (converted from the oEmbed result).
     open_graph_result: JsonDict
+    # The author_name of the oEmbed result
+    author_name: Optional[str]
     # Number of milliseconds to cache the content, according to the oEmbed response.
     #
     # This will be None if no cache-age is provided in the oEmbed response (or
@@ -154,11 +156,12 @@ class OEmbedProvider:
                 "og:url": url,
             }
-            # Use either title or author's name as the title.
-            title = oembed.get("title") or oembed.get("author_name")
+            title = oembed.get("title")
             if title:
                 open_graph_response["og:title"] = title
+            author_name = oembed.get("author_name")
             # Use the provider name and as the site.
             provider_name = oembed.get("provider_name")
             if provider_name:
@@ -193,9 +196,10 @@ class OEmbedProvider:
             # Trap any exception and let the code follow as usual.
             logger.warning("Error parsing oEmbed metadata from %s: %r", url, e)
             open_graph_response = {}
+            author_name = None
             cache_age = None
-        return OEmbedResult(open_graph_response, cache_age)
+        return OEmbedResult(open_graph_response, author_name, cache_age)
 def _fetch_urls(tree: "etree.Element", tag_name: str) -> List[str]:

View File

@@ -262,6 +262,7 @@ class PreviewUrlResource(DirectServeJsonResource):
         # The number of milliseconds that the response should be considered valid.
         expiration_ms = media_info.expires
+        author_name: Optional[str] = None
         if _is_media(media_info.media_type):
             file_id = media_info.filesystem_id
@@ -294,17 +295,25 @@ class PreviewUrlResource(DirectServeJsonResource):
             # Check if this HTML document points to oEmbed information and
             # defer to that.
             oembed_url = self._oembed.autodiscover_from_html(tree)
-            og = {}
+            og_from_oembed: JsonDict = {}
             if oembed_url:
                 oembed_info = await self._download_url(oembed_url, user)
-                og, expiration_ms = await self._handle_oembed_response(
+                (
+                    og_from_oembed,
+                    author_name,
+                    expiration_ms,
+                ) = await self._handle_oembed_response(
                     url, oembed_info, expiration_ms
                 )
-            # If there was no oEmbed URL (or oEmbed parsing failed), attempt
-            # to generate the Open Graph information from the HTML.
-            if not oembed_url or not og:
-                og = parse_html_to_open_graph(tree, media_info.uri)
+            # Parse Open Graph information from the HTML in case the oEmbed
+            # response failed or is incomplete.
+            og_from_html = parse_html_to_open_graph(tree, media_info.uri)
+            # Compile the Open Graph response by using the scraped
+            # information from the HTML and overlaying any information
+            # from the oEmbed response.
+            og = {**og_from_html, **og_from_oembed}
             await self._precache_image_url(user, media_info, og)
         else:
@@ -312,7 +321,7 @@ class PreviewUrlResource(DirectServeJsonResource):
         elif oembed_url:
             # Handle the oEmbed information.
-            og, expiration_ms = await self._handle_oembed_response(
+            og, author_name, expiration_ms = await self._handle_oembed_response(
                 url, media_info, expiration_ms
             )
             await self._precache_image_url(user, media_info, og)
@@ -321,6 +330,11 @@ class PreviewUrlResource(DirectServeJsonResource):
             logger.warning("Failed to find any OG data in %s", url)
             og = {}
+        # If we don't have a title but we have author_name, copy it as
+        # title
+        if not og.get("og:title") and author_name:
+            og["og:title"] = author_name
         # filter out any stupidly long values
         keys_to_remove = []
         for k, v in og.items():
@@ -484,7 +498,7 @@ class PreviewUrlResource(DirectServeJsonResource):
     async def _handle_oembed_response(
         self, url: str, media_info: MediaInfo, expiration_ms: int
-    ) -> Tuple[JsonDict, int]:
+    ) -> Tuple[JsonDict, Optional[str], int]:
         """
         Parse the downloaded oEmbed info.
@@ -497,11 +511,12 @@ class PreviewUrlResource(DirectServeJsonResource):
         Returns:
             A tuple of:
                 The Open Graph dictionary, if the oEmbed info can be parsed.
+                The author name if it could be retrieved from oEmbed.
                 The (possibly updated) length of time, in milliseconds, the media is valid for.
         """
         # If JSON was not returned, there's nothing to do.
         if not _is_json(media_info.media_type):
-            return {}, expiration_ms
+            return {}, None, expiration_ms
         with open(media_info.filename, "rb") as file:
             body = file.read()
@@ -513,7 +528,7 @@ class PreviewUrlResource(DirectServeJsonResource):
         if open_graph_result and oembed_response.cache_age is not None:
             expiration_ms = oembed_response.cache_age
-        return open_graph_result, expiration_ms
+        return open_graph_result, oembed_response.author_name, expiration_ms
     def _start_expire_url_cache_data(self) -> Deferred:
         return run_as_background_process(
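The URL-preview change above composes the final Open Graph response by overlaying oEmbed-derived data on top of HTML-scraped data, so oEmbed values win on shared keys, and only falls back to `author_name` when no title survived the merge. The merge semantics are plain dict unpacking (the values below are illustrative):

```python
# HTML-scraped Open Graph data (the fallback source).
og_from_html = {"og:title": "Scraped title", "og:description": "From HTML"}
# oEmbed-derived data (takes precedence on shared keys).
og_from_oembed = {"og:title": "oEmbed title"}

# In {**a, **b}, keys from b overwrite keys from a.
og = {**og_from_html, **og_from_oembed}

# The author_name fallback only fires when no title survived the merge.
author_name = "Some Author"
if not og.get("og:title") and author_name:
    og["og:title"] = author_name
```
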

View File

@@ -759,7 +759,7 @@ class HomeServer(metaclass=abc.ABCMeta):
     @cache_in_self
     def get_event_client_serializer(self) -> EventClientSerializer:
-        return EventClientSerializer(self)
+        return EventClientSerializer()
     @cache_in_self
     def get_password_policy_handler(self) -> PasswordPolicyHandler:

View File

@@ -45,7 +45,6 @@ from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, StateResolutionVersions
 from synapse.events import EventBase
 from synapse.events.snapshot import EventContext
 from synapse.logging.context import ContextResourceUsage
-from synapse.logging.utils import log_function
 from synapse.state import v1, v2
 from synapse.storage.databases.main.events_worker import EventRedactBehaviour
 from synapse.storage.roommember import ProfileInfo
@@ -450,19 +449,19 @@ class StateHandler:
         return {key: state_map[ev_id] for key, ev_id in new_state.items()}
-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class _StateResMetrics:
     """Keeps track of some usage metrics about state res."""
     # System and User CPU time, in seconds
-    cpu_time = attr.ib(type=float, default=0.0)
+    cpu_time: float = 0.0
     # time spent on database transactions (excluding scheduling time). This roughly
     # corresponds to the amount of work done on the db server, excluding event fetches.
-    db_time = attr.ib(type=float, default=0.0)
+    db_time: float = 0.0
     # number of events fetched from the db.
-    db_events = attr.ib(type=int, default=0)
+    db_events: int = 0
 _biggest_room_by_cpu_counter = Counter(
@@ -512,7 +511,6 @@ class StateResolutionHandler:
         self.clock.looping_call(self._report_metrics, 120 * 1000)
-    @log_function
     async def resolve_state_groups(
         self,
         room_id: str,

View File

@@ -143,7 +143,7 @@ def make_conn(
     return db_conn
-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class LoggingDatabaseConnection:
     """A wrapper around a database connection that returns `LoggingTransaction`
     as its cursor class.
@@ -151,9 +151,9 @@ class LoggingDatabaseConnection:
     This is mainly used on startup to ensure that queries get logged correctly
     """
-    conn = attr.ib(type=Connection)
-    engine = attr.ib(type=BaseDatabaseEngine)
-    default_txn_name = attr.ib(type=str)
+    conn: Connection
+    engine: BaseDatabaseEngine
+    default_txn_name: str
     def cursor(
         self, *, txn_name=None, after_callbacks=None, exception_callbacks=None
@@ -934,56 +934,6 @@ class DatabasePool:
             txn.execute(sql, vals)
     async def simple_insert_many(
-        self, table: str, values: List[Dict[str, Any]], desc: str
-    ) -> None:
-        """Executes an INSERT query on the named table.
-        The input is given as a list of dicts, with one dict per row.
-        Generally simple_insert_many_values should be preferred for new code.
-        Args:
-            table: string giving the table name
-            values: dict of new column names and values for them
-            desc: description of the transaction, for logging and metrics
-        """
-        await self.runInteraction(desc, self.simple_insert_many_txn, table, values)
-    @staticmethod
-    def simple_insert_many_txn(
-        txn: LoggingTransaction, table: str, values: List[Dict[str, Any]]
-    ) -> None:
-        """Executes an INSERT query on the named table.
-        The input is given as a list of dicts, with one dict per row.
-        Generally simple_insert_many_values_txn should be preferred for new code.
-        Args:
-            txn: The transaction to use.
-            table: string giving the table name
-            values: dict of new column names and values for them
-        """
-        if not values:
-            return
-        # This is a *slight* abomination to get a list of tuples of key names
-        # and a list of tuples of value names.
-        #
-        # i.e. [{"a": 1, "b": 2}, {"c": 3, "d": 4}]
-        #   => [("a", "b",), ("c", "d",)] and [(1, 2,), (3, 4,)]
-        #
-        # The sort is to ensure that we don't rely on dictionary iteration
-        # order.
-        keys, vals = zip(
-            *(zip(*(sorted(i.items(), key=lambda kv: kv[0]))) for i in values if i)
-        )
-        for k in keys:
-            if k != keys[0]:
-                raise RuntimeError("All items must have the same keys")
-        return DatabasePool.simple_insert_many_values_txn(txn, table, keys[0], vals)
-    async def simple_insert_many_values(
         self,
         table: str,
         keys: Collection[str],
@@ -1002,11 +952,11 @@ class DatabasePool:
             desc: description of the transaction, for logging and metrics
         """
         await self.runInteraction(
-            desc, self.simple_insert_many_values_txn, table, keys, values
+            desc, self.simple_insert_many_txn, table, keys, values
         )
     @staticmethod
-    def simple_insert_many_values_txn(
+    def simple_insert_many_txn(
         txn: LoggingTransaction,
         table: str,
         keys: Collection[str],
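The refactor above drops the dict-per-row calling convention for `simple_insert_many` in favour of an explicit `keys` tuple plus one value tuple per row. A standalone sketch of the conversion the removed helper performed internally (the function name and sample rows are illustrative, not Synapse code):

```python
from typing import Any, Dict, List, Tuple

def dicts_to_keys_and_values(
    rows: List[Dict[str, Any]]
) -> Tuple[Tuple[str, ...], List[Tuple[Any, ...]]]:
    """Convert list-of-dicts rows (the old convention) into the
    keys + value-tuples shape the renamed simple_insert_many expects."""
    if not rows:
        return (), []
    # Sort the column names so we don't rely on dict iteration order,
    # mirroring the sort in the removed simple_insert_many_txn.
    keys = tuple(sorted(rows[0]))
    if any(tuple(sorted(r)) != keys for r in rows):
        raise RuntimeError("All items must have the same keys")
    values = [tuple(r[k] for k in keys) for r in rows]
    return keys, values

keys, values = dicts_to_keys_and_values(
    [
        {"user_id": "@a:hs", "device_id": "D1"},
        {"user_id": "@b:hs", "device_id": "D2"},
    ]
)
# keys is the sorted column tuple; values holds one tuple per row.
```
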

View File

@@ -450,7 +450,7 @@ class AccountDataWorkerStore(CacheInvalidationWorkerStore):
     async def add_account_data_for_user(
         self, user_id: str, account_data_type: str, content: JsonDict
     ) -> int:
-        """Add some account_data to a room for a user.
+        """Add some global account_data for a user.
         Args:
             user_id: The user to add a tag for.
@@ -536,9 +536,9 @@ class AccountDataWorkerStore(CacheInvalidationWorkerStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="ignored_users",
+            keys=("ignorer_user_id", "ignored_user_id"),
             values=[
-                {"ignorer_user_id": user_id, "ignored_user_id": u}
-                for u in currently_ignored_users - previously_ignored_users
+                (user_id, u) for u in currently_ignored_users - previously_ignored_users
             ],
         )

View File

@@ -432,14 +432,21 @@ class DeviceInboxWorkerStore(SQLBaseStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="device_federation_outbox",
+            keys=(
+                "destination",
+                "stream_id",
+                "queued_ts",
+                "messages_json",
+                "instance_name",
+            ),
             values=[
-                {
-                    "destination": destination,
-                    "stream_id": stream_id,
-                    "queued_ts": now_ms,
-                    "messages_json": json_encoder.encode(edu),
-                    "instance_name": self._instance_name,
-                }
+                (
+                    destination,
+                    stream_id,
+                    now_ms,
+                    json_encoder.encode(edu),
+                    self._instance_name,
+                )
                 for destination, edu in remote_messages_by_destination.items()
             ],
         )
@@ -571,14 +578,9 @@ class DeviceInboxWorkerStore(SQLBaseStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="device_inbox",
+            keys=("user_id", "device_id", "stream_id", "message_json", "instance_name"),
             values=[
-                {
-                    "user_id": user_id,
-                    "device_id": device_id,
-                    "stream_id": stream_id,
-                    "message_json": message_json,
-                    "instance_name": self._instance_name,
-                }
+                (user_id, device_id, stream_id, message_json, self._instance_name)
                 for user_id, messages_by_device in local_by_user_then_device.items()
                 for device_id, message_json in messages_by_device.items()
             ],

View File

@@ -53,6 +53,7 @@ if TYPE_CHECKING:
     from synapse.server import HomeServer
 logger = logging.getLogger(__name__)
+issue_8631_logger = logging.getLogger("synapse.8631_debug")
 DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES = (
     "drop_device_list_streams_non_unique_indexes"
@@ -229,6 +230,12 @@ class DeviceWorkerStore(SQLBaseStore):
         if not updates:
             return now_stream_id, []
+        if issue_8631_logger.isEnabledFor(logging.DEBUG):
+            data = {(user, device): stream_id for user, device, stream_id, _ in updates}
+            issue_8631_logger.debug(
+                "device updates need to be sent to %s: %s", destination, data
+            )
         # get the cross-signing keys of the users in the list, so that we can
         # determine which of the device changes were cross-signing keys
         users = {r[0] for r in updates}
@@ -365,6 +372,17 @@ class DeviceWorkerStore(SQLBaseStore):
             # and remove the length budgeting above.
             results.append(("org.matrix.signing_key_update", result))
+        if issue_8631_logger.isEnabledFor(logging.DEBUG):
+            for (user_id, edu) in results:
+                issue_8631_logger.debug(
+                    "device update to %s for %s from %s to %s: %s",
+                    destination,
+                    user_id,
+                    from_stream_id,
+                    last_processed_stream_id,
+                    edu,
+                )
         return last_processed_stream_id, results
     def _get_device_updates_by_remote_txn(
@@ -781,7 +799,7 @@ class DeviceWorkerStore(SQLBaseStore):
     @cached(max_entries=10000)
     async def get_device_list_last_stream_id_for_remote(
         self, user_id: str
-    ) -> Optional[Any]:
+    ) -> Optional[str]:
         """Get the last stream_id we got for a user. May be None if we haven't
         got any information for them.
         """
@@ -797,7 +815,9 @@ class DeviceWorkerStore(SQLBaseStore):
         cached_method_name="get_device_list_last_stream_id_for_remote",
         list_name="user_ids",
     )
-    async def get_device_list_last_stream_id_for_remotes(self, user_ids: Iterable[str]):
+    async def get_device_list_last_stream_id_for_remotes(
+        self, user_ids: Iterable[str]
+    ) -> Dict[str, Optional[str]]:
         rows = await self.db_pool.simple_select_many_batch(
             table="device_lists_remote_extremeties",
             column="user_id",
@@ -1384,6 +1404,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
         content: JsonDict,
         stream_id: str,
     ) -> None:
+        """Delete, update or insert a cache entry for this (user, device) pair."""
         if content.get("deleted"):
             self.db_pool.simple_delete_txn(
                 txn,
@@ -1443,6 +1464,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
     def _update_remote_device_list_cache_txn(
         self, txn: LoggingTransaction, user_id: str, devices: List[dict], stream_id: int
     ) -> None:
+        """Replace the list of cached devices for this user with the given list."""
         self.db_pool.simple_delete_txn(
             txn, table="device_lists_remote_cache", keyvalues={"user_id": user_id}
         )
@@ -1450,12 +1472,9 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="device_lists_remote_cache",
+            keys=("user_id", "device_id", "content"),
             values=[
-                {
-                    "user_id": user_id,
-                    "device_id": content["device_id"],
-                    "content": json_encoder.encode(content),
-                }
+                (user_id, content["device_id"], json_encoder.encode(content))
                 for content in devices
             ],
         )
@@ -1543,8 +1562,9 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="device_lists_stream",
+            keys=("stream_id", "user_id", "device_id"),
             values=[
-                {"stream_id": stream_id, "user_id": user_id, "device_id": device_id}
+                (stream_id, user_id, device_id)
                 for stream_id, device_id in zip(stream_ids, device_ids)
             ],
         )
@@ -1571,18 +1591,27 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="device_lists_outbound_pokes",
+            keys=(
+                "destination",
+                "stream_id",
+                "user_id",
+                "device_id",
+                "sent",
+                "ts",
+                "opentracing_context",
+            ),
             values=[
-                {
-                    "destination": destination,
-                    "stream_id": next(next_stream_id),
-                    "user_id": user_id,
-                    "device_id": device_id,
-                    "sent": False,
-                    "ts": now,
-                    "opentracing_context": json_encoder.encode(context)
-                    if whitelisted_homeserver(destination)
+                (
+                    destination,
+                    next(next_stream_id),
+                    user_id,
+                    device_id,
+                    False,
+                    now,
+                    json_encoder.encode(context)
+                    if whitelisted_homeserver(destination)
else "{}", else "{}",
} )
for destination in hosts for destination in hosts
for device_id in device_ids for device_id in device_ids
], ],
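The pattern repeated throughout this diff — passing the column names once via `keys=` and each row as a positional tuple, instead of one dict per row — can be sketched in isolation. This is a minimal stand-in built on `sqlite3`; `simple_insert_many` here is a hypothetical simplification, not Synapse's real helper, and the table name is reused from the diff purely as an illustration:

```python
import sqlite3
from typing import Any, Iterable, Sequence, Tuple


def simple_insert_many(
    conn: sqlite3.Connection,
    table: str,
    keys: Sequence[str],
    values: Iterable[Tuple[Any, ...]],
) -> None:
    """Insert many rows: column names are given once via `keys`, and each
    row is a positional tuple, avoiding building a dict per row."""
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (
        table,
        ", ".join(keys),
        ", ".join("?" for _ in keys),
    )
    conn.executemany(sql, values)


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE device_lists_stream (stream_id, user_id, device_id)")

simple_insert_many(
    conn,
    table="device_lists_stream",
    keys=("stream_id", "user_id", "device_id"),
    values=[(1, "@alice:example.com", "DEV1"), (2, "@bob:example.com", "DEV2")],
)
assert conn.execute("SELECT COUNT(*) FROM device_lists_stream").fetchone()[0] == 2
```

The tuple form also lets the database driver batch the statement once, rather than re-deriving the column list from every row's dict.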


@@ -112,10 +112,8 @@ class DirectoryWorkerStore(CacheInvalidationWorkerStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="room_alias_servers",
-            values=[
-                {"room_alias": room_alias.to_string(), "server": server}
-                for server in servers
-            ],
+            keys=("room_alias", "server"),
+            values=[(room_alias.to_string(), server) for server in servers],
         )
         self._invalidate_cache_and_stream(


@@ -110,16 +110,16 @@ class EndToEndRoomKeyStore(SQLBaseStore):
         values = []
         for (room_id, session_id, room_key) in room_keys:
             values.append(
-                {
-                    "user_id": user_id,
-                    "version": version_int,
-                    "room_id": room_id,
-                    "session_id": session_id,
-                    "first_message_index": room_key["first_message_index"],
-                    "forwarded_count": room_key["forwarded_count"],
-                    "is_verified": room_key["is_verified"],
-                    "session_data": json_encoder.encode(room_key["session_data"]),
-                }
+                (
+                    user_id,
+                    version_int,
+                    room_id,
+                    session_id,
+                    room_key["first_message_index"],
+                    room_key["forwarded_count"],
+                    room_key["is_verified"],
+                    json_encoder.encode(room_key["session_data"]),
+                )
             )
             log_kv(
                 {
@@ -131,7 +131,19 @@ class EndToEndRoomKeyStore(SQLBaseStore):
         )

         await self.db_pool.simple_insert_many(
-            table="e2e_room_keys", values=values, desc="add_e2e_room_keys"
+            table="e2e_room_keys",
+            keys=(
+                "user_id",
+                "version",
+                "room_id",
+                "session_id",
+                "first_message_index",
+                "forwarded_count",
+                "is_verified",
+                "session_data",
+            ),
+            values=values,
+            desc="add_e2e_room_keys",
         )

     @trace


@@ -50,16 +50,16 @@ if TYPE_CHECKING:
     from synapse.server import HomeServer


-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class DeviceKeyLookupResult:
     """The type returned by get_e2e_device_keys_and_signatures"""

-    display_name = attr.ib(type=Optional[str])
+    display_name: Optional[str]

     # the key data from e2e_device_keys_json. Typically includes fields like
     # "algorithm", "keys" (including the curve25519 identity key and the ed25519 signing
     # key) and "signatures" (a map from (user id) to (key id/device_id) to signature.)
-    keys = attr.ib(type=Optional[JsonDict])
+    keys: Optional[JsonDict]


 class EndToEndKeyBackgroundStore(SQLBaseStore):
@@ -387,15 +387,16 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
         self.db_pool.simple_insert_many_txn(
             txn,
             table="e2e_one_time_keys_json",
+            keys=(
+                "user_id",
+                "device_id",
+                "algorithm",
+                "key_id",
+                "ts_added_ms",
+                "key_json",
+            ),
             values=[
-                {
-                    "user_id": user_id,
-                    "device_id": device_id,
-                    "algorithm": algorithm,
-                    "key_id": key_id,
-                    "ts_added_ms": time_now,
-                    "key_json": json_bytes,
-                }
+                (user_id, device_id, algorithm, key_id, time_now, json_bytes)
                 for algorithm, key_id, json_bytes in new_keys
             ],
         )
@@ -1186,15 +1187,22 @@ class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
         """
         await self.db_pool.simple_insert_many(
             "e2e_cross_signing_signatures",
-            [
-                {
-                    "user_id": user_id,
-                    "key_id": item.signing_key_id,
-                    "target_user_id": item.target_user_id,
-                    "target_device_id": item.target_device_id,
-                    "signature": item.signature,
-                }
+            keys=(
+                "user_id",
+                "key_id",
+                "target_user_id",
+                "target_device_id",
+                "signature",
+            ),
+            values=[
+                (
+                    user_id,
+                    item.signing_key_id,
+                    item.target_user_id,
+                    item.target_device_id,
+                    item.signature,
+                )
                 for item in signatures
             ],
-            "add_e2e_signing_key",
+            desc="add_e2e_signing_key",
         )


@@ -875,14 +875,21 @@ class EventPushActionsWorkerStore(SQLBaseStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="event_push_summary",
+            keys=(
+                "user_id",
+                "room_id",
+                "notif_count",
+                "unread_count",
+                "stream_ordering",
+            ),
             values=[
-                {
-                    "user_id": user_id,
-                    "room_id": room_id,
-                    "notif_count": summary.notif_count,
-                    "unread_count": summary.unread_count,
-                    "stream_ordering": summary.stream_ordering,
-                }
+                (
+                    user_id,
+                    room_id,
+                    summary.notif_count,
+                    summary.unread_count,
+                    summary.stream_ordering,
+                )
                 for ((user_id, room_id), summary) in summaries.items()
                 if summary.old_user_id is None
             ],


@@ -39,7 +39,6 @@ from synapse.api.room_versions import RoomVersions
 from synapse.crypto.event_signing import compute_event_reference_hash
 from synapse.events import EventBase  # noqa: F401
 from synapse.events.snapshot import EventContext  # noqa: F401
-from synapse.logging.utils import log_function
 from synapse.storage._base import db_to_json, make_in_list_sql_clause
 from synapse.storage.database import (
     DatabasePool,
@@ -69,7 +68,7 @@ event_counter = Counter(
 )


-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class DeltaState:
     """Deltas to use to update the `current_state_events` table.
@@ -80,9 +79,9 @@ class DeltaState:
     should e.g. be removed from `current_state_events` table.
     """

-    to_delete = attr.ib(type=List[Tuple[str, str]])
-    to_insert = attr.ib(type=StateMap[str])
-    no_longer_in_room = attr.ib(type=bool, default=False)
+    to_delete: List[Tuple[str, str]]
+    to_insert: StateMap[str]
+    no_longer_in_room: bool = False


 class PersistEventsStore:
@@ -328,7 +327,6 @@ class PersistEventsStore:

         return existing_prevs

-    @log_function
     def _persist_events_txn(
         self,
         txn: LoggingTransaction,
@@ -442,12 +440,9 @@ class PersistEventsStore:
         self.db_pool.simple_insert_many_txn(
             txn,
             table="event_auth",
+            keys=("event_id", "room_id", "auth_id"),
             values=[
-                {
-                    "event_id": event.event_id,
-                    "room_id": event.room_id,
-                    "auth_id": auth_id,
-                }
+                (event.event_id, event.room_id, auth_id)
                 for event in events
                 for auth_id in event.auth_event_ids()
                 if event.is_state()
@@ -675,8 +670,9 @@ class PersistEventsStore:
         db_pool.simple_insert_many_txn(
             txn,
             table="event_auth_chains",
+            keys=("event_id", "chain_id", "sequence_number"),
             values=[
-                {"event_id": event_id, "chain_id": c_id, "sequence_number": seq}
+                (event_id, c_id, seq)
                 for event_id, (c_id, seq) in new_chain_tuples.items()
             ],
         )
@@ -782,13 +778,14 @@ class PersistEventsStore:
         db_pool.simple_insert_many_txn(
             txn,
             table="event_auth_chain_links",
+            keys=(
+                "origin_chain_id",
+                "origin_sequence_number",
+                "target_chain_id",
+                "target_sequence_number",
+            ),
             values=[
-                {
-                    "origin_chain_id": source_id,
-                    "origin_sequence_number": source_seq,
-                    "target_chain_id": target_id,
-                    "target_sequence_number": target_seq,
-                }
+                (source_id, source_seq, target_id, target_seq)
                 for (
                     source_id,
                     source_seq,
@@ -943,20 +940,28 @@ class PersistEventsStore:
             txn_id = getattr(event.internal_metadata, "txn_id", None)
             if token_id and txn_id:
                 to_insert.append(
-                    {
-                        "event_id": event.event_id,
-                        "room_id": event.room_id,
-                        "user_id": event.sender,
-                        "token_id": token_id,
-                        "txn_id": txn_id,
-                        "inserted_ts": self._clock.time_msec(),
-                    }
+                    (
+                        event.event_id,
+                        event.room_id,
+                        event.sender,
+                        token_id,
+                        txn_id,
+                        self._clock.time_msec(),
+                    )
                 )

         if to_insert:
             self.db_pool.simple_insert_many_txn(
                 txn,
                 table="event_txn_id",
+                keys=(
+                    "event_id",
+                    "room_id",
+                    "user_id",
+                    "token_id",
+                    "txn_id",
+                    "inserted_ts",
+                ),
                 values=to_insert,
             )
@@ -1161,8 +1166,9 @@ class PersistEventsStore:
         self.db_pool.simple_insert_many_txn(
             txn,
             table="event_forward_extremities",
+            keys=("event_id", "room_id"),
             values=[
-                {"event_id": ev_id, "room_id": room_id}
+                (ev_id, room_id)
                 for room_id, new_extrem in new_forward_extremities.items()
                 for ev_id in new_extrem
             ],
@@ -1174,12 +1180,9 @@ class PersistEventsStore:
         self.db_pool.simple_insert_many_txn(
             txn,
             table="stream_ordering_to_exterm",
+            keys=("room_id", "event_id", "stream_ordering"),
             values=[
-                {
-                    "room_id": room_id,
-                    "event_id": event_id,
-                    "stream_ordering": max_stream_order,
-                }
+                (room_id, event_id, max_stream_order)
                 for room_id, new_extrem in new_forward_extremities.items()
                 for event_id in new_extrem
             ],
@@ -1251,20 +1254,22 @@ class PersistEventsStore:
         for room_id, depth in depth_updates.items():
             self._update_min_depth_for_room_txn(txn, room_id, depth)

-    def _update_outliers_txn(self, txn, events_and_contexts):
+    def _update_outliers_txn(
+        self,
+        txn: LoggingTransaction,
+        events_and_contexts: List[Tuple[EventBase, EventContext]],
+    ) -> List[Tuple[EventBase, EventContext]]:
         """Update any outliers with new event info.

-        This turns outliers into ex-outliers (unless the new event was
-        rejected).
+        This turns outliers into ex-outliers (unless the new event was rejected), and
+        also removes any other events we have already seen from the list.

         Args:
-            txn (twisted.enterprise.adbapi.Connection): db connection
-            events_and_contexts (list[(EventBase, EventContext)]): events
-                we are persisting
+            txn: db connection
+            events_and_contexts: events we are persisting

         Returns:
-            list[(EventBase, EventContext)] new list, without events which
-            are already in the events table.
+            new list, without events which are already in the events table.
         """
         txn.execute(
             "SELECT event_id, outlier FROM events WHERE event_id in (%s)"
@@ -1272,7 +1277,9 @@ class PersistEventsStore:
             [event.event_id for event, _ in events_and_contexts],
         )

-        have_persisted = {event_id: outlier for event_id, outlier in txn}
+        have_persisted: Dict[str, bool] = {
+            event_id: outlier for event_id, outlier in txn
+        }

         to_remove = set()
         for event, context in events_and_contexts:
@@ -1282,15 +1289,22 @@ class PersistEventsStore:
                 to_remove.add(event)

             if context.rejected:
-                # If the event is rejected then we don't care if the event
-                # was an outlier or not.
+                # If the incoming event is rejected then we don't care if the event
+                # was an outlier or not - what we have is at least as good.
                 continue

             outlier_persisted = have_persisted[event.event_id]
             if not event.internal_metadata.is_outlier() and outlier_persisted:
                 # We received a copy of an event that we had already stored as
-                # an outlier in the database. We now have some state at that
+                # an outlier in the database. We now have some state at that event
                 # so we need to update the state_groups table with that state.
+                #
+                # Note that we do not update the stream_ordering of the event in this
+                # scenario. XXX: does this cause bugs? It will mean we won't send such
+                # events down /sync. In general they will be historical events, so that
+                # doesn't matter too much, but that is not always the case.
+
+                logger.info("Updating state for ex-outlier event %s", event.event_id)

                 # insert into event_to_state_groups.
                 try:
@@ -1342,7 +1356,7 @@ class PersistEventsStore:
                 d.pop("redacted_because", None)
                 return d

-        self.db_pool.simple_insert_many_values_txn(
+        self.db_pool.simple_insert_many_txn(
             txn,
             table="event_json",
             keys=("event_id", "room_id", "internal_metadata", "json", "format_version"),
@@ -1358,7 +1372,7 @@ class PersistEventsStore:
             ),
         )

-        self.db_pool.simple_insert_many_values_txn(
+        self.db_pool.simple_insert_many_txn(
             txn,
             table="events",
             keys=(
@@ -1412,7 +1426,7 @@ class PersistEventsStore:
             )
             txn.execute(sql + clause, [False] + args)

-            self.db_pool.simple_insert_many_values_txn(
+            self.db_pool.simple_insert_many_txn(
                 txn,
                 table="state_events",
                 keys=("event_id", "room_id", "type", "state_key"),
@@ -1622,14 +1636,9 @@ class PersistEventsStore:
         return self.db_pool.simple_insert_many_txn(
             txn=txn,
             table="event_labels",
+            keys=("event_id", "label", "room_id", "topological_ordering"),
             values=[
-                {
-                    "event_id": event_id,
-                    "label": label,
-                    "room_id": room_id,
-                    "topological_ordering": topological_ordering,
-                }
-                for label in labels
+                (event_id, label, room_id, topological_ordering) for label in labels
             ],
         )
@@ -1657,16 +1666,13 @@ class PersistEventsStore:
         vals = []
         for event in events:
             ref_alg, ref_hash_bytes = compute_event_reference_hash(event)
-            vals.append(
-                {
-                    "event_id": event.event_id,
-                    "algorithm": ref_alg,
-                    "hash": memoryview(ref_hash_bytes),
-                }
-            )
+            vals.append((event.event_id, ref_alg, memoryview(ref_hash_bytes)))

         self.db_pool.simple_insert_many_txn(
-            txn, table="event_reference_hashes", values=vals
+            txn,
+            table="event_reference_hashes",
+            keys=("event_id", "algorithm", "hash"),
+            values=vals,
         )

     def _store_room_members_txn(
@@ -1689,18 +1695,25 @@ class PersistEventsStore:
         self.db_pool.simple_insert_many_txn(
             txn,
             table="room_memberships",
+            keys=(
+                "event_id",
+                "user_id",
+                "sender",
+                "room_id",
+                "membership",
+                "display_name",
+                "avatar_url",
+            ),
             values=[
-                {
-                    "event_id": event.event_id,
-                    "user_id": event.state_key,
-                    "sender": event.user_id,
-                    "room_id": event.room_id,
-                    "membership": event.membership,
-                    "display_name": non_null_str_or_none(
-                        event.content.get("displayname")
-                    ),
-                    "avatar_url": non_null_str_or_none(event.content.get("avatar_url")),
-                }
+                (
+                    event.event_id,
+                    event.state_key,
+                    event.user_id,
+                    event.room_id,
+                    event.membership,
+                    non_null_str_or_none(event.content.get("displayname")),
+                    non_null_str_or_none(event.content.get("avatar_url")),
+                )
                 for event in events
             ],
         )
@@ -1791,6 +1804,13 @@ class PersistEventsStore:
             txn.call_after(
                 self.store.get_thread_summary.invalidate, (parent_id, event.room_id)
             )
+            # It should be safe to only invalidate the cache if the user has not
+            # previously participated in the thread, but that's difficult (and
+            # potentially error-prone) so it is always invalidated.
+            txn.call_after(
+                self.store.get_thread_participated.invalidate,
+                (parent_id, event.room_id, event.sender),
+            )

     def _handle_insertion_event(self, txn: LoggingTransaction, event: EventBase):
         """Handles keeping track of insertion events and edges/connections.
@@ -2163,13 +2183,9 @@ class PersistEventsStore:
         self.db_pool.simple_insert_many_txn(
             txn,
             table="event_edges",
+            keys=("event_id", "prev_event_id", "room_id", "is_state"),
             values=[
-                {
-                    "event_id": ev.event_id,
-                    "prev_event_id": e_id,
-                    "room_id": ev.room_id,
-                    "is_state": False,
-                }
+                (ev.event_id, e_id, ev.room_id, False)
                 for ev in events
                 for e_id in ev.prev_event_ids()
             ],
@@ -2226,17 +2242,17 @@ class PersistEventsStore:
         )


-@attr.s(slots=True)
+@attr.s(slots=True, auto_attribs=True)
 class _LinkMap:
     """A helper type for tracking links between chains."""

     # Stores the set of links as nested maps: source chain ID -> target chain ID
     # -> source sequence number -> target sequence number.
-    maps = attr.ib(type=Dict[int, Dict[int, Dict[int, int]]], factory=dict)
+    maps: Dict[int, Dict[int, Dict[int, int]]] = attr.Factory(dict)

     # Stores the links that have been added (with new set to true), as tuples of
     # `(source chain ID, source sequence no, target chain ID, target sequence no.)`
-    additions = attr.ib(type=Set[Tuple[int, int, int, int]], factory=set)
+    additions: Set[Tuple[int, int, int, int]] = attr.Factory(set)

     def add_link(
         self,
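The `txn.call_after(...invalidate...)` calls added in this file defer cache invalidation until the surrounding transaction actually commits. The mechanism can be sketched generically — this is a hypothetical minimal version, not Synapse's real `LoggingTransaction`:

```python
from typing import Any, Callable, List


class Transaction:
    """Minimal sketch of a transaction that runs callbacks only on commit."""

    def __init__(self) -> None:
        self._after_callbacks: List[Callable[[], Any]] = []

    def call_after(self, fn: Callable[..., Any], *args: Any) -> None:
        # Record the callback; it must not fire if the txn is rolled back,
        # otherwise caches would be invalidated for writes that never landed.
        self._after_callbacks.append(lambda: fn(*args))

    def commit(self) -> None:
        for cb in self._after_callbacks:
            cb()


invalidated = []
txn = Transaction()
txn.call_after(invalidated.append, ("$thread_root", "!room", "@alice:example.com"))
assert invalidated == []  # nothing happens until commit
txn.commit()
assert invalidated == [("$thread_root", "!room", "@alice:example.com")]
```

Deferring the invalidation this way is what makes it safe for the diff to always invalidate `get_thread_participated`, as the comment in the hunk above notes.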


@@ -65,22 +65,22 @@ class _BackgroundUpdates:
     REPLACE_STREAM_ORDERING_COLUMN = "replace_stream_ordering_column"


-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class _CalculateChainCover:
     """Return value for _calculate_chain_cover_txn."""

     # The last room_id/depth/stream processed.
-    room_id = attr.ib(type=str)
-    depth = attr.ib(type=int)
-    stream = attr.ib(type=int)
+    room_id: str
+    depth: int
+    stream: int

     # Number of rows processed
-    processed_count = attr.ib(type=int)
+    processed_count: int

     # Map from room_id to last depth/stream processed for each room that we have
     # processed all events for (i.e. the rooms we can flip the
     # `has_auth_chain_index` for)
-    finished_room_map = attr.ib(type=Dict[str, Tuple[int, int]])
+    finished_room_map: Dict[str, Tuple[int, int]]


 class EventsBackgroundUpdatesStore(SQLBaseStore):
@@ -684,13 +684,14 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
                     self.db_pool.simple_insert_many_txn(
                         txn=txn,
                         table="event_labels",
+                        keys=("event_id", "label", "room_id", "topological_ordering"),
                         values=[
-                            {
-                                "event_id": event_id,
-                                "label": label,
-                                "room_id": event_json["room_id"],
-                                "topological_ordering": event_json["depth"],
-                            }
+                            (
+                                event_id,
+                                label,
+                                event_json["room_id"],
+                                event_json["depth"],
+                            )
                             for label in event_json["content"].get(
                                 EventContentFields.LABELS, []
                             )
@@ -803,29 +804,19 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
             if not has_state:
                 state_events.append(
-                    {
-                        "event_id": event.event_id,
-                        "room_id": event.room_id,
-                        "type": event.type,
-                        "state_key": event.state_key,
-                    }
+                    (event.event_id, event.room_id, event.type, event.state_key)
                 )

             if not has_event_auth:
                 # Old, dodgy, events may have duplicate auth events, which we
                 # need to deduplicate as we have a unique constraint.
                 for auth_id in set(event.auth_event_ids()):
-                    auth_events.append(
-                        {
-                            "room_id": event.room_id,
-                            "event_id": event.event_id,
-                            "auth_id": auth_id,
-                        }
-                    )
+                    auth_events.append((event.event_id, event.room_id, auth_id))

         if state_events:
             await self.db_pool.simple_insert_many(
                 table="state_events",
+                keys=("event_id", "room_id", "type", "state_key"),
                 values=state_events,
                 desc="_rejected_events_metadata_state_events",
             )
@@ -833,6 +824,7 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
         if auth_events:
             await self.db_pool.simple_insert_many(
                 table="event_auth",
                keys=("event_id", "room_id", "auth_id"),
                 values=auth_events,
                 desc="_rejected_events_metadata_event_auth",
             )


@@ -129,18 +129,29 @@ class PresenceStore(PresenceBackgroundUpdateStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="presence_stream",
+            keys=(
+                "stream_id",
+                "user_id",
+                "state",
+                "last_active_ts",
+                "last_federation_update_ts",
+                "last_user_sync_ts",
+                "status_msg",
+                "currently_active",
+                "instance_name",
+            ),
             values=[
-                {
-                    "stream_id": stream_id,
-                    "user_id": state.user_id,
-                    "state": state.state,
-                    "last_active_ts": state.last_active_ts,
-                    "last_federation_update_ts": state.last_federation_update_ts,
-                    "last_user_sync_ts": state.last_user_sync_ts,
-                    "status_msg": state.status_msg,
-                    "currently_active": state.currently_active,
-                    "instance_name": self._instance_name,
-                }
+                (
+                    stream_id,
+                    state.user_id,
+                    state.state,
+                    state.last_active_ts,
+                    state.last_federation_update_ts,
+                    state.last_user_sync_ts,
+                    state.status_msg,
+                    state.currently_active,
+                    self._instance_name,
+                )
                 for stream_id, state in zip(stream_orderings, presence_states)
             ],
         )


@@ -561,13 +561,9 @@ class PusherStore(PusherWorkerStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="deleted_pushers",
+            keys=("stream_id", "app_id", "pushkey", "user_id"),
             values=[
-                {
-                    "stream_id": stream_id,
-                    "app_id": pusher.app_id,
-                    "pushkey": pusher.pushkey,
-                    "user_id": user_id,
-                }
+                (stream_id, pusher.app_id, pusher.pushkey, user_id)
                 for stream_id, pusher in zip(stream_ids, pushers)
             ],
         )


@@ -51,7 +51,7 @@ class ExternalIDReuseException(Exception):
     pass


-@attr.s(frozen=True, slots=True)
+@attr.s(frozen=True, slots=True, auto_attribs=True)
 class TokenLookupResult:
     """Result of looking up an access token.

@@ -69,14 +69,14 @@ class TokenLookupResult:
         cached.
     """

-    user_id = attr.ib(type=str)
-    is_guest = attr.ib(type=bool, default=False)
-    shadow_banned = attr.ib(type=bool, default=False)
-    token_id = attr.ib(type=Optional[int], default=None)
-    device_id = attr.ib(type=Optional[str], default=None)
-    valid_until_ms = attr.ib(type=Optional[int], default=None)
-    token_owner = attr.ib(type=str)
-    token_used = attr.ib(type=bool, default=False)
+    user_id: str
+    is_guest: bool = False
+    shadow_banned: bool = False
+    token_id: Optional[int] = None
+    device_id: Optional[str] = None
+    valid_until_ms: Optional[int] = None
+    token_owner: str = attr.ib()
+    token_used: bool = False

     # Make the token owner default to the user ID, which is the common case.
     @token_owner.default
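The attrs conversion running through this diff swaps `attr.ib(type=...)` declarations for `auto_attribs=True` with plain type annotations. The `token_owner` attribute above shows the one wrinkle: an annotated field can still be declared with `attr.ib()` when it needs a `@<name>.default` factory. A small self-contained sketch of the pattern (a hypothetical, trimmed-down analogue of `TokenLookupResult`, not the real class):

```python
from typing import Optional

import attr


@attr.s(frozen=True, slots=True, auto_attribs=True)
class TokenLookup:
    user_id: str
    is_guest: bool = False
    device_id: Optional[str] = None
    # An annotated attribute may still be assigned attr.ib() so that a
    # @token_owner.default method can supply its default.
    token_owner: str = attr.ib()

    @token_owner.default
    def _default_token_owner(self) -> str:
        # Default the token owner to the user ID, the common case.
        return self.user_id


t = TokenLookup(user_id="@alice:example.com")
assert t.token_owner == "@alice:example.com"
assert TokenLookup(user_id="@bob:example.com", token_owner="@puppet:example.com").token_owner == "@puppet:example.com"
```

With `auto_attribs=True` the declaration order of annotated fields becomes the attribute order, which is why fields without defaults must still come first.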


@@ -13,14 +13,30 @@
 # limitations under the License.

 import logging
-from typing import List, Optional, Tuple, Union, cast
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Dict,
+    Iterable,
+    List,
+    Optional,
+    Tuple,
+    Union,
+    cast,
+)

 import attr
+from frozendict import frozendict

-from synapse.api.constants import RelationTypes
+from synapse.api.constants import EventTypes, RelationTypes
 from synapse.events import EventBase
 from synapse.storage._base import SQLBaseStore
-from synapse.storage.database import LoggingTransaction, make_in_list_sql_clause
+from synapse.storage.database import (
+    DatabasePool,
+    LoggingDatabaseConnection,
+    LoggingTransaction,
+    make_in_list_sql_clause,
+)
 from synapse.storage.databases.main.stream import generate_pagination_where_clause
 from synapse.storage.relations import (
     AggregationPaginationToken,
@@ -29,10 +45,24 @@ from synapse.storage.relations import (
 )
 from synapse.util.caches.descriptors import cached

+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
 logger = logging.getLogger(__name__)


 class RelationsWorkerStore(SQLBaseStore):
+    def __init__(
+        self,
+        database: DatabasePool,
+        db_conn: LoggingDatabaseConnection,
+        hs: "HomeServer",
+    ):
+        super().__init__(database, db_conn, hs)
+
+        self._msc1849_enabled = hs.config.experimental.msc1849_enabled
+        self._msc3440_enabled = hs.config.experimental.msc3440_enabled
+
     @cached(tree=True)
     async def get_relations_for_event(
         self,
@@ -354,8 +384,7 @@ class RelationsWorkerStore(SQLBaseStore):
     async def get_thread_summary(
         self, event_id: str, room_id: str
     ) -> Tuple[int, Optional[EventBase]]:
-        """Get the number of threaded replies, the senders of those replies, and
-        the latest reply (if any) for the given event.
+        """Get the number of threaded replies and the latest reply (if any) for the given event.

         Args:
             event_id: Summarize the thread related to this event ID.
@@ -368,7 +397,7 @@ class RelationsWorkerStore(SQLBaseStore):
         def _get_thread_summary_txn(
             txn: LoggingTransaction,
         ) -> Tuple[int, Optional[str]]:
-            # Fetch the count of threaded events and the latest event ID.
+            # Fetch the latest event ID in the thread.
             # TODO Should this only allow m.room.message events.
             sql = """
                 SELECT event_id
@@ -389,6 +418,7 @@ class RelationsWorkerStore(SQLBaseStore):

             latest_event_id = row[0]

+            # Fetch the number of threaded replies.
             sql = """
                 SELECT COUNT(event_id)
                 FROM event_relations
@@ -413,6 +443,44 @@ class RelationsWorkerStore(SQLBaseStore):

         return count, latest_event

+    @cached()
+    async def get_thread_participated(
+        self, event_id: str, room_id: str, user_id: str
+    ) -> bool:
+        """Get whether the requesting user participated in a thread.
+
+        This is separate from get_thread_summary since that can be cached across
+        all users while this value is specific to the requester.
+
+        Args:
+            event_id: The thread related to this event ID.
+            room_id: The room the event belongs to.
+            user_id: The user requesting the summary.
+
+        Returns:
+            True if the requesting user participated in the thread, otherwise false.
+        """
+
+        def _get_thread_summary_txn(txn: LoggingTransaction) -> bool:
+            # Fetch whether the requester has participated or not.
+            sql = """
+                SELECT 1
+                FROM event_relations
+                INNER JOIN events USING (event_id)
+                WHERE
+                    relates_to_id = ?
+                    AND room_id = ?
+                    AND relation_type = ?
+                    AND sender = ?
+            """
+
+            txn.execute(sql, (event_id, room_id, RelationTypes.THREAD, user_id))
+            return bool(txn.fetchone())
+
+        return await self.db_pool.runInteraction(
+            "get_thread_summary", _get_thread_summary_txn
+        )
+
     async def events_have_relations(
         self,
         parent_ids: List[str],
@ -515,6 +583,104 @@ class RelationsWorkerStore(SQLBaseStore):
"get_if_user_has_annotated_event", _get_if_user_has_annotated_event "get_if_user_has_annotated_event", _get_if_user_has_annotated_event
) )
    async def _get_bundled_aggregation_for_event(
        self, event: EventBase, user_id: str
    ) -> Optional[Dict[str, Any]]:
        """Generate bundled aggregations for an event.

        Note that this does not use a cache, but depends on cached methods.

        Args:
            event: The event to calculate bundled aggregations for.
            user_id: The user requesting the bundled aggregations.

        Returns:
            The bundled aggregations for an event, if bundled aggregations are
            enabled and the event can have bundled aggregations.
        """
        # State events and redacted events do not get bundled aggregations.
        if event.is_state() or event.internal_metadata.is_redacted():
            return None

        # Do not bundle aggregations for an event which represents an edit or an
        # annotation. It does not make sense for them to have related events.
        relates_to = event.content.get("m.relates_to")
        if isinstance(relates_to, (dict, frozendict)):
            relation_type = relates_to.get("rel_type")
            if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE):
                return None

        event_id = event.event_id
        room_id = event.room_id

        # The bundled aggregations to include, a mapping of relation type to a
        # type-specific value. Some types include the direct return type here
        # while others need more processing during serialization.
        aggregations: Dict[str, Any] = {}

        annotations = await self.get_aggregation_groups_for_event(event_id, room_id)
        if annotations.chunk:
            aggregations[RelationTypes.ANNOTATION] = annotations.to_dict()

        references = await self.get_relations_for_event(
            event_id, room_id, RelationTypes.REFERENCE, direction="f"
        )
        if references.chunk:
            aggregations[RelationTypes.REFERENCE] = references.to_dict()

        edit = None
        if event.type == EventTypes.Message:
            edit = await self.get_applicable_edit(event_id, room_id)

        if edit:
            aggregations[RelationTypes.REPLACE] = edit

        # If this event is the start of a thread, include a summary of the replies.
        if self._msc3440_enabled:
            thread_count, latest_thread_event = await self.get_thread_summary(
                event_id, room_id
            )
            participated = await self.get_thread_participated(
                event_id, room_id, user_id
            )
            if latest_thread_event:
                aggregations[RelationTypes.THREAD] = {
                    "latest_event": latest_thread_event,
                    "count": thread_count,
                    "current_user_participated": participated,
                }

        # Store the bundled aggregations in the event metadata for later use.
        return aggregations
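The early-exit guard at the top of the helper can be illustrated in isolation. A minimal sketch of the "edits and annotations never get bundled aggregations" check (simplified to plain dicts, dropping the `frozendict` case, with the Matrix relation type strings written out literally):

```python
# Hypothetical mini-version of the early-exit logic above, for illustration only.
ANNOTATION = "m.annotation"
REPLACE = "m.replace"

def can_have_bundled_aggregations(content: dict) -> bool:
    """Return False for events that are themselves edits or annotations."""
    relates_to = content.get("m.relates_to")
    if isinstance(relates_to, dict):
        if relates_to.get("rel_type") in (ANNOTATION, REPLACE):
            return False
    return True

print(can_have_bundled_aggregations({"m.relates_to": {"rel_type": "m.replace"}}))  # False
print(can_have_bundled_aggregations({"body": "hello"}))  # True
```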
    async def get_bundled_aggregations(
        self,
        events: Iterable[EventBase],
        user_id: str,
    ) -> Dict[str, Dict[str, Any]]:
        """Generate bundled aggregations for events.

        Args:
            events: The iterable of events to calculate bundled aggregations for.
            user_id: The user requesting the bundled aggregations.

        Returns:
            A map of event ID to the bundled aggregation for the event. Not all
            events may have bundled aggregations in the results.
        """
        # If bundled aggregations are disabled, nothing to do.
        if not self._msc1849_enabled:
            return {}

        # TODO Parallelize.
        results = {}
        for event in events:
            event_result = await self._get_bundled_aggregation_for_event(event, user_id)
            if event_result is not None:
                results[event.event_id] = event_result

        return results
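The `# TODO Parallelize` loop awaits each event in sequence. One way that fan-out could look using `asyncio.gather`, sketched against a stand-in coroutine rather than Synapse's actual helper (the function names, event IDs, and return values here are invented for illustration):

```python
import asyncio
from typing import Any, Dict, Iterable, Optional

async def _aggregation_for_event(event_id: str) -> Optional[Dict[str, Any]]:
    # Stand-in for _get_bundled_aggregation_for_event: pretend only
    # "$root" has any bundled aggregations.
    await asyncio.sleep(0)
    if event_id == "$root":
        return {"m.thread": {"count": 2}}
    return None

async def get_bundled_aggregations(event_ids: Iterable[str]) -> Dict[str, Dict[str, Any]]:
    ids = list(event_ids)
    # gather preserves argument order, so results line up with ids.
    results = await asyncio.gather(*(_aggregation_for_event(e) for e in ids))
    # Drop events with no aggregations, mirroring the serial loop.
    return {e: r for e, r in zip(ids, results) if r is not None}

out = asyncio.run(get_bundled_aggregations(["$root", "$other"]))
print(out)  # {'$root': {'m.thread': {'count': 2}}}
```

Note that with real database-backed coroutines, unbounded `gather` can exhaust connection pools; a semaphore or batching would be worth considering.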
class RelationsStore(RelationsWorkerStore):
    pass
