Merge remote-tracking branch 'upstream/release-v1.48'
Commit 9f4fa40b64: 175 changed files with 6413 additions and 1993 deletions.
.github/workflows/docker.yml (vendored, 5 changes)
@@ -5,7 +5,7 @@ name: Build docker images
 on:
   push:
     tags: ["v*"]
-    branches: [ master, main ]
+    branches: [ master, main, develop ]
   workflow_dispatch:

 permissions:
@@ -38,6 +38,9 @@ jobs:
       id: set-tag
       run: |
         case "${GITHUB_REF}" in
+          refs/heads/develop)
+            tag=develop
+            ;;
           refs/heads/master|refs/heads/main)
             tag=latest
             ;;
CHANGES.md (115 changes)
@@ -1,3 +1,110 @@
+Synapse 1.48.0rc1 (2021-11-25)
+==============================
+
+This release removes support for the long-deprecated `trust_identity_server_for_password_resets` configuration flag.
+
+This release also fixes performance issues with some of the background database updates introduced in Synapse 1.47.0.
+
+Features
+--------
+
+- Experimental support for the thread relation defined in [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440). ([\#11161](https://github.com/matrix-org/synapse/issues/11161))
+- Support filtering by relation senders and types per [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440). ([\#11236](https://github.com/matrix-org/synapse/issues/11236))
+- Add support for the `/_matrix/client/v3` and `/_matrix/media/v3` APIs from Matrix v1.1. ([\#11318](https://github.com/matrix-org/synapse/issues/11318), [\#11371](https://github.com/matrix-org/synapse/issues/11371))
+- Support the stable version of [MSC2778](https://github.com/matrix-org/matrix-doc/pull/2778): the `m.login.application_service` login type. Contributed by @tulir. ([\#11335](https://github.com/matrix-org/synapse/issues/11335))
+- Add a new version of the delete room admin API, `DELETE /_synapse/admin/v2/rooms/<room_id>`, which runs in the background. Contributed by @dklimpel. ([\#11223](https://github.com/matrix-org/synapse/issues/11223))
+- Allow the admin [Delete Room API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#delete-room-api) to block a room without the need to join it. ([\#11228](https://github.com/matrix-org/synapse/issues/11228))
+- Add an admin API to un-shadow-ban a user. ([\#11347](https://github.com/matrix-org/synapse/issues/11347))
+- Add an admin API to run background database schema updates. ([\#11352](https://github.com/matrix-org/synapse/issues/11352))
+- Add an admin API for blocking a room. ([\#11324](https://github.com/matrix-org/synapse/issues/11324))
+- Update the JWT login type to support a custom `sub` claim. ([\#11361](https://github.com/matrix-org/synapse/issues/11361))
+- Store and allow querying of arbitrary event relations. ([\#11391](https://github.com/matrix-org/synapse/issues/11391))
+
+Bugfixes
+--------
+
+- Fix a long-standing bug wherein display names or avatar URLs containing null bytes caused an internal server error when stored in the DB. ([\#11230](https://github.com/matrix-org/synapse/issues/11230))
+- Prevent [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) historical state events from being pushed to an application service via `/transactions`. ([\#11265](https://github.com/matrix-org/synapse/issues/11265))
+- Fix a long-standing bug where uploading extremely thin images (e.g. 1000x1) would fail. Contributed by @Neeeflix. ([\#11288](https://github.com/matrix-org/synapse/issues/11288))
+- Fix a bug, introduced in Synapse 1.46.0, which caused the `check_3pid_auth` and `on_logged_out` callbacks in legacy password authentication provider modules to not be registered. Modules using the generic module interface were not affected. ([\#11340](https://github.com/matrix-org/synapse/issues/11340))
+- Fix a bug introduced in 1.41.0 where space hierarchy responses would be incorrectly reused if multiple users made the same request at the same time. ([\#11355](https://github.com/matrix-org/synapse/issues/11355))
+- Fix a bug introduced in 1.45.0 where the `read_templates` method of the module API would error. ([\#11377](https://github.com/matrix-org/synapse/issues/11377))
+- Fix an issue introduced in 1.47.0 which prevented servers re-joining rooms they had previously left, if their signing keys were replaced. ([\#11379](https://github.com/matrix-org/synapse/issues/11379))
+- Fix a bug introduced in 1.13.0 where creating and publishing a room could cause errors if `room_list_publication_rules` is configured. ([\#11392](https://github.com/matrix-org/synapse/issues/11392))
+- Improve performance of various background database updates. ([\#11421](https://github.com/matrix-org/synapse/issues/11421), [\#11422](https://github.com/matrix-org/synapse/issues/11422))
+
+Improved Documentation
+----------------------
+
+- Suggest users of the Debian packages add configuration to `/etc/matrix-synapse/conf.d/` to prevent, upon upgrade, being asked to choose between their configuration and the maintainer's. ([\#11281](https://github.com/matrix-org/synapse/issues/11281))
+- Fix typos in the documentation for the `username_available` admin API. Contributed by Stanislav Motylkov. ([\#11286](https://github.com/matrix-org/synapse/issues/11286))
+- Add Single Sign-On, SAML and CAS pages to the documentation. ([\#11298](https://github.com/matrix-org/synapse/issues/11298))
+- Change the two-word form 'home server' to the single word 'homeserver' throughout the documentation. ([\#11320](https://github.com/matrix-org/synapse/issues/11320))
+- Fix missing quotes for wildcard domains in `federation_certificate_verification_whitelist`. ([\#11381](https://github.com/matrix-org/synapse/issues/11381))
+
+Deprecations and Removals
+-------------------------
+
+- Remove the deprecated `trust_identity_server_for_password_resets` configuration flag. ([\#11395](https://github.com/matrix-org/synapse/issues/11395))
+
+Internal Changes
+----------------
+
+- Add type annotations to `synapse.metrics`. ([\#10847](https://github.com/matrix-org/synapse/issues/10847))
+- Split out the federated PDU retrieval function into a non-cached version. ([\#11242](https://github.com/matrix-org/synapse/issues/11242))
+- Clean up code relating to to-device messages and sending ephemeral events to application services. ([\#11247](https://github.com/matrix-org/synapse/issues/11247))
+- Fix a small typo in the error response when a relation type other than 'm.annotation' is passed to `GET /rooms/{room_id}/aggregations/{event_id}`. ([\#11278](https://github.com/matrix-org/synapse/issues/11278))
+- Drop the unused database tables `room_stats_historical` and `user_stats_historical`. ([\#11280](https://github.com/matrix-org/synapse/issues/11280))
+- Require all files in synapse/ and tests/ to pass mypy unless specifically excluded. ([\#11282](https://github.com/matrix-org/synapse/issues/11282), [\#11285](https://github.com/matrix-org/synapse/issues/11285), [\#11359](https://github.com/matrix-org/synapse/issues/11359))
+- Add missing type hints to `synapse.app`. ([\#11287](https://github.com/matrix-org/synapse/issues/11287))
+- Remove unused parameters on `FederationEventHandler._check_event_auth`. ([\#11292](https://github.com/matrix-org/synapse/issues/11292))
+- Add type hints to `synapse._scripts`. ([\#11297](https://github.com/matrix-org/synapse/issues/11297))
+- Fix an issue which prevented the `remove_deleted_devices_from_device_inbox` background database schema update from running when updating from a recent Synapse version. ([\#11303](https://github.com/matrix-org/synapse/issues/11303))
+- Add type hints to storage classes. ([\#11307](https://github.com/matrix-org/synapse/issues/11307), [\#11310](https://github.com/matrix-org/synapse/issues/11310), [\#11311](https://github.com/matrix-org/synapse/issues/11311), [\#11312](https://github.com/matrix-org/synapse/issues/11312), [\#11313](https://github.com/matrix-org/synapse/issues/11313), [\#11314](https://github.com/matrix-org/synapse/issues/11314), [\#11316](https://github.com/matrix-org/synapse/issues/11316), [\#11322](https://github.com/matrix-org/synapse/issues/11322), [\#11332](https://github.com/matrix-org/synapse/issues/11332), [\#11339](https://github.com/matrix-org/synapse/issues/11339), [\#11342](https://github.com/matrix-org/synapse/issues/11342))
+- Add type hints to `synapse.util`. ([\#11321](https://github.com/matrix-org/synapse/issues/11321), [\#11328](https://github.com/matrix-org/synapse/issues/11328))
+- Improve type annotations in Synapse's test suite. ([\#11323](https://github.com/matrix-org/synapse/issues/11323), [\#11330](https://github.com/matrix-org/synapse/issues/11330))
+- Test that room alias deletion works as intended. ([\#11327](https://github.com/matrix-org/synapse/issues/11327))
+- Remove the deprecated `trust_identity_server_for_password_resets` configuration flag. ([\#11333](https://github.com/matrix-org/synapse/issues/11333))
+- Add type annotations for some methods and properties in the module API. ([\#11341](https://github.com/matrix-org/synapse/issues/11341))
+- Fix running `scripts-dev/complement.sh`, which was broken in v1.47.0rc1. ([\#11368](https://github.com/matrix-org/synapse/issues/11368))
+- Rename internal functions for token generation to better reflect what they do. ([\#11369](https://github.com/matrix-org/synapse/issues/11369), [\#11370](https://github.com/matrix-org/synapse/issues/11370))
+- Add type hints to configuration classes. ([\#11377](https://github.com/matrix-org/synapse/issues/11377))
+- Publish a `develop` image to Docker Hub. ([\#11380](https://github.com/matrix-org/synapse/issues/11380))
+- Keep a fallback key marked as used if it is re-uploaded. ([\#11382](https://github.com/matrix-org/synapse/issues/11382))
+- Use `auto_attribs` on the `attrs` class `RefreshTokenLookupResult`. ([\#11386](https://github.com/matrix-org/synapse/issues/11386))
+- Rename the unstable `access_token_lifetime` configuration option to `refreshable_access_token_lifetime` to make it clear it only concerns refreshable access tokens. ([\#11388](https://github.com/matrix-org/synapse/issues/11388))
+- Do not run the broken MSC2716 tests when running `scripts-dev/complement.sh`. ([\#11389](https://github.com/matrix-org/synapse/issues/11389))
+- Remove dead code for supporting ACME. ([\#11393](https://github.com/matrix-org/synapse/issues/11393))
+- Refactor how bundled relations are included when serializing an event. ([\#11408](https://github.com/matrix-org/synapse/issues/11408))
+
+
+Synapse 1.47.1 (2021-11-23)
+===========================
+
+This release fixes a security issue in the media store, affecting all prior releases of Synapse. Server administrators are encouraged to update Synapse as soon as possible. We are not aware of these vulnerabilities being exploited in the wild.
+
+Server administrators who are unable to update Synapse may use the workarounds described in the linked GitHub Security Advisory below.
+
+Security advisory
+-----------------
+
+The following issue is fixed in 1.47.1.
+
+- **[GHSA-3hfw-x7gx-437c](https://github.com/matrix-org/synapse/security/advisories/GHSA-3hfw-x7gx-437c) / [CVE-2021-41281](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41281): Path traversal when downloading remote media.**
+
+  Synapse instances with the media repository enabled can be tricked into downloading a file from a remote server into an arbitrary directory, potentially outside the media store directory.
+
+  The last two directories and file name of the path are chosen randomly by Synapse and cannot be controlled by an attacker, which limits the impact.
+
+  Homeservers with the media repository disabled are unaffected. Homeservers configured with a federation whitelist are also unaffected.
+
+  Fixed by [91f2bd090](https://github.com/matrix-org/synapse/commit/91f2bd090).
+
+
 Synapse 1.47.0 (2021-11-17)
 ===========================

@@ -8694,14 +8801,14 @@ General:

 Federation:

-- Add key distribution mechanisms for fetching public keys of unavailable remote home servers. See [Retrieving Server Keys](https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys) in the spec.
+- Add key distribution mechanisms for fetching public keys of unavailable remote homeservers. See [Retrieving Server Keys](https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys) in the spec.

 Configuration:

 - Add support for multiple config files.
 - Add support for dictionaries in config files.
 - Remove support for specifying config options on the command line, except for:
-  - `--daemonize` - Daemonize the home server.
+  - `--daemonize` - Daemonize the homeserver.
   - `--manhole` - Turn on the twisted telnet manhole service on the given port.
   - `--database-path` - The path to a sqlite database to use.
   - `--verbose` - The verbosity level.

@@ -8906,7 +9013,7 @@ This version adds support for using a TURN server. See docs/turn-howto.rst on ho

 Homeserver:

 - Add support for redaction of messages.
-- Fix bug where inviting a user on a remote home server could take up to 20-30s.
+- Fix bug where inviting a user on a remote homeserver could take up to 20-30s.
 - Implement a get current room state API.
 - Add support specifying and retrieving turn server configuration.

@@ -8996,7 +9103,7 @@ Changes in synapse 0.2.3 (2014-09-12)

 Homeserver:

-- Fix bug where we stopped sending events to remote home servers if a user from that home server left, even if there were some still in the room.
+- Fix bug where we stopped sending events to remote homeservers if a user from that homeserver left, even if there were some still in the room.
 - Fix bugs in the state conflict resolution where it was incorrectly rejecting events.

 Webclient:
debian/changelog (vendored, 12 changes)
@@ -1,3 +1,15 @@
+matrix-synapse-py3 (1.48.0~rc1) stable; urgency=medium
+
+  * New synapse release 1.48.0~rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Thu, 25 Nov 2021 15:56:03 +0000
+
+matrix-synapse-py3 (1.47.1) stable; urgency=medium
+
+  * New synapse release 1.47.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Fri, 19 Nov 2021 13:44:32 +0000
+
 matrix-synapse-py3 (1.47.0) stable; urgency=medium

   * New synapse release 1.47.0.
@@ -148,14 +148,6 @@ bcrypt_rounds: 12

 allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
 enable_group_creation: true

-# The list of identity servers trusted to verify third party
-# identifiers by this server.
-#
-# Also defines the ID server which will be called when an account is
-# deactivated (one will be picked arbitrarily).
-trusted_third_party_id_servers:
-  - matrix.org
-  - vector.im
-
 ## Metrics ###

@@ -48,7 +48,7 @@ WORKERS_CONFIG = {
         "app": "synapse.app.user_dir",
         "listener_resources": ["client"],
         "endpoint_patterns": [
-            "^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$"
+            "^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$"
         ],
         "shared_extra_conf": {"update_user_directory": False},
         "worker_extra_conf": "",
@@ -85,10 +85,10 @@ WORKERS_CONFIG = {
         "app": "synapse.app.generic_worker",
         "listener_resources": ["client"],
         "endpoint_patterns": [
-            "^/_matrix/client/(v2_alpha|r0)/sync$",
-            "^/_matrix/client/(api/v1|v2_alpha|r0)/events$",
-            "^/_matrix/client/(api/v1|r0)/initialSync$",
-            "^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$",
+            "^/_matrix/client/(v2_alpha|r0|v3)/sync$",
+            "^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$",
+            "^/_matrix/client/(api/v1|r0|v3)/initialSync$",
+            "^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$",
         ],
         "shared_extra_conf": {},
         "worker_extra_conf": "",
@@ -146,11 +146,11 @@ WORKERS_CONFIG = {
         "app": "synapse.app.generic_worker",
         "listener_resources": ["client"],
         "endpoint_patterns": [
-            "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact",
-            "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
-            "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
-            "^/_matrix/client/(api/v1|r0|unstable)/join/",
-            "^/_matrix/client/(api/v1|r0|unstable)/profile/",
+            "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact",
+            "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send",
+            "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
+            "^/_matrix/client/(api/v1|r0|v3|unstable)/join/",
+            "^/_matrix/client/(api/v1|r0|v3|unstable)/profile/",
         ],
         "shared_extra_conf": {},
         "worker_extra_conf": "",
@@ -158,7 +158,7 @@ WORKERS_CONFIG = {
     "frontend_proxy": {
         "app": "synapse.app.frontend_proxy",
         "listener_resources": ["client", "replication"],
-        "endpoint_patterns": ["^/_matrix/client/(api/v1|r0|unstable)/keys/upload"],
+        "endpoint_patterns": ["^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload"],
         "shared_extra_conf": {},
         "worker_extra_conf": (
             "worker_main_http_uri: http://127.0.0.1:%d"
@@ -50,8 +50,10 @@ build the documentation with:
 mdbook build
 ```

-The rendered contents will be outputted to a new `book/` directory at the root of the repository. You can
-browse the book by opening `book/index.html` in a web browser.
+The rendered contents will be output to a new `book/` directory at the root of the repository. Please note that
+`index.html` is not built by default; it is created by copying `welcome_and_overview.html` to `index.html`
+during deployment. Thus, when running `mdbook serve` locally the book will initially show a 404 in place of the
+index. Do not be alarmed!

 You can also have mdbook host the docs on a local webserver with hot-reload functionality via:
@@ -23,10 +23,10 @@
   - [Structured Logging](structured_logging.md)
   - [Templates](templates.md)
   - [User Authentication](usage/configuration/user_authentication/README.md)
-    - [Single-Sign On]()
+    - [Single-Sign On](usage/configuration/user_authentication/single_sign_on/README.md)
       - [OpenID Connect](openid.md)
-      - [SAML]()
-      - [CAS]()
+      - [SAML](usage/configuration/user_authentication/single_sign_on/saml.md)
+      - [CAS](usage/configuration/user_authentication/single_sign_on/cas.md)
   - [SSO Mapping Providers](sso_mapping_providers.md)
   - [Password Auth Providers](password_auth_providers.md)
   - [JSON Web Tokens](jwt.md)
@@ -70,6 +70,8 @@ This API returns a JSON body like the following:

 The status will be one of `active`, `complete`, or `failed`.

+If `status` is `failed` there will be a string `error` with the error message.
+
 ## Reclaim disk space (Postgres)

 To reclaim the disk space and return it to the operating system, you need to run
@@ -3,7 +3,11 @@
 - [Room Details API](#room-details-api)
 - [Room Members API](#room-members-api)
 - [Room State API](#room-state-api)
+- [Block Room API](#block-room-api)
 - [Delete Room API](#delete-room-api)
+  * [Version 1 (old version)](#version-1-old-version)
+  * [Version 2 (new version)](#version-2-new-version)
+  * [Status of deleting rooms](#status-of-deleting-rooms)
   * [Undoing room shutdowns](#undoing-room-shutdowns)
 - [Make Room Admin API](#make-room-admin-api)
 - [Forward Extremities Admin API](#forward-extremities-admin-api)
@@ -383,6 +387,83 @@ A response body like the following is returned:
 }
 ```

+# Block Room API
+
+The Block Room admin API allows server admins to block and unblock rooms,
+and query to see if a given room is blocked.
+This API can be used to pre-emptively block a room, even if it's unknown to this
+homeserver. Users will be prevented from joining a blocked room.
+
+## Block or unblock a room
+
+The API is:
+
+```
+PUT /_synapse/admin/v1/rooms/<room_id>/block
+```
+
+with a body of:
+
+```json
+{
+    "block": true
+}
+```
+
+A response body like the following is returned:
+
+```json
+{
+    "block": true
+}
+```
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- `room_id` - The ID of the room.
+
+The following JSON body parameters are available:
+
+- `block` - If `true` the room will be blocked and if `false` the room will be unblocked.
+
+**Response**
+
+The following fields are possible in the JSON response body:
+
+- `block` - A boolean. `true` if the room is blocked, otherwise `false`.
+
+## Get block status
+
+The API is:
+
+```
+GET /_synapse/admin/v1/rooms/<room_id>/block
+```
+
+A response body like the following is returned:
+
+```json
+{
+    "block": true,
+    "user_id": "<user_id>"
+}
+```
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- `room_id` - The ID of the room.
+
+**Response**
+
+The following fields are possible in the JSON response body:
+
+- `block` - A boolean. `true` if the room is blocked, otherwise `false`.
+- `user_id` - An optional string. If the room is blocked (`block` is `true`), shows
+  the user who added the room to the blocking list. Otherwise it is not displayed.
+
 # Delete Room API

 The Delete Room admin API allows server admins to remove rooms from the server
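Aside (not part of the diff): the Block Room API added in the hunk above is small enough to exercise end to end. A minimal sketch using the `requests` library; the homeserver URL and the admin access token are placeholders, not values from this change:

```python
from urllib.parse import quote

import requests

BASE = "https://synapse.example.com"  # assumed homeserver address
HEADERS = {"Authorization": "Bearer <admin access token>"}  # placeholder
room_id = "!badroom:example.com"

# Block the room; per the docs above this works even for rooms
# this homeserver has never seen.
resp = requests.put(
    f"{BASE}/_synapse/admin/v1/rooms/{quote(room_id)}/block",
    headers=HEADERS,
    json={"block": True},
)
print(resp.json())  # expected: {"block": true}

# Query the block status; `user_id` only appears while the room is blocked.
status = requests.get(
    f"{BASE}/_synapse/admin/v1/rooms/{quote(room_id)}/block",
    headers=HEADERS,
).json()
print(status.get("block"), status.get("user_id"))
```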
@@ -396,18 +477,33 @@ The new room will be created with the user specified by the `new_room_user_id` p
 as room administrator and will contain a message explaining what happened. Users invited
 to the new room will have power level `-10` by default, and thus be unable to speak.

-If `block` is `True` it prevents new joins to the old room.
+If `block` is `true`, users will be prevented from joining the old room.
+This option can in [Version 1](#version-1-old-version) also be used to pre-emptively
+block a room, even if it's unknown to this homeserver. In this case, the room will be
+blocked, and no further action will be taken. If `block` is `false`, attempting to
+delete an unknown room is invalid and will be rejected as a bad request.

 This API will remove all trace of the old room from your database after removing
 all local users. If `purge` is `true` (the default), all traces of the old room will
 be removed from your database after removing all local users. If you do not want
 this to happen, set `purge` to `false`.
-Depending on the amount of history being purged a call to the API may take
+Depending on the amount of history being purged, a call to the API may take
 several minutes or longer.

 The local server will only have the power to move local user and room aliases to
 the new room. Users on other servers will be unaffected.

+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see [Admin API](../usage/administration/admin_api).
+
+## Version 1 (old version)
+
+This version works synchronously. That means you only get the response once the server has
+finished the action, which may take a long time. If you request the same action
+a second time, and the server has not finished the first one, the second request will block.
+This is fixed in version 2 of this API. The parameters are the same in both APIs.
+This API will become deprecated in the future.
+
 The API is:

 ```
@@ -426,9 +522,6 @@ with a body of:
 }
 ```

-To use it, you will need to authenticate by providing an ``access_token`` for a
-server admin: see [Admin API](../usage/administration/admin_api).
-
 A response body like the following is returned:

 ```json
@@ -445,6 +538,44 @@ A response body like the following is returned:
 }
 ```

+The parameters and response values have the same format as
+[version 2](#version-2-new-version) of the API.
+
+## Version 2 (new version)
+
+**Note**: This API is new, experimental and "subject to change".
+
+This version works asynchronously, meaning you get a response from the server immediately
+while the server works on the task in the background. You can then request the status of
+the action to check if it has completed.
+
+The API is:
+
+```
+DELETE /_synapse/admin/v2/rooms/<room_id>
+```
+
+with a body of:
+
+```json
+{
+    "new_room_user_id": "@someuser:example.com",
+    "room_name": "Content Violation Notification",
+    "message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
+    "block": true,
+    "purge": true
+}
+```
+
+The API starts the shut down and purge running, and returns immediately with a JSON body with
+a purge id:
+
+```json
+{
+    "delete_id": "<opaque id>"
+}
+```
+
 **Parameters**

 The following parameters should be set in the URL:
@@ -464,8 +595,10 @@ The following JSON body parameters are available:
   `new_room_user_id` in the new room. Ideally this will clearly convey why the
   original room was shut down. Defaults to `Sharing illegal content on this server
   is not permitted and rooms in violation will be blocked.`
-* `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing
-  future attempts to join the room. Defaults to `false`.
+* `block` - Optional. If set to `true`, this room will be added to a blocking list,
+  preventing future attempts to join the room. Rooms can be blocked
+  even if they're not yet known to the homeserver (only with
+  [Version 1](#version-1-old-version) of the API). Defaults to `false`.
 * `purge` - Optional. If set to `true`, it will remove all traces of the room from your database.
   Defaults to `true`.
 * `force_purge` - Optional, and ignored unless `purge` is `true`. If set to `true`, it
@@ -475,16 +608,124 @@ The following JSON body parameters are available:

 The JSON body must not be empty. The body must be at least `{}`.

-**Response**
+## Status of deleting rooms
+
+**Note**: This API is new, experimental and "subject to change".
+
+It is possible to query the status of the background task for deleting rooms.
+The status can be queried up to 24 hours after completion of the task,
+or until Synapse is restarted (whichever happens first).
+
+### Query by `room_id`
+
+With this API you can get the status of all active deletion tasks, and all those completed in the last 24h,
+for the given `room_id`.
+
+The API is:
+
+```
+GET /_synapse/admin/v2/rooms/<room_id>/delete_status
+```
+
+A response body like the following is returned:
+
+```json
+{
+    "results": [
+        {
+            "delete_id": "delete_id1",
+            "status": "failed",
+            "error": "error message",
+            "shutdown_room": {
+                "kicked_users": [],
+                "failed_to_kick_users": [],
+                "local_aliases": [],
+                "new_room_id": null
+            }
+        }, {
+            "delete_id": "delete_id2",
+            "status": "purging",
+            "shutdown_room": {
+                "kicked_users": [
+                    "@foobar:example.com"
+                ],
+                "failed_to_kick_users": [],
+                "local_aliases": [
+                    "#badroom:example.com",
+                    "#evilsaloon:example.com"
+                ],
+                "new_room_id": "!newroomid:example.com"
+            }
+        }
+    ]
+}
+```
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+* `room_id` - The ID of the room.
+
+### Query by `delete_id`
+
+With this API you can get the status of one specific task by `delete_id`.
+
+The API is:
+
+```
+GET /_synapse/admin/v2/rooms/delete_status/<delete_id>
+```
+
+A response body like the following is returned:
+
+```json
+{
+    "status": "purging",
+    "shutdown_room": {
+        "kicked_users": [
+            "@foobar:example.com"
+        ],
+        "failed_to_kick_users": [],
+        "local_aliases": [
+            "#badroom:example.com",
+            "#evilsaloon:example.com"
+        ],
+        "new_room_id": "!newroomid:example.com"
+    }
+}
+```
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+* `delete_id` - The ID for this delete.
+
+### Response

 The following fields are returned in the JSON response body:

-* `kicked_users` - An array of users (`user_id`) that were kicked.
-* `failed_to_kick_users` - An array of users (`user_id`) that that were not kicked.
-* `local_aliases` - An array of strings representing the local aliases that were migrated from
-  the old room to the new.
-* `new_room_id` - A string representing the room ID of the new room.
+- `results` - An array of objects, each containing information about one task.
+  This field is omitted from the result when you query by `delete_id`.
+  Task objects contain the following fields:
+  - `delete_id` - The ID for this purge if you queried by `room_id`.
+  - `status` - The status will be one of:
+    - `shutting_down` - The process is removing users from the room.
+    - `purging` - The process is purging the room and event data from the database.
+    - `complete` - The process has completed successfully.
+    - `failed` - The process was aborted; an error occurred.
+  - `error` - A string that shows an error message if `status` is `failed`.
+    Otherwise this field is hidden.
+  - `shutdown_room` - An object containing information about the result of shutting down the room.
+    *Note:* The result is shown after removing the room members.
+    The delete process can still be running. Please pay attention to the `status`.
+    - `kicked_users` - An array of users (`user_id`) that were kicked.
+    - `failed_to_kick_users` - An array of users (`user_id`) that were not kicked.
+    - `local_aliases` - An array of strings representing the local aliases that were
+      migrated from the old room to the new.
+    - `new_room_id` - A string representing the room ID of the new room, or `null` if
+      no such room was created.

 ## Undoing room deletions
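Aside (not part of the diff): the asynchronous flow documented above comes together as "submit the v2 DELETE, then poll the status endpoint with the returned `delete_id`". A sketch with `requests`; homeserver URL and token are placeholders:

```python
import time
from urllib.parse import quote

import requests

BASE = "https://synapse.example.com"  # assumed homeserver address
HEADERS = {"Authorization": "Bearer <admin access token>"}  # placeholder
room_id = "!badroom:example.com"

# Kick off the background shutdown/purge; the body must be at least {}.
resp = requests.delete(
    f"{BASE}/_synapse/admin/v2/rooms/{quote(room_id)}",
    headers=HEADERS,
    json={"block": True, "purge": True},
)
delete_id = resp.json()["delete_id"]

# Poll until the task reaches a terminal state. Per the docs, status is
# retained for up to 24 hours or until Synapse restarts.
while True:
    status = requests.get(
        f"{BASE}/_synapse/admin/v2/rooms/delete_status/{quote(delete_id)}",
        headers=HEADERS,
    ).json()
    if status["status"] in ("complete", "failed"):
        break
    time.sleep(5)
print(status)
```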
@@ -948,7 +948,7 @@ The following fields are returned in the JSON response body:
 See also the
 [Client-Server API Spec on pushers](https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers).

-## Shadow-banning users
+## Controlling whether a user is shadow-banned

 Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
 A shadow-banned user receives successful responses to their client-server API requests,
@@ -961,16 +961,22 @@ or broken behaviour for the client. A shadow-banned user will not receive any
 notification and it is generally more appropriate to ban or kick abusive users.
 A shadow-banned user will be unable to contact anyone on the server.

-The API is:
+To shadow-ban a user the API is:

 ```
 POST /_synapse/admin/v1/users/<user_id>/shadow_ban
 ```

+To un-shadow-ban a user the API is:
+
+```
+DELETE /_synapse/admin/v1/users/<user_id>/shadow_ban
+```
+
 To use it, you will need to authenticate by providing an `access_token` for a
 server admin: [Admin API](../usage/administration/admin_api)

-An empty JSON dict is returned.
+An empty JSON dict is returned in both cases.

 **Parameters**
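Aside (not part of the diff): the two shadow-ban endpoints above differ only in HTTP method, so a client wrapper is tiny. A sketch with `requests`; homeserver URL and token are placeholders:

```python
from urllib.parse import quote

import requests

BASE = "https://synapse.example.com"  # assumed homeserver address
HEADERS = {"Authorization": "Bearer <admin access token>"}  # placeholder
user_id = "@abuser:example.com"
url = f"{BASE}/_synapse/admin/v1/users/{quote(user_id)}/shadow_ban"

requests.post(url, headers=HEADERS)    # shadow-ban; returns {}
requests.delete(url, headers=HEADERS)  # un-shadow-ban (new); returns {}
```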
@@ -1107,7 +1113,7 @@ This endpoint will work even if registration is disabled on the server, unlike

 The API is:

 ```
-POST /_synapse/admin/v1/username_availabile?username=$localpart
+GET /_synapse/admin/v1/username_available?username=$localpart
 ```

 The request and response format is the same as the [/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
@@ -7,7 +7,7 @@

 ## Server to Server Stack

-To use the server to server stack, home servers should only need to
+To use the server to server stack, homeservers should only need to
 interact with the Messaging layer.

 The server to server side of things is designed into 4 distinct layers:
@@ -23,7 +23,7 @@ Server with a domain specific API.

 1. **Messaging Layer**

-   This is what the rest of the Home Server hits to send messages, join rooms,
+   This is what the rest of the homeserver hits to send messages, join rooms,
    etc. It also allows you to register callbacks for when it gets notified by
    lower levels that e.g. a new message has been received.

@@ -45,7 +45,7 @@ Server with a domain specific API.

    For incoming PDUs, it has to check the PDUs it references to see
    if we have missed any. If we have, go and ask someone (another
-   home server) for it.
+   homeserver) for it.

 3. **Transaction Layer**
@@ -22,8 +22,9 @@ will be removed in a future version of Synapse.

 The `token` field should include the JSON web token with the following claims:

-* The `sub` (subject) claim is required and should encode the local part of the
-  user ID.
+* A claim that encodes the local part of the user ID is required. By default,
+  the `sub` (subject) claim is used, or a custom claim can be set in the
+  configuration file.
 * The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
   claims are optional, but validated if present.
 * The issuer (`iss`) claim is optional, but required and validated if configured.
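Aside (not part of the diff): a sketch of minting such a token with PyJWT. The shared secret is a placeholder, and the `"username"` claim name is a hypothetical example of the new custom-claim support (it would have to match a corresponding `subject_claim` setting in the homeserver config):

```python
import time

import jwt  # PyJWT

SECRET = "<shared secret configured for JWT login>"  # placeholder

claims = {
    "username": "alice",            # localpart, read via the configured claim
    "iat": int(time.time()),        # optional, validated if present
    "exp": int(time.time()) + 300,  # optional, validated if present
}
token = jwt.encode(claims, SECRET, algorithm="HS256")

# The token is then sent in the `token` field of a /login request
# using Synapse's JWT login type.
print(token)
```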
@@ -1,7 +1,7 @@
 <h2 style="color:red">
 This page of the Synapse documentation is now deprecated. For up to date
 documentation on setting up or writing a password auth provider module, please see
-<a href="modules.md">this page</a>.
+<a href="modules/index.md">this page</a>.
 </h2>

 # Password auth provider modules
@@ -647,8 +647,8 @@ retention:
 #
 #federation_certificate_verification_whitelist:
 #  - lon.example.com
-#  - *.domain.com
-#  - *.onion
+#  - "*.domain.com"
+#  - "*.onion"

 # List of custom certificate authorities for federation traffic.
 #
@@ -2039,6 +2039,12 @@ sso:
   #
   #algorithm: "provided-by-your-issuer"

+  # Name of the claim containing a unique identifier for the user.
+  #
+  # Optional, defaults to `sub`.
+  #
+  #subject_claim: "sub"
+
   # The issuer to validate the "iss" claim against.
   #
   # Optional, if provided the "iss" claim will be required and
@@ -2360,8 +2366,8 @@ user_directory:
   # indexes were (re)built was before Synapse 1.44, you'll have to
   # rebuild the indexes in order to search through all known users.
   # These indexes are built the first time Synapse starts; admins can
-  # manually trigger a rebuild following the instructions at
-  # https://matrix-org.github.io/synapse/latest/user_directory.html
+  # manually trigger a rebuild via API following the instructions at
+  # https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
   #
   # Uncomment to return search results containing all known users, even if that
   # user does not share a room with the requester.
@@ -76,6 +76,12 @@ The fingerprint of the repository signing key (as shown by `gpg
 /usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
 `AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.

+When installing with Debian packages, you might prefer to place files in
+`/etc/matrix-synapse/conf.d/` to override your configuration without editing
+the main configuration file at `/etc/matrix-synapse/homeserver.yaml`.
+By doing that, you won't be asked if you want to replace your configuration
+file when you upgrade the Debian package to a later version.
+
 ##### Downstream Debian packages

 We do not recommend using the packages from the default Debian `buster`
@@ -1,12 +1,12 @@
 # Overview

-This document explains how to enable VoIP relaying on your Home Server with
+This document explains how to enable VoIP relaying on your homeserver with
 TURN.

-The synapse Matrix Home Server supports integration with TURN server via the
+The synapse Matrix homeserver supports integration with TURN server via the
 [TURN server REST API](<https://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This
-allows the Home Server to generate credentials that are valid for use on the
-TURN server through the use of a secret shared between the Home Server and the
+allows the homeserver to generate credentials that are valid for use on the
+TURN server through the use of a secret shared between the homeserver and the
 TURN server.

 The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API) and integrate it with synapse.
@@ -165,18 +165,18 @@ This will install and start a systemd service called `coturn`.

 ## Synapse setup

-Your home server configuration file needs the following extra keys:
+Your homeserver configuration file needs the following extra keys:

 1. "`turn_uris`": This needs to be a yaml list of public-facing URIs
    for your TURN server to be given out to your clients. Add separate
    entries for each transport your TURN server supports.
 2. "`turn_shared_secret`": This is the secret shared between your
-   Home server and your TURN server, so you should set it to the same
+   homeserver and your TURN server, so you should set it to the same
    string you used in turnserver.conf.
 3. "`turn_user_lifetime`": This is the amount of time credentials
-   generated by your Home Server are valid for (in milliseconds).
+   generated by your homeserver are valid for (in milliseconds).
    Shorter times offer less potential for abuse at the expense of
-   increased traffic between web clients and your home server to
+   increased traffic between web clients and your homeserver to
    refresh credentials. The TURN REST API specification recommends
    one day (86400000).
 4. "`turn_allow_guests`": Whether to allow guest users to use the
@@ -220,7 +220,7 @@ Here are a few things to try:
   anyone who has successfully set this up.

 * Check that you have opened your firewall to allow TCP and UDP traffic to the
-  TURN ports (normally 3478 and 5479).
+  TURN ports (normally 3478 and 5349).

 * Check that you have opened your firewall to allow UDP traffic to the UDP
   relay ports (49152-65535 by default).
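Aside (not part of the diff): under the TURN REST API scheme referenced above, the shared secret is used to derive short-lived credentials rather than being sent to clients. A sketch of the mechanism as described in the draft spec (an HMAC-SHA1 over an "<expiry>:<user>" username, keyed with the shared secret); this is an illustration of the protocol, not Synapse's actual code:

```python
import base64
import hashlib
import hmac
import time

shared_secret = "<same string as static-auth-secret in turnserver.conf>"
turn_user_lifetime_ms = 86400000  # one day, as recommended above

# Username encodes the expiry time; password is an HMAC over the username.
expiry = int(time.time()) + turn_user_lifetime_ms // 1000
username = f"{expiry}:@alice:example.com"
password = base64.b64encode(
    hmac.new(shared_secret.encode(), username.encode(), hashlib.sha1).digest()
).decode()

# The TURN server recomputes the same HMAC to verify the credentials,
# so no per-user state needs to be shared between homeserver and coturn.
print(username, password)
```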
@@ -42,7 +42,6 @@ For each update:
 `average_items_per_ms` how many items are processed per millisecond based on an exponential average.

-
 ## Enabled

 This API allows pausing background updates.
@@ -82,3 +81,29 @@ The API returns the `enabled` param.
 ```

 There is also a `GET` version which returns the `enabled` state.
+
+## Run
+
+This API schedules a specific background update to run. The job starts immediately after calling the API.
+
+The API is:
+
+```
+POST /_synapse/admin/v1/background_updates/start_job
+```
+
+with the following body:
+
+```json
+{
+    "job_name": "populate_stats_process_rooms"
+}
+```
+
+The following JSON body parameters are available:
+
+- `job_name` - A string specifying which job to run. Valid values are:
+  - `populate_stats_process_rooms` - Recalculate the stats for all rooms.
+  - `regenerate_directory` - Recalculate the [user directory](../../../user_directory.md) if it is stale or out of sync.
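Aside (not part of the diff): kicking off one of the jobs listed above via the new Run endpoint takes a single authenticated POST. A sketch with `requests`; homeserver URL and token are placeholders:

```python
import requests

BASE = "https://synapse.example.com"  # assumed homeserver address
HEADERS = {"Authorization": "Bearer <admin access token>"}  # placeholder

# Schedule the user-directory rebuild; the job runs in the background.
resp = requests.post(
    f"{BASE}/_synapse/admin/v1/background_updates/start_job",
    headers=HEADERS,
    json={"job_name": "regenerate_directory"},
)
resp.raise_for_status()
```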
usage/configuration/user_authentication/single_sign_on/README.md (new file)
@@ -0,0 +1,5 @@
+# Single Sign-On
+
+Synapse supports single sign-on through the SAML, Open ID Connect or CAS protocols.
+LDAP and other login methods are supported through first and third-party password
+auth provider modules.
usage/configuration/user_authentication/single_sign_on/cas.md (new file)
@@ -0,0 +1,8 @@
+# CAS
+
+Synapse supports authenticating users via the [Central Authentication
+Service protocol](https://en.wikipedia.org/wiki/Central_Authentication_Service)
+(CAS) natively.
+
+Please see the `cas_config` and `sso` sections of the [Synapse configuration
+file](../../../configuration/homeserver_sample_config.md) for more details.
usage/configuration/user_authentication/single_sign_on/saml.md (new file)
@@ -0,0 +1,8 @@
+# SAML
+
+Synapse supports authenticating users via the [Security Assertion
+Markup Language](https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language)
+(SAML) protocol natively.
+
+Please see the `saml2_config` and `sso` sections of the [Synapse configuration
+file](../../../configuration/homeserver_sample_config.md) for more details.
@@ -6,9 +6,9 @@ on this particular server - i.e. ones which your account shares a room with, or
 who are present in a publicly viewable room present on the server.

 The directory info is stored in various tables, which can (typically after
 DB corruption) get stale or out of sync. If this happens, for now the
-solution to fix it is to execute the SQL [here](https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/main/delta/53/user_dir_populate.sql)
-and then restart synapse. This should then start a background task to
+solution to fix it is to use the [admin API](usage/administration/admin_api/background_updates.md#run)
+and execute the job `regenerate_directory`. This should then start a background task to
 flush the current tables and regenerate the directory.

 Data model
@ -182,10 +182,10 @@ This worker can handle API requests matching the following regular
|
||||||
expressions:
|
expressions:
|
||||||
|
|
||||||
# Sync requests
|
# Sync requests
|
||||||
^/_matrix/client/(v2_alpha|r0)/sync$
|
^/_matrix/client/(v2_alpha|r0|v3)/sync$
|
||||||
^/_matrix/client/(api/v1|v2_alpha|r0)/events$
|
^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$
|
||||||
^/_matrix/client/(api/v1|r0)/initialSync$
|
^/_matrix/client/(api/v1|r0|v3)/initialSync$
|
||||||
^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
|
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
|
||||||
|
|
||||||
# Federation requests
|
# Federation requests
|
||||||
^/_matrix/federation/v1/event/
|
^/_matrix/federation/v1/event/
|
||||||
|
@@ -216,40 +216,40 @@ expressions:
     ^/_matrix/federation/v1/send/

     # Client API requests
-    ^/_matrix/client/(api/v1|r0|unstable)/createRoom$
-    ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/createRoom$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicRooms$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/joined_members$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
     ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
     ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/hierarchy$
     ^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
-    ^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
-    ^/_matrix/client/(api/v1|r0|unstable)/devices$
-    ^/_matrix/client/(api/v1|r0|unstable)/keys/query$
-    ^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/devices$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/query$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/changes$
     ^/_matrix/client/versions$
-    ^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
-    ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
-    ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
-    ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/
-    ^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$
-    ^/_matrix/client/(api/v1|r0|unstable)/search$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_groups$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/search$

     # Registration/login requests
-    ^/_matrix/client/(api/v1|r0|unstable)/login$
-    ^/_matrix/client/(r0|unstable)/register$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/login$
+    ^/_matrix/client/(r0|v3|unstable)/register$
     ^/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity$

     # Event sending requests
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
-    ^/_matrix/client/(api/v1|r0|unstable)/join/
-    ^/_matrix/client/(api/v1|r0|unstable)/profile/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/join/
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/profile/


 Additionally, the following REST endpoints can be handled for GET requests:
@@ -261,14 +261,14 @@ room must be routed to the same instance. Additionally, care must be taken to
 ensure that the purge history admin API is not used while pagination requests
 for the room are in flight:

-    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/messages$

 Additionally, the following endpoints should be included if Synapse is configured
 to use SSO (you only need to include the ones for whichever SSO provider you're
 using):

     # for all SSO providers
-    ^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/login/sso/redirect
     ^/_synapse/client/pick_idp$
     ^/_synapse/client/pick_username
     ^/_synapse/client/new_user_consent$
@@ -281,7 +281,7 @@ using):
     ^/_synapse/client/saml2/authn_response$

     # CAS requests.
-    ^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/login/cas/ticket$

 Ensure that all SSO logins go to a single process.
 For multiple workers not handling the SSO endpoints properly, see
@@ -465,7 +465,7 @@ Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for
 Handles searches in the user directory. It can handle REST endpoints matching
 the following regular expressions:

-    ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$

 When using this worker you must also set `update_user_directory: False` in the
 shared configuration file to stop the main synapse running background
@@ -477,12 +477,12 @@ Proxies some frequently-requested client endpoints to add caching and remove
 load from the main synapse. It can handle REST endpoints matching the following
 regular expressions:

-    ^/_matrix/client/(api/v1|r0|unstable)/keys/upload
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload

 If `use_presence` is False in the homeserver config, it can also handle REST
 endpoints matching the following regular expressions:

-    ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
+    ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/[^/]+/status

 This "stub" presence handler will pass through `GET` requests but make the
 `PUT` effectively a no-op.
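As an illustration of what the `v3` additions in the worker patterns above change (a sketch, not part of the diff): the expressions are anchored regexes, so a Matrix v1.1 client calling the `/v3` paths is only routed to a worker once `v3` appears in the alternation. A quick check in Python:

    import re

    # Two of the sync-worker patterns from workers.md, before and after this release.
    OLD = re.compile(r"^/_matrix/client/(v2_alpha|r0)/sync$")
    NEW = re.compile(r"^/_matrix/client/(v2_alpha|r0|v3)/sync$")

    path = "/_matrix/client/v3/sync"      # the path a Matrix v1.1 client requests
    assert OLD.match(path) is None        # old pattern: falls through to the main process
    assert NEW.match(path) is not None    # new pattern: routed to the sync worker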
mypy.ini
@@ -10,86 +10,150 @@ warn_unreachable = True
 local_partial_types = True
 no_implicit_optional = True

-# To find all folders that pass mypy you run:
-#
-# find synapse/* -type d -not -name __pycache__ -exec bash -c "mypy '{}' > /dev/null" \; -print
-
 files =
   scripts-dev/sign_json,
-  synapse/__init__.py,
-  synapse/api,
-  synapse/appservice,
-  synapse/config,
-  synapse/crypto,
-  synapse/event_auth.py,
-  synapse/events,
-  synapse/federation,
-  synapse/groups,
-  synapse/handlers,
-  synapse/http,
-  synapse/logging,
-  synapse/metrics,
-  synapse/module_api,
-  synapse/notifier.py,
-  synapse/push,
-  synapse/replication,
-  synapse/rest,
-  synapse/server.py,
-  synapse/server_notices,
-  synapse/spam_checker_api,
-  synapse/state,
-  synapse/storage/__init__.py,
-  synapse/storage/_base.py,
-  synapse/storage/background_updates.py,
-  synapse/storage/databases/main/appservice.py,
-  synapse/storage/databases/main/client_ips.py,
-  synapse/storage/databases/main/events.py,
-  synapse/storage/databases/main/keys.py,
-  synapse/storage/databases/main/pusher.py,
-  synapse/storage/databases/main/registration.py,
-  synapse/storage/databases/main/relations.py,
-  synapse/storage/databases/main/session.py,
-  synapse/storage/databases/main/stream.py,
-  synapse/storage/databases/main/ui_auth.py,
-  synapse/storage/databases/state,
-  synapse/storage/database.py,
-  synapse/storage/engines,
-  synapse/storage/keys.py,
-  synapse/storage/persist_events.py,
-  synapse/storage/prepare_database.py,
-  synapse/storage/purge_events.py,
-  synapse/storage/push_rule.py,
-  synapse/storage/relations.py,
-  synapse/storage/roommember.py,
-  synapse/storage/state.py,
-  synapse/storage/types.py,
-  synapse/storage/util,
-  synapse/streams,
-  synapse/types.py,
-  synapse/util,
-  synapse/visibility.py,
-  tests/replication,
-  tests/test_event_auth.py,
-  tests/test_utils,
-  tests/handlers/test_password_providers.py,
-  tests/handlers/test_room.py,
-  tests/handlers/test_room_summary.py,
-  tests/handlers/test_send_email.py,
-  tests/handlers/test_sync.py,
-  tests/handlers/test_user_directory.py,
-  tests/rest/client/test_login.py,
-  tests/rest/client/test_auth.py,
-  tests/rest/client/test_relations.py,
-  tests/rest/media/v1/test_filepath.py,
-  tests/rest/media/v1/test_oembed.py,
-  tests/storage/test_state.py,
-  tests/storage/test_user_directory.py,
-  tests/util/test_itertools.py,
-  tests/util/test_stream_change_cache.py
+  setup.py,
+  synapse/,
+  tests/
+
+# Note: Better exclusion syntax coming in mypy > 0.910
+# https://github.com/python/mypy/pull/11329
+#
+# For now, set the (?x) flag enable "verbose" regexes
+# https://docs.python.org/3/library/re.html#re.X
+exclude = (?x)
+  ^(
+  |synapse/storage/databases/__init__.py
+  |synapse/storage/databases/main/__init__.py
+  |synapse/storage/databases/main/account_data.py
+  |synapse/storage/databases/main/cache.py
+  |synapse/storage/databases/main/devices.py
+  |synapse/storage/databases/main/e2e_room_keys.py
+  |synapse/storage/databases/main/end_to_end_keys.py
+  |synapse/storage/databases/main/event_federation.py
+  |synapse/storage/databases/main/event_push_actions.py
+  |synapse/storage/databases/main/events_bg_updates.py
+  |synapse/storage/databases/main/events_worker.py
+  |synapse/storage/databases/main/group_server.py
+  |synapse/storage/databases/main/metrics.py
+  |synapse/storage/databases/main/monthly_active_users.py
+  |synapse/storage/databases/main/presence.py
+  |synapse/storage/databases/main/purge_events.py
+  |synapse/storage/databases/main/push_rule.py
+  |synapse/storage/databases/main/receipts.py
+  |synapse/storage/databases/main/room.py
+  |synapse/storage/databases/main/roommember.py
+  |synapse/storage/databases/main/search.py
+  |synapse/storage/databases/main/state.py
+  |synapse/storage/databases/main/stats.py
+  |synapse/storage/databases/main/transactions.py
+  |synapse/storage/databases/main/user_directory.py
+  |synapse/storage/schema/
+  |tests/api/test_auth.py
+  |tests/api/test_ratelimiting.py
+  |tests/app/test_openid_listener.py
+  |tests/appservice/test_scheduler.py
+  |tests/config/test_cache.py
+  |tests/config/test_tls.py
+  |tests/crypto/test_keyring.py
+  |tests/events/test_presence_router.py
+  |tests/events/test_utils.py
+  |tests/federation/test_federation_catch_up.py
+  |tests/federation/test_federation_sender.py
+  |tests/federation/test_federation_server.py
+  |tests/federation/transport/test_knocking.py
+  |tests/federation/transport/test_server.py
+  |tests/handlers/test_cas.py
+  |tests/handlers/test_directory.py
+  |tests/handlers/test_e2e_keys.py
+  |tests/handlers/test_federation.py
+  |tests/handlers/test_oidc.py
+  |tests/handlers/test_presence.py
+  |tests/handlers/test_profile.py
+  |tests/handlers/test_saml.py
+  |tests/handlers/test_typing.py
+  |tests/http/federation/test_matrix_federation_agent.py
+  |tests/http/federation/test_srv_resolver.py
+  |tests/http/test_fedclient.py
+  |tests/http/test_proxyagent.py
+  |tests/http/test_servlet.py
+  |tests/http/test_site.py
+  |tests/logging/__init__.py
+  |tests/logging/test_terse_json.py
+  |tests/module_api/test_api.py
+  |tests/push/test_email.py
+  |tests/push/test_http.py
+  |tests/push/test_presentable_names.py
+  |tests/push/test_push_rule_evaluator.py
+  |tests/rest/admin/test_admin.py
+  |tests/rest/admin/test_device.py
+  |tests/rest/admin/test_media.py
+  |tests/rest/admin/test_server_notice.py
+  |tests/rest/admin/test_user.py
+  |tests/rest/admin/test_username_available.py
+  |tests/rest/client/test_account.py
+  |tests/rest/client/test_events.py
+  |tests/rest/client/test_filter.py
+  |tests/rest/client/test_groups.py
+  |tests/rest/client/test_register.py
+  |tests/rest/client/test_report_event.py
+  |tests/rest/client/test_rooms.py
+  |tests/rest/client/test_third_party_rules.py
+  |tests/rest/client/test_transactions.py
+  |tests/rest/client/test_typing.py
+  |tests/rest/client/utils.py
+  |tests/rest/key/v2/test_remote_key_resource.py
+  |tests/rest/media/v1/test_base.py
+  |tests/rest/media/v1/test_media_storage.py
+  |tests/rest/media/v1/test_url_preview.py
+  |tests/scripts/test_new_matrix_user.py
+  |tests/server.py
+  |tests/server_notices/test_resource_limits_server_notices.py
+  |tests/state/test_v2.py
+  |tests/storage/test_account_data.py
+  |tests/storage/test_appservice.py
+  |tests/storage/test_background_update.py
+  |tests/storage/test_base.py
+  |tests/storage/test_client_ips.py
+  |tests/storage/test_database.py
+  |tests/storage/test_event_federation.py
+  |tests/storage/test_id_generators.py
+  |tests/storage/test_roommember.py
+  |tests/test_metrics.py
+  |tests/test_phone_home.py
+  |tests/test_server.py
+  |tests/test_state.py
+  |tests/test_terms_auth.py
+  |tests/test_visibility.py
+  |tests/unittest.py
+  |tests/util/caches/test_cached_call.py
+  |tests/util/caches/test_deferred_cache.py
+  |tests/util/caches/test_descriptors.py
+  |tests/util/caches/test_response_cache.py
+  |tests/util/caches/test_ttlcache.py
+  |tests/util/test_async_helpers.py
+  |tests/util/test_batching_queue.py
+  |tests/util/test_dict_cache.py
+  |tests/util/test_expiring_cache.py
+  |tests/util/test_file_consumer.py
+  |tests/util/test_linearizer.py
+  |tests/util/test_logcontext.py
+  |tests/util/test_lrucache.py
+  |tests/util/test_rwlock.py
+  |tests/util/test_wheel_timer.py
+  |tests/utils.py
+  )$

 [mypy-synapse.api.*]
 disallow_untyped_defs = True

+[mypy-synapse.app.*]
+disallow_untyped_defs = True
+
+[mypy-synapse.config._base]
+disallow_untyped_defs = True
+
 [mypy-synapse.crypto.*]
 disallow_untyped_defs = True

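To see what the `(?x)` flag in the new `exclude` setting buys (an illustrative sketch, separate from the config itself): verbose mode ignores unescaped whitespace, so the alternation can be written one path per line while still compiling to a single pattern. A cut-down version in Python:

    import re

    # A trimmed version of the exclude pattern above, compiled with the inline
    # (?x) "verbose" flag; whitespace and newlines in the pattern are ignored.
    exclude = re.compile(
        r"""(?x)
        ^(
        |synapse/storage/databases/__init__.py
        |synapse/storage/schema/
        )$
        """
    )
    assert exclude.match("synapse/storage/databases/__init__.py")
    assert exclude.match("synapse/storage/schema/")
    assert not exclude.match("synapse/handlers/sync.py")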
@@ -99,6 +163,9 @@ disallow_untyped_defs = True
 [mypy-synapse.handlers.*]
 disallow_untyped_defs = True

+[mypy-synapse.metrics.*]
+disallow_untyped_defs = True
+
 [mypy-synapse.push.*]
 disallow_untyped_defs = True

@@ -114,105 +181,45 @@ disallow_untyped_defs = True
 [mypy-synapse.storage.databases.main.client_ips]
 disallow_untyped_defs = True

+[mypy-synapse.storage.databases.main.directory]
+disallow_untyped_defs = True
+
+[mypy-synapse.storage.databases.main.room_batch]
+disallow_untyped_defs = True
+
+[mypy-synapse.storage.databases.main.profile]
+disallow_untyped_defs = True
+
+[mypy-synapse.storage.databases.main.state_deltas]
+disallow_untyped_defs = True
+
+[mypy-synapse.storage.databases.main.user_erasure_store]
+disallow_untyped_defs = True
+
 [mypy-synapse.storage.util.*]
 disallow_untyped_defs = True

 [mypy-synapse.streams.*]
 disallow_untyped_defs = True

-[mypy-synapse.util.batching_queue]
+[mypy-synapse.util.*]
 disallow_untyped_defs = True

-[mypy-synapse.util.caches.cached_call]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.caches.dictionary_cache]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.caches.lrucache]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.caches.response_cache]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.caches.stream_change_cache]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.caches.ttl_cache]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.daemonize]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.file_consumer]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.frozenutils]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.hash]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.httpresourcetree]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.iterutils]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.linked_list]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.logcontext]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.logformatter]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.macaroons]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.manhole]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.module_loader]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.msisdn]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.patch_inline_callbacks]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.ratelimitutils]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.retryutils]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.rlimit]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.stringutils]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.templates]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.threepids]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.wheel_timer]
-disallow_untyped_defs = True
-
-[mypy-synapse.util.versionstring]
-disallow_untyped_defs = True
+[mypy-synapse.util.caches.treecache]
+disallow_untyped_defs = False

 [mypy-tests.handlers.test_user_directory]
 disallow_untyped_defs = True

+[mypy-tests.storage.test_profile]
+disallow_untyped_defs = True
+
 [mypy-tests.storage.test_user_directory]
 disallow_untyped_defs = True

+[mypy-tests.rest.client.test_directory]
+disallow_untyped_defs = True
+
 ;; Dependencies without annotations
 ;; Before ignoring a module, check to see if type stubs are available.
 ;; The `typeshed` project maintains stubs here:
@@ -272,6 +279,9 @@ ignore_missing_imports = True
 [mypy-opentracing]
 ignore_missing_imports = True

+[mypy-parameterized.*]
+ignore_missing_imports = True
+
 [mypy-phonenumbers.*]
 ignore_missing_imports = True

@@ -24,7 +24,7 @@
 set -e

 # Change to the repository root
-cd "$(dirname "$0")/.."
+cd "$(dirname $0)/.."

 # Check for a user-specified Complement checkout
 if [[ -z "$COMPLEMENT_DIR" ]]; then

@@ -61,8 +61,8 @@ cd "$COMPLEMENT_DIR"
 EXTRA_COMPLEMENT_ARGS=""
 if [[ -n "$1" ]]; then
   # A test name regex has been set, supply it to Complement
-  EXTRA_COMPLEMENT_ARGS=(-run "$1")
+  EXTRA_COMPLEMENT_ARGS+="-run $1 "
 fi

 # Run the tests!
-go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403,msc2716 -count=1 "${EXTRA_COMPLEMENT_ARGS[@]}" ./tests/...
+go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
setup.py
@@ -17,6 +17,7 @@
 # limitations under the License.
 import glob
 import os
+from typing import Any, Dict

 from setuptools import Command, find_packages, setup

@@ -49,8 +50,6 @@ here = os.path.abspath(os.path.dirname(__file__))
 # [1]: http://tox.readthedocs.io/en/2.5.0/example/basic.html#integration-with-setup-py-test-command
 # [2]: https://pypi.python.org/pypi/setuptools_trial
 class TestCommand(Command):
-    user_options = []
-
     def initialize_options(self):
         pass

@@ -75,7 +74,7 @@ def read_file(path_segments):

 def exec_file(path_segments):
     """Execute a single python file to get the variables defined in it"""
-    result = {}
+    result: Dict[str, Any] = {}
     code = read_file(path_segments)
     exec(code, result)
     return result
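The annotation on `result` above is needed because `exec` populates the dict with arbitrary names. A minimal standalone sketch of the same trick (the file contents here are hypothetical):

    from typing import Any, Dict

    # Pretend this string is the contents of a file such as synapse/__init__.py.
    code = '__version__ = "1.48.0rc1"'

    # exec() fills the dict with every name the code defines (plus __builtins__).
    result: Dict[str, Any] = {}
    exec(code, result)
    assert result["__version__"] == "1.48.0rc1"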
@@ -111,6 +110,7 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
     "types-Pillow>=8.3.4",
     "types-pyOpenSSL>=20.0.7",
     "types-PyYAML>=5.4.10",
+    "types-requests>=2.26.0",
     "types-setuptools>=57.4.0",
 ]

@@ -47,7 +47,7 @@ try:
 except ImportError:
     pass

-__version__ = "1.47.0"
+__version__ = "1.48.0rc1"

 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when
@@ -1,5 +1,6 @@
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2018 New Vector
+# Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -19,22 +20,23 @@ import hashlib
 import hmac
 import logging
 import sys
+from typing import Callable, Optional

 import requests as _requests
 import yaml


 def request_registration(
-    user,
-    password,
-    server_location,
-    shared_secret,
-    admin=False,
-    user_type=None,
+    user: str,
+    password: str,
+    server_location: str,
+    shared_secret: str,
+    admin: bool = False,
+    user_type: Optional[str] = None,
     requests=_requests,
-    _print=print,
-    exit=sys.exit,
-):
+    _print: Callable[[str], None] = print,
+    exit: Callable[[int], None] = sys.exit,
+) -> None:

     url = "%s/_synapse/admin/v1/register" % (server_location.rstrip("/"),)

@@ -65,13 +67,13 @@ def request_registration(
         mac.update(b"\x00")
         mac.update(user_type.encode("utf8"))

-    mac = mac.hexdigest()
+    hex_mac = mac.hexdigest()

     data = {
         "nonce": nonce,
         "username": user,
         "password": password,
-        "mac": mac,
+        "mac": hex_mac,
         "admin": admin,
         "user_type": user_type,
     }
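For context on what the renamed `hex_mac` carries: it is the shared-secret registration MAC for `/_synapse/admin/v1/register`. A self-contained sketch of the computation as this script performs it (the nonce normally comes from a prior GET to the same endpoint; all values below are placeholders):

    import hashlib
    import hmac


    def registration_mac(
        shared_secret: str, nonce: str, user: str, password: str, admin: bool
    ) -> str:
        """Compute the hex MAC sent in the admin registration request."""
        mac = hmac.new(key=shared_secret.encode("utf8"), digestmod=hashlib.sha1)
        mac.update(nonce.encode("utf8"))
        mac.update(b"\x00")
        mac.update(user.encode("utf8"))
        mac.update(b"\x00")
        mac.update(password.encode("utf8"))
        mac.update(b"\x00")
        mac.update(b"admin" if admin else b"notadmin")
        return mac.hexdigest()


    print(registration_mac("secret", "abcd", "alice", "hunter2", admin=False))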
@@ -91,10 +93,17 @@ def request_registration(
     _print("Success!")


-def register_new_user(user, password, server_location, shared_secret, admin, user_type):
+def register_new_user(
+    user: str,
+    password: str,
+    server_location: str,
+    shared_secret: str,
+    admin: Optional[bool],
+    user_type: Optional[str],
+) -> None:
     if not user:
         try:
-            default_user = getpass.getuser()
+            default_user: Optional[str] = getpass.getuser()
         except Exception:
             default_user = None

@@ -123,8 +132,8 @@ def register_new_user(
         sys.exit(1)

     if admin is None:
-        admin = input("Make admin [no]: ")
-        if admin in ("y", "yes", "true"):
+        admin_inp = input("Make admin [no]: ")
+        if admin_inp in ("y", "yes", "true"):
             admin = True
         else:
             admin = False

@@ -134,7 +143,7 @@ def register_new_user(
     )


-def main():
+def main() -> None:

     logging.captureWarnings(True)

@@ -92,7 +92,7 @@ def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
     return user_infos


-def main():
+def main() -> None:
     parser = argparse.ArgumentParser()
     parser.add_argument(
         "-c",

@@ -142,7 +142,8 @@ def main():
     engine = create_engine(database_config.config)

     with make_conn(database_config, engine, "review_recent_signups") as db_conn:
-        user_infos = get_recent_users(db_conn.cursor(), since_ms)
+        # This generates a type of Cursor, not LoggingTransaction.
+        user_infos = get_recent_users(db_conn.cursor(), since_ms)  # type: ignore[arg-type]

     for user_info in user_infos:
         if exclude_users_with_email and user_info.emails:
@@ -1,7 +1,7 @@
 # Copyright 2015, 2016 OpenMarket Ltd
 # Copyright 2017 Vector Creations Ltd
 # Copyright 2018-2019 New Vector Ltd
-# Copyright 2019 The Matrix.org Foundation C.I.C.
+# Copyright 2019-2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -86,6 +86,9 @@ ROOM_EVENT_FILTER_SCHEMA = {
         # cf https://github.com/matrix-org/matrix-doc/pull/2326
         "org.matrix.labels": {"type": "array", "items": {"type": "string"}},
         "org.matrix.not_labels": {"type": "array", "items": {"type": "string"}},
+        # MSC3440, filtering by event relations.
+        "io.element.relation_senders": {"type": "array", "items": {"type": "string"}},
+        "io.element.relation_types": {"type": "array", "items": {"type": "string"}},
     },
 }

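To make the two new schema keys concrete, a hypothetical room event filter opting into the unstable MSC3440 fields might look as follows (`io.element.thread` is assumed here as the unstable thread relation type, and the user ID is a placeholder):

    # Keep only events with thread relations from a given sender.
    room_event_filter = {
        "io.element.relation_senders": ["@alice:example.com"],
        "io.element.relation_types": ["io.element.thread"],
    }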
@@ -146,14 +149,16 @@ def matrix_user_id_validator(user_id_str: str) -> UserID:


 class Filtering:
     def __init__(self, hs: "HomeServer"):
-        super().__init__()
+        self._hs = hs
         self.store = hs.get_datastore()

+        self.DEFAULT_FILTER_COLLECTION = FilterCollection(hs, {})
+
     async def get_user_filter(
         self, user_localpart: str, filter_id: Union[int, str]
     ) -> "FilterCollection":
         result = await self.store.get_user_filter(user_localpart, filter_id)
-        return FilterCollection(result)
+        return FilterCollection(self._hs, result)

     def add_user_filter(
         self, user_localpart: str, user_filter: JsonDict

@@ -191,21 +196,22 @@ FilterEvent = TypeVar("FilterEvent", EventBase, UserPresenceState, JsonDict)


 class FilterCollection:
-    def __init__(self, filter_json: JsonDict):
+    def __init__(self, hs: "HomeServer", filter_json: JsonDict):
         self._filter_json = filter_json

         room_filter_json = self._filter_json.get("room", {})

         self._room_filter = Filter(
-            {k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")}
+            hs,
+            {k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")},
         )

-        self._room_timeline_filter = Filter(room_filter_json.get("timeline", {}))
-        self._room_state_filter = Filter(room_filter_json.get("state", {}))
-        self._room_ephemeral_filter = Filter(room_filter_json.get("ephemeral", {}))
-        self._room_account_data = Filter(room_filter_json.get("account_data", {}))
-        self._presence_filter = Filter(filter_json.get("presence", {}))
-        self._account_data = Filter(filter_json.get("account_data", {}))
+        self._room_timeline_filter = Filter(hs, room_filter_json.get("timeline", {}))
+        self._room_state_filter = Filter(hs, room_filter_json.get("state", {}))
+        self._room_ephemeral_filter = Filter(hs, room_filter_json.get("ephemeral", {}))
+        self._room_account_data = Filter(hs, room_filter_json.get("account_data", {}))
+        self._presence_filter = Filter(hs, filter_json.get("presence", {}))
+        self._account_data = Filter(hs, filter_json.get("account_data", {}))

         self.include_leave = filter_json.get("room", {}).get("include_leave", False)
         self.event_fields = filter_json.get("event_fields", [])

@@ -232,25 +238,37 @@ class FilterCollection:
     def include_redundant_members(self) -> bool:
         return self._room_state_filter.include_redundant_members

-    def filter_presence(
+    async def filter_presence(
         self, events: Iterable[UserPresenceState]
     ) -> List[UserPresenceState]:
-        return self._presence_filter.filter(events)
+        return await self._presence_filter.filter(events)

-    def filter_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
-        return self._account_data.filter(events)
+    async def filter_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
+        return await self._account_data.filter(events)

-    def filter_room_state(self, events: Iterable[EventBase]) -> List[EventBase]:
-        return self._room_state_filter.filter(self._room_filter.filter(events))
+    async def filter_room_state(self, events: Iterable[EventBase]) -> List[EventBase]:
+        return await self._room_state_filter.filter(
+            await self._room_filter.filter(events)
+        )

-    def filter_room_timeline(self, events: Iterable[EventBase]) -> List[EventBase]:
-        return self._room_timeline_filter.filter(self._room_filter.filter(events))
+    async def filter_room_timeline(
+        self, events: Iterable[EventBase]
+    ) -> List[EventBase]:
+        return await self._room_timeline_filter.filter(
+            await self._room_filter.filter(events)
+        )

-    def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> List[JsonDict]:
-        return self._room_ephemeral_filter.filter(self._room_filter.filter(events))
+    async def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> List[JsonDict]:
+        return await self._room_ephemeral_filter.filter(
+            await self._room_filter.filter(events)
+        )

-    def filter_room_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
-        return self._room_account_data.filter(self._room_filter.filter(events))
+    async def filter_room_account_data(
+        self, events: Iterable[JsonDict]
+    ) -> List[JsonDict]:
+        return await self._room_account_data.filter(
+            await self._room_filter.filter(events)
+        )

     def blocks_all_presence(self) -> bool:
         return (

@@ -274,7 +292,9 @@ class FilterCollection:


 class Filter:
-    def __init__(self, filter_json: JsonDict):
+    def __init__(self, hs: "HomeServer", filter_json: JsonDict):
+        self._hs = hs
+        self._store = hs.get_datastore()
         self.filter_json = filter_json

         self.limit = filter_json.get("limit", 10)

@@ -297,6 +317,20 @@ class Filter:
         self.labels = filter_json.get("org.matrix.labels", None)
         self.not_labels = filter_json.get("org.matrix.not_labels", [])

+        # Ideally these would be rejected at the endpoint if they were provided
+        # and not supported, but that would involve modifying the JSON schema
+        # based on the homeserver configuration.
+        if hs.config.experimental.msc3440_enabled:
+            self.relation_senders = self.filter_json.get(
+                "io.element.relation_senders", None
+            )
+            self.relation_types = self.filter_json.get(
+                "io.element.relation_types", None
+            )
+        else:
+            self.relation_senders = None
+            self.relation_types = None
+
     def filters_all_types(self) -> bool:
         return "*" in self.not_types

@@ -306,7 +340,7 @@ class Filter:
     def filters_all_rooms(self) -> bool:
         return "*" in self.not_rooms

-    def check(self, event: FilterEvent) -> bool:
+    def _check(self, event: FilterEvent) -> bool:
         """Checks whether the filter matches the given event.

         Args:

@@ -420,8 +454,30 @@ class Filter:

         return room_ids

-    def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
-        return list(filter(self.check, events))
+    async def _check_event_relations(
+        self, events: Iterable[FilterEvent]
+    ) -> List[FilterEvent]:
+        # The event IDs to check, mypy doesn't understand the isinstance check.
+        event_ids = [event.event_id for event in events if isinstance(event, EventBase)]  # type: ignore[attr-defined]
+        event_ids_to_keep = set(
+            await self._store.events_have_relations(
+                event_ids, self.relation_senders, self.relation_types
+            )
+        )
+
+        return [
+            event
+            for event in events
+            if not isinstance(event, EventBase) or event.event_id in event_ids_to_keep
+        ]
+
+    async def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
+        result = [event for event in events if self._check(event)]
+
+        if self.relation_senders or self.relation_types:
+            return await self._check_event_relations(result)
+
+        return result

     def with_room_ids(self, room_ids: Iterable[str]) -> "Filter":
         """Returns a new filter with the given room IDs appended.

@@ -433,7 +489,7 @@ class Filter:
             filter: A new filter including the given rooms and the old
                 filter's rooms.
         """
-        newFilter = Filter(self.filter_json)
+        newFilter = Filter(self._hs, self.filter_json)
         newFilter.rooms += room_ids
         return newFilter

@@ -444,6 +500,3 @@ def _matches_wildcard(actual_value: Optional[str], filter_value: str) -> bool:
         return actual_value.startswith(type_prefix)
     else:
         return actual_value == filter_value
-
-
-DEFAULT_FILTER_COLLECTION = FilterCollection({})
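Because `Filter.filter` and the `FilterCollection.filter_*` helpers are now coroutines (they may query the database for event relations), call sites must construct the collection with the homeserver and await the result. A rough sketch of an updated call site, assuming `hs` is a `HomeServer` and `events` a list of events:

    from typing import List

    from synapse.api.filtering import FilterCollection
    from synapse.events import EventBase


    async def filtered_timeline(hs, filter_json: dict, events: List[EventBase]) -> List[EventBase]:
        # The collection now needs the homeserver so Filter can reach the
        # datastore, and filtering must be awaited since it may check relations.
        filter_collection = FilterCollection(hs, filter_json)
        return await filter_collection.filter_room_timeline(events)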
@@ -30,7 +30,8 @@ FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
 STATIC_PREFIX = "/_matrix/static"
 WEB_CLIENT_PREFIX = "/_matrix/client"
 SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
-MEDIA_PREFIX = "/_matrix/media/r0"
+MEDIA_R0_PREFIX = "/_matrix/media/r0"
+MEDIA_V3_PREFIX = "/_matrix/media/v3"
 LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"

@@ -13,6 +13,7 @@
 # limitations under the License.
 import logging
 import sys
+from typing import Container

 from synapse import python_dependencies  # noqa: E402

@@ -27,7 +28,9 @@ except python_dependencies.DependencyException as e:
     sys.exit(1)


-def check_bind_error(e, address, bind_addresses):
+def check_bind_error(
+    e: Exception, address: str, bind_addresses: Container[str]
+) -> None:
     """
     This method checks an exception occurred while binding on 0.0.0.0.
     If :: is specified in the bind addresses a warning is shown.

@@ -38,9 +41,9 @@ def check_bind_error(
     When binding on 0.0.0.0 after :: this can safely be ignored.

     Args:
-        e (Exception): Exception that was caught.
-        address (str): Address on which binding was attempted.
-        bind_addresses (list): Addresses on which the service listens.
+        e: Exception that was caught.
+        address: Address on which binding was attempted.
+        bind_addresses: Addresses on which the service listens.
     """
     if address == "0.0.0.0" and "::" in bind_addresses:
         logger.warning(
@@ -22,13 +22,27 @@ import socket
 import sys
 import traceback
 import warnings
-from typing import TYPE_CHECKING, Awaitable, Callable, Iterable
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Awaitable,
+    Callable,
+    Collection,
+    Dict,
+    Iterable,
+    List,
+    NoReturn,
+    Tuple,
+    cast,
+)

 from cryptography.utils import CryptographyDeprecationWarning
-from typing_extensions import NoReturn

 import twisted
-from twisted.internet import defer, error, reactor
+from twisted.internet import defer, error, reactor as _reactor
+from twisted.internet.interfaces import IOpenSSLContextFactory, IReactorSSL, IReactorTCP
+from twisted.internet.protocol import ServerFactory
+from twisted.internet.tcp import Port
 from twisted.logger import LoggingFile, LogLevel
 from twisted.protocols.tls import TLSMemoryBIOFactory
 from twisted.python.threadpool import ThreadPool

@@ -48,6 +62,7 @@ from synapse.logging.context import PreserveLoggingContext
 from synapse.metrics import register_threadpool
 from synapse.metrics.background_process_metrics import wrap_as_background_process
 from synapse.metrics.jemalloc import setup_jemalloc_stats
+from synapse.types import ISynapseReactor
 from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
 from synapse.util.daemonize import daemonize_process
 from synapse.util.gai_resolver import GAIResolver

@@ -57,33 +72,44 @@ from synapse.util.versionstring import get_version_string
 if TYPE_CHECKING:
     from synapse.server import HomeServer

+# Twisted injects the global reactor to make it easier to import, this confuses
+# mypy which thinks it is a module. Tell it that it is a more proper type.
+reactor = cast(ISynapseReactor, _reactor)
+
+
 logger = logging.getLogger(__name__)

 # list of tuples of function, args list, kwargs dict
-_sighup_callbacks = []
+_sighup_callbacks: List[
+    Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]]
+] = []


-def register_sighup(func, *args, **kwargs):
+def register_sighup(func: Callable[..., None], *args: Any, **kwargs: Any) -> None:
     """
     Register a function to be called when a SIGHUP occurs.

     Args:
-        func (function): Function to be called when sent a SIGHUP signal.
+        func: Function to be called when sent a SIGHUP signal.
         *args, **kwargs: args and kwargs to be passed to the target function.
     """
     _sighup_callbacks.append((func, args, kwargs))


-def start_worker_reactor(appname, config, run_command=reactor.run):
+def start_worker_reactor(
+    appname: str,
+    config: HomeServerConfig,
+    run_command: Callable[[], None] = reactor.run,
+) -> None:
     """Run the reactor in the main process

     Daemonizes if necessary, and then configures some resources, before starting
     the reactor. Pulls configuration from the 'worker' settings in 'config'.

     Args:
-        appname (str): application name which will be sent to syslog
-        config (synapse.config.Config): config object
-        run_command (Callable[]): callable that actually runs the reactor
+        appname: application name which will be sent to syslog
+        config: config object
+        run_command: callable that actually runs the reactor
     """

     logger = logging.getLogger(config.worker.worker_app)
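For what the newly typed `_sighup_callbacks` list holds in practice, an assumed usage sketch (the hook below is invented for illustration; Synapse itself registers functions such as certificate reloading this way):

    # Each registered entry is stored as (func, args, kwargs) and later invoked
    # as func(*args, **kwargs) when the process receives SIGHUP.
    def reopen_log_file(path: str, *, verbose: bool = False) -> None:
        print(f"reopening {path}", "(verbose)" if verbose else "")


    register_sighup(reopen_log_file, "/var/log/synapse.log", verbose=True)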
@ -101,32 +127,32 @@ def start_worker_reactor(appname, config, run_command=reactor.run):
|
||||||
|
|
||||||
|
|
||||||
def start_reactor(
|
def start_reactor(
|
||||||
appname,
|
appname: str,
|
||||||
soft_file_limit,
|
soft_file_limit: int,
|
||||||
gc_thresholds,
|
gc_thresholds: Tuple[int, int, int],
|
||||||
pid_file,
|
pid_file: str,
|
||||||
daemonize,
|
daemonize: bool,
|
||||||
print_pidfile,
|
print_pidfile: bool,
|
||||||
logger,
|
logger: logging.Logger,
|
||||||
run_command=reactor.run,
|
run_command: Callable[[], None] = reactor.run,
|
||||||
):
|
) -> None:
|
||||||
"""Run the reactor in the main process
|
"""Run the reactor in the main process
|
||||||
|
|
||||||
Daemonizes if necessary, and then configures some resources, before starting
|
Daemonizes if necessary, and then configures some resources, before starting
|
||||||
the reactor
|
the reactor
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
appname (str): application name which will be sent to syslog
|
appname: application name which will be sent to syslog
|
||||||
soft_file_limit (int):
|
soft_file_limit:
|
||||||
gc_thresholds:
|
gc_thresholds:
|
||||||
pid_file (str): name of pid file to write to if daemonize is True
|
pid_file: name of pid file to write to if daemonize is True
|
||||||
daemonize (bool): true to run the reactor in a background process
|
daemonize: true to run the reactor in a background process
|
||||||
print_pidfile (bool): whether to print the pid file, if daemonize is True
|
print_pidfile: whether to print the pid file, if daemonize is True
|
||||||
logger (logging.Logger): logger instance to pass to Daemonize
|
logger: logger instance to pass to Daemonize
|
||||||
run_command (Callable[]): callable that actually runs the reactor
|
run_command: callable that actually runs the reactor
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def run():
|
def run() -> None:
|
||||||
logger.info("Running")
|
logger.info("Running")
|
||||||
setup_jemalloc_stats()
|
setup_jemalloc_stats()
|
||||||
change_resource_limit(soft_file_limit)
|
change_resource_limit(soft_file_limit)
|
||||||
|
@ -185,7 +211,7 @@ def redirect_stdio_to_logs() -> None:
|
||||||
print("Redirected stdout/stderr to logs")
|
print("Redirected stdout/stderr to logs")
|
||||||
|
|
||||||
|
|
||||||
def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
|
def register_start(cb: Callable[..., Awaitable], *args: Any, **kwargs: Any) -> None:
|
||||||
"""Register a callback with the reactor, to be called once it is running
|
"""Register a callback with the reactor, to be called once it is running
|
||||||
|
|
||||||
This can be used to initialise parts of the system which require an asynchronous
|
This can be used to initialise parts of the system which require an asynchronous
|
||||||
|
@ -195,7 +221,7 @@ def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
|
||||||
will exit.
|
will exit.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
async def wrapper():
|
async def wrapper() -> None:
|
||||||
try:
|
try:
|
||||||
await cb(*args, **kwargs)
|
await cb(*args, **kwargs)
|
||||||
except Exception:
|
except Exception:
|
||||||
|
@ -224,7 +250,7 @@ def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
|
||||||
reactor.callWhenRunning(lambda: defer.ensureDeferred(wrapper()))
|
reactor.callWhenRunning(lambda: defer.ensureDeferred(wrapper()))
|
||||||
|
|
||||||
|
|
||||||
def listen_metrics(bind_addresses, port):
|
def listen_metrics(bind_addresses: Iterable[str], port: int) -> None:
|
||||||
"""
|
"""
|
||||||
Start Prometheus metrics server.
|
Start Prometheus metrics server.
|
||||||
"""
|
"""
|
||||||
|
synapse/app/_base.py

@@ -236,11 +262,11 @@ def listen_metrics(bind_addresses, port):


 def listen_manhole(
-    bind_addresses: Iterable[str],
+    bind_addresses: Collection[str],
     port: int,
     manhole_settings: ManholeConfig,
     manhole_globals: dict,
-):
+) -> None:
     # twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
     # warning. It's fixed by https://github.com/twisted/twisted/pull/1522), so
     # suppress the warning for now.
@@ -259,12 +285,18 @@ def listen_manhole(
     )


-def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
+def listen_tcp(
+    bind_addresses: Collection[str],
+    port: int,
+    factory: ServerFactory,
+    reactor: IReactorTCP = reactor,
+    backlog: int = 50,
+) -> List[Port]:
     """
     Create a TCP socket for a port and several addresses

     Returns:
-        list[twisted.internet.tcp.Port]: listening for TCP connections
+        list of twisted.internet.tcp.Port listening for TCP connections
     """
     r = []
     for address in bind_addresses:
@@ -273,12 +305,19 @@ def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
         except error.CannotListenError as e:
             check_bind_error(e, address, bind_addresses)

-    return r
+    # IReactorTCP returns an object implementing IListeningPort from listenTCP,
+    # but we know it will be a Port instance.
+    return r  # type: ignore[return-value]


 def listen_ssl(
-    bind_addresses, port, factory, context_factory, reactor=reactor, backlog=50
-):
+    bind_addresses: Collection[str],
+    port: int,
+    factory: ServerFactory,
+    context_factory: IOpenSSLContextFactory,
+    reactor: IReactorSSL = reactor,
+    backlog: int = 50,
+) -> List[Port]:
     """
     Create an TLS-over-TCP socket for a port and several addresses

@@ -294,10 +333,13 @@ def listen_ssl(
         except error.CannotListenError as e:
             check_bind_error(e, address, bind_addresses)

-    return r
+    # IReactorSSL incorrectly declares that an int is returned from listenSSL,
+    # it actually returns an object implementing IListeningPort, but we know it
+    # will be a Port instance.
+    return r  # type: ignore[return-value]


-def refresh_certificate(hs: "HomeServer"):
+def refresh_certificate(hs: "HomeServer") -> None:
     """
     Refresh the TLS certificates that Synapse is using by re-reading them from
     disk and updating the TLS context factories to use them.
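The `# type: ignore[return-value]` comments above paper over a real mismatch: Twisted declares `listenTCP`/`listenSSL` as returning `IListeningPort`, while the default reactor in practice hands back a concrete `twisted.internet.tcp.Port`. A minimal sketch (not from this commit) of the same narrowing done with an explicit `cast` instead of an ignore; the `Echo` protocol and `listen_on` helper are hypothetical stand-ins:

from typing import List, cast

from twisted.internet import reactor
from twisted.internet.protocol import Factory, Protocol
from twisted.internet.tcp import Port


class Echo(Protocol):
    def dataReceived(self, data: bytes) -> None:
        self.transport.write(data)


def listen_on(addresses: List[str], port: int) -> List[Port]:
    ports = []
    for address in addresses:
        # listenTCP is declared to return IListeningPort; at runtime the
        # default reactor returns a concrete Port.
        listening = reactor.listenTCP(
            port, Factory.forProtocol(Echo), interface=address
        )
        ports.append(cast(Port, listening))
    return ports

Either spelling records the same runtime knowledge; the commit keeps the ignore comment so the function bodies stay untouched.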
@@ -329,7 +371,7 @@ def refresh_certificate(hs: "HomeServer"):
     logger.info("Context factories updated.")


-async def start(hs: "HomeServer"):
+async def start(hs: "HomeServer") -> None:
     """
     Start a Synapse server or worker.

@@ -360,7 +402,7 @@ async def start(hs: "HomeServer"):
     if hasattr(signal, "SIGHUP"):

         @wrap_as_background_process("sighup")
-        def handle_sighup(*args, **kwargs):
+        async def handle_sighup(*args: Any, **kwargs: Any) -> None:
             # Tell systemd our state, if we're using it. This will silently fail if
             # we're not using systemd.
             sdnotify(b"RELOADING=1")
@@ -373,7 +415,7 @@ async def start(hs: "HomeServer"):
         # We defer running the sighup handlers until next reactor tick. This
         # is so that we're in a sane state, e.g. flushing the logs may fail
         # if the sighup happens in the middle of writing a log entry.
-        def run_sighup(*args, **kwargs):
+        def run_sighup(*args: Any, **kwargs: Any) -> None:
            # `callFromThread` should be "signal safe" as well as thread
            # safe.
            reactor.callFromThread(handle_sighup, *args, **kwargs)
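The comments in the hunks above describe the SIGHUP dance: the OS-level handler must stay tiny and defer the real work to the next reactor tick via `callFromThread`, which is both thread- and signal-safe. A rough sketch of the same pattern outside Synapse (assuming Twisted is installed; the handler body is a stand-in):

import signal

from twisted.internet import reactor


def handle_sighup() -> None:
    # Stand-in for the real work: reloading certificates and log config.
    print("SIGHUP received, reloading")


def run_sighup(signum, frame) -> None:
    # Do nothing here except schedule the handler for the next reactor tick.
    reactor.callFromThread(handle_sighup)


if hasattr(signal, "SIGHUP"):  # SIGHUP does not exist on Windows
    signal.signal(signal.SIGHUP, run_sighup)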
@@ -436,12 +478,8 @@ async def start(hs: "HomeServer"):
     atexit.register(gc.freeze)


-def setup_sentry(hs: "HomeServer"):
-    """Enable sentry integration, if enabled in configuration
-
-    Args:
-        hs
-    """
+def setup_sentry(hs: "HomeServer") -> None:
+    """Enable sentry integration, if enabled in configuration"""

     if not hs.config.metrics.sentry_enabled:
         return
@@ -466,7 +504,7 @@ def setup_sentry(hs: "HomeServer"):
         scope.set_tag("worker_name", name)


-def setup_sdnotify(hs: "HomeServer"):
+def setup_sdnotify(hs: "HomeServer") -> None:
     """Adds process state hooks to tell systemd what we are up to."""

     # Tell systemd our state, if we're using it. This will silently fail if
@@ -481,7 +519,7 @@ def setup_sdnotify(hs: "HomeServer"):
 sdnotify_sockaddr = os.getenv("NOTIFY_SOCKET")


-def sdnotify(state):
+def sdnotify(state: bytes) -> None:
     """
     Send a notification to systemd, if the NOTIFY_SOCKET env var is set.

@@ -490,7 +528,7 @@ def sdnotify(state):
     package which many OSes don't include as a matter of principle.

     Args:
-        state (bytes): notification to send
+        state: notification to send
     """
     if not isinstance(state, bytes):
         raise TypeError("sdnotify should be called with a bytes")
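For reference, `sdnotify` implements the systemd notification protocol by hand precisely to avoid the external sdnotify package mentioned in its docstring. A simplified sketch of what such a shim does, assuming a POSIX system (this mirrors, but is not, the code in this file):

import os
import socket


def sdnotify(state: bytes) -> None:
    if not isinstance(state, bytes):
        raise TypeError("sdnotify should be called with a bytes")
    addr = os.getenv("NOTIFY_SOCKET")
    if addr is None:
        return  # not running under systemd
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract namespace socket
    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
            sock.connect(addr)
            sock.sendall(state)  # one datagram, e.g. b"READY=1"
    except OSError:
        pass  # notification is best-effort


sdnotify(b"READY=1")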
synapse/app/admin_cmd.py

@@ -17,6 +17,7 @@ import logging
 import os
 import sys
 import tempfile
+from typing import List, Optional

 from twisted.internet import defer, task

@@ -25,6 +26,7 @@ from synapse.app import _base
 from synapse.config._base import ConfigError
 from synapse.config.homeserver import HomeServerConfig
 from synapse.config.logger import setup_logging
+from synapse.events import EventBase
 from synapse.handlers.admin import ExfiltrationWriter
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -40,6 +42,7 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
 from synapse.replication.slave.storage.registration import SlavedRegistrationStore
 from synapse.server import HomeServer
 from synapse.storage.databases.main.room import RoomWorkerStore
+from synapse.types import StateMap
 from synapse.util.logcontext import LoggingContext
 from synapse.util.versionstring import get_version_string

@@ -65,16 +68,11 @@ class AdminCmdSlavedStore(


 class AdminCmdServer(HomeServer):
-    DATASTORE_CLASS = AdminCmdSlavedStore
+    DATASTORE_CLASS = AdminCmdSlavedStore  # type: ignore


-async def export_data_command(hs: HomeServer, args):
-    """Export data for a user.
-
-    Args:
-        hs
-        args (argparse.Namespace)
-    """
+async def export_data_command(hs: HomeServer, args: argparse.Namespace) -> None:
+    """Export data for a user."""

     user_id = args.user_id
     directory = args.output_directory
@@ -92,12 +90,12 @@ class FileExfiltrationWriter(ExfiltrationWriter):
     Note: This writes to disk on the main reactor thread.

     Args:
-        user_id (str): The user whose data is being exfiltrated.
-        directory (str|None): The directory to write the data to, if None then
-            will write to a temporary directory.
+        user_id: The user whose data is being exfiltrated.
+        directory: The directory to write the data to, if None then will write
+            to a temporary directory.
     """

-    def __init__(self, user_id, directory=None):
+    def __init__(self, user_id: str, directory: Optional[str] = None):
         self.user_id = user_id

         if directory:
@@ -111,7 +109,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         if list(os.listdir(self.base_directory)):
             raise Exception("Directory must be empty")

-    def write_events(self, room_id, events):
+    def write_events(self, room_id: str, events: List[EventBase]) -> None:
         room_directory = os.path.join(self.base_directory, "rooms", room_id)
         os.makedirs(room_directory, exist_ok=True)
         events_file = os.path.join(room_directory, "events")
@@ -120,7 +118,9 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         for event in events:
             print(json.dumps(event.get_pdu_json()), file=f)

-    def write_state(self, room_id, event_id, state):
+    def write_state(
+        self, room_id: str, event_id: str, state: StateMap[EventBase]
+    ) -> None:
         room_directory = os.path.join(self.base_directory, "rooms", room_id)
         state_directory = os.path.join(room_directory, "state")
         os.makedirs(state_directory, exist_ok=True)
@@ -131,7 +131,9 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         for event in state.values():
             print(json.dumps(event.get_pdu_json()), file=f)

-    def write_invite(self, room_id, event, state):
+    def write_invite(
+        self, room_id: str, event: EventBase, state: StateMap[EventBase]
+    ) -> None:
         self.write_events(room_id, [event])

         # We write the invite state somewhere else as they aren't full events
@@ -145,7 +147,9 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         for event in state.values():
             print(json.dumps(event), file=f)

-    def write_knock(self, room_id, event, state):
+    def write_knock(
+        self, room_id: str, event: EventBase, state: StateMap[EventBase]
+    ) -> None:
         self.write_events(room_id, [event])

         # We write the knock state somewhere else as they aren't full events
@@ -159,11 +163,11 @@ class FileExfiltrationWriter(ExfiltrationWriter):
         for event in state.values():
             print(json.dumps(event), file=f)

-    def finished(self):
+    def finished(self) -> str:
         return self.base_directory


-def start(config_options):
+def start(config_options: List[str]) -> None:
     parser = argparse.ArgumentParser(description="Synapse Admin Command")
     HomeServerConfig.add_arguments_to_parser(parser)

@@ -231,7 +235,7 @@ def start(config_options):
     # We also make sure that `_base.start` gets run before we actually run the
     # command.

-    async def run():
+    async def run() -> None:
         with LoggingContext("command"):
             await _base.start(ss)
             await args.func(ss, args)
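The writer above lays events out as JSON lines, one serialised event per line, under `rooms/<room_id>/events`. A toy illustration of that layout (hypothetical event and output directory, not Synapse code):

import json
import os

events = [{"event_id": "$abc", "type": "m.room.message"}]  # hypothetical events
base_directory = "/tmp/export"  # hypothetical output directory

room_directory = os.path.join(base_directory, "rooms", "!room:example.org")
os.makedirs(room_directory, exist_ok=True)
with open(os.path.join(room_directory, "events"), "a") as f:
    for event in events:
        print(json.dumps(event), file=f)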
synapse/app/generic_worker.py

@@ -14,11 +14,10 @@
 # limitations under the License.
 import logging
 import sys
-from typing import Dict, Optional
+from typing import Dict, List, Optional, Tuple

 from twisted.internet import address
-from twisted.web.resource import IResource
-from twisted.web.server import Request
+from twisted.web.resource import Resource

 import synapse
 import synapse.events
@@ -27,7 +26,8 @@ from synapse.api.urls import (
     CLIENT_API_PREFIX,
     FEDERATION_PREFIX,
     LEGACY_MEDIA_PREFIX,
-    MEDIA_PREFIX,
+    MEDIA_R0_PREFIX,
+    MEDIA_V3_PREFIX,
     SERVER_KEY_V2_PREFIX,
 )
 from synapse.app import _base
@@ -44,7 +44,7 @@ from synapse.config.server import ListenerConfig
 from synapse.federation.transport.server import TransportLayerServer
 from synapse.http.server import JsonResource, OptionsResource
 from synapse.http.servlet import RestServlet, parse_json_object_from_request
-from synapse.http.site import SynapseSite
+from synapse.http.site import SynapseRequest, SynapseSite
 from synapse.logging.context import LoggingContext
 from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
 from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
@@ -120,6 +120,7 @@ from synapse.storage.databases.main.stats import StatsStore
 from synapse.storage.databases.main.transactions import TransactionWorkerStore
 from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
 from synapse.storage.databases.main.user_directory import UserDirectoryStore
+from synapse.types import JsonDict
 from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.versionstring import get_version_string

@@ -144,7 +145,9 @@ class KeyUploadServlet(RestServlet):
         self.http_client = hs.get_simple_http_client()
         self.main_uri = hs.config.worker.worker_main_http_uri

-    async def on_POST(self, request: Request, device_id: Optional[str]):
+    async def on_POST(
+        self, request: SynapseRequest, device_id: Optional[str]
+    ) -> Tuple[int, JsonDict]:
         requester = await self.auth.get_user_by_req(request, allow_guest=True)
         user_id = requester.user.to_string()
         body = parse_json_object_from_request(request)
@@ -188,9 +191,8 @@ class KeyUploadServlet(RestServlet):
            # If the header exists, add to the comma-separated list of the first
            # instance of the header. Otherwise, generate a new header.
            if x_forwarded_for:
-                x_forwarded_for = [
-                    x_forwarded_for[0] + b", " + previous_host
-                ] + x_forwarded_for[1:]
+                x_forwarded_for = [x_forwarded_for[0] + b", " + previous_host]
+                x_forwarded_for.extend(x_forwarded_for[1:])
            else:
                x_forwarded_for = [previous_host]
            headers[b"X-Forwarded-For"] = x_forwarded_for
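The hunk above rewrites the `X-Forwarded-For` bookkeeping while keeping the header's semantics: it is a comma-separated hop list, and the worker appends the hop it received the request from before proxying key uploads to the main process. A small standalone illustration with hypothetical addresses:

x_forwarded_for = [b"203.0.113.9"]  # hypothetical existing header values
previous_host = b"10.1.2.3"  # hypothetical previous hop

if x_forwarded_for:
    headers = {b"X-Forwarded-For": [x_forwarded_for[0] + b", " + previous_host]}
else:
    headers = {b"X-Forwarded-For": [previous_host]}

assert headers[b"X-Forwarded-For"] == [b"203.0.113.9, 10.1.2.3"]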
@@ -255,13 +257,16 @@ class GenericWorkerSlavedStore(
     SessionStore,
     BaseSlavedStore,
 ):
-    pass
+    # Properties that multiple storage classes define. Tell mypy what the
+    # expected type is.
+    server_name: str
+    config: HomeServerConfig


 class GenericWorkerServer(HomeServer):
-    DATASTORE_CLASS = GenericWorkerSlavedStore
+    DATASTORE_CLASS = GenericWorkerSlavedStore  # type: ignore

-    def _listen_http(self, listener_config: ListenerConfig):
+    def _listen_http(self, listener_config: ListenerConfig) -> None:
         port = listener_config.port
         bind_addresses = listener_config.bind_addresses

@@ -269,10 +274,10 @@ class GenericWorkerServer(HomeServer):

         site_tag = listener_config.http_options.tag
         if site_tag is None:
-            site_tag = port
+            site_tag = str(port)

         # We always include a health resource.
-        resources: Dict[str, IResource] = {"/health": HealthResource()}
+        resources: Dict[str, Resource] = {"/health": HealthResource()}

         for res in listener_config.http_options.resources:
             for name in res.names:
@@ -336,7 +341,8 @@ class GenericWorkerServer(HomeServer):

                         resources.update(
                             {
-                                MEDIA_PREFIX: media_repo,
+                                MEDIA_R0_PREFIX: media_repo,
+                                MEDIA_V3_PREFIX: media_repo,
                                 LEGACY_MEDIA_PREFIX: media_repo,
                                 "/_synapse/admin": admin_resource,
                             }
@@ -388,7 +394,7 @@ class GenericWorkerServer(HomeServer):

         logger.info("Synapse worker now listening on port %d", port)

-    def start_listening(self):
+    def start_listening(self) -> None:
         for listener in self.config.worker.worker_listeners:
             if listener.type == "http":
                 self._listen_http(listener)
@@ -413,7 +419,7 @@ class GenericWorkerServer(HomeServer):
             self.get_tcp_replication().start_replication(self)


-def start(config_options):
+def start(config_options: List[str]) -> None:
     try:
         config = HomeServerConfig.load_config("Synapse worker", config_options)
     except ConfigError as e:
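Both media hunks map the new Matrix v1.1 `v3` path onto the same media repository resource as the older generations of the API. A sketch of that aliasing; the literal prefix values below are assumptions (the real constants live in `synapse.api.urls`):

MEDIA_R0_PREFIX = "/_matrix/media/r0"  # assumed value
MEDIA_V3_PREFIX = "/_matrix/media/v3"  # assumed value
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"  # assumed value

media_repo = object()  # stand-in for the media repository resource

resources = {}
# All three generations of the media API are served by one resource.
resources.update(
    {
        MEDIA_R0_PREFIX: media_repo,
        MEDIA_V3_PREFIX: media_repo,
        LEGACY_MEDIA_PREFIX: media_repo,
    }
)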
synapse/app/homeserver.py

@@ -16,10 +16,10 @@
 import logging
 import os
 import sys
-from typing import Iterator
+from typing import Dict, Iterable, Iterator, List

-from twisted.internet import reactor
-from twisted.web.resource import EncodingResourceWrapper, IResource
+from twisted.internet.tcp import Port
+from twisted.web.resource import EncodingResourceWrapper, Resource
 from twisted.web.server import GzipEncoderFactory
 from twisted.web.static import File

@@ -29,7 +29,8 @@ from synapse import events
 from synapse.api.urls import (
     FEDERATION_PREFIX,
     LEGACY_MEDIA_PREFIX,
-    MEDIA_PREFIX,
+    MEDIA_R0_PREFIX,
+    MEDIA_V3_PREFIX,
     SERVER_KEY_V2_PREFIX,
     STATIC_PREFIX,
     WEB_CLIENT_PREFIX,
@@ -76,23 +77,27 @@ from synapse.util.versionstring import get_version_string
 logger = logging.getLogger("synapse.app.homeserver")


-def gz_wrap(r):
+def gz_wrap(r: Resource) -> Resource:
     return EncodingResourceWrapper(r, [GzipEncoderFactory()])


 class SynapseHomeServer(HomeServer):
-    DATASTORE_CLASS = DataStore
+    DATASTORE_CLASS = DataStore  # type: ignore

-    def _listener_http(self, config: HomeServerConfig, listener_config: ListenerConfig):
+    def _listener_http(
+        self, config: HomeServerConfig, listener_config: ListenerConfig
+    ) -> Iterable[Port]:
         port = listener_config.port
         bind_addresses = listener_config.bind_addresses
         tls = listener_config.tls
+        # Must exist since this is an HTTP listener.
+        assert listener_config.http_options is not None
         site_tag = listener_config.http_options.tag
         if site_tag is None:
             site_tag = str(port)

         # We always include a health resource.
-        resources = {"/health": HealthResource()}
+        resources: Dict[str, Resource] = {"/health": HealthResource()}

         for res in listener_config.http_options.resources:
             for name in res.names:
@@ -111,7 +116,7 @@ class SynapseHomeServer(HomeServer):
                 ("listeners", site_tag, "additional_resources", "<%s>" % (path,)),
             )
             handler = handler_cls(config, module_api)
-            if IResource.providedBy(handler):
+            if isinstance(handler, Resource):
                 resource = handler
             elif hasattr(handler, "handle_request"):
                 resource = AdditionalResource(self, handler.handle_request)
@@ -128,7 +133,7 @@ class SynapseHomeServer(HomeServer):

         # try to find something useful to redirect '/' to
         if WEB_CLIENT_PREFIX in resources:
-            root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
+            root_resource: Resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
         elif STATIC_PREFIX in resources:
             root_resource = RootOptionsRedirectResource(STATIC_PREFIX)
         else:
@@ -145,6 +150,8 @@ class SynapseHomeServer(HomeServer):
         )

         if tls:
+            # refresh_certificate should have been called before this.
+            assert self.tls_server_context_factory is not None
             ports = listen_ssl(
                 bind_addresses,
                 port,
@@ -165,20 +172,21 @@ class SynapseHomeServer(HomeServer):

         return ports

-    def _configure_named_resource(self, name, compress=False):
+    def _configure_named_resource(
+        self, name: str, compress: bool = False
+    ) -> Dict[str, Resource]:
         """Build a resource map for a named resource

         Args:
-            name (str): named resource: one of "client", "federation", etc
-            compress (bool): whether to enable gzip compression for this
-                resource
+            name: named resource: one of "client", "federation", etc
+            compress: whether to enable gzip compression for this resource

         Returns:
-            dict[str, Resource]: map from path to HTTP resource
+            map from path to HTTP resource
         """
-        resources = {}
+        resources: Dict[str, Resource] = {}
         if name == "client":
-            client_resource = ClientRestResource(self)
+            client_resource: Resource = ClientRestResource(self)
             if compress:
                 client_resource = gz_wrap(client_resource)

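Several hunks above add `Dict[str, Resource]` annotations where mypy would otherwise infer an over-narrow value type from the first entry. A minimal sketch of the pattern; `HealthResource` here is a stand-in for the real health resource:

from typing import Dict

from twisted.web.resource import Resource


class HealthResource(Resource):  # illustrative stand-in
    isLeaf = True


resources: Dict[str, Resource] = {"/health": HealthResource()}
resources["/static"] = Resource()  # accepted: the value type is the base class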
@@ -186,6 +194,7 @@ class SynapseHomeServer(HomeServer):
                 {
                     "/_matrix/client/api/v1": client_resource,
                     "/_matrix/client/r0": client_resource,
+                    "/_matrix/client/v3": client_resource,
                     "/_matrix/client/unstable": client_resource,
                     "/_matrix/client/v2_alpha": client_resource,
                     "/_matrix/client/versions": client_resource,
@@ -207,7 +216,7 @@ class SynapseHomeServer(HomeServer):
         if name == "consent":
             from synapse.rest.consent.consent_resource import ConsentResource

-            consent_resource = ConsentResource(self)
+            consent_resource: Resource = ConsentResource(self)
             if compress:
                 consent_resource = gz_wrap(consent_resource)
             resources.update({"/_matrix/consent": consent_resource})
@@ -237,7 +246,11 @@ class SynapseHomeServer(HomeServer):
             if self.config.server.enable_media_repo:
                 media_repo = self.get_media_repository_resource()
                 resources.update(
-                    {MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo}
+                    {
+                        MEDIA_R0_PREFIX: media_repo,
+                        MEDIA_V3_PREFIX: media_repo,
+                        LEGACY_MEDIA_PREFIX: media_repo,
+                    }
                 )
             elif name == "media":
                 raise ConfigError(
@@ -277,7 +290,7 @@ class SynapseHomeServer(HomeServer):

         return resources

-    def start_listening(self):
+    def start_listening(self) -> None:
         if self.config.redis.redis_enabled:
             # If redis is enabled we connect via the replication command handler
             # in the same way as the workers (since we're effectively a client
@@ -303,7 +316,9 @@ class SynapseHomeServer(HomeServer):
                     ReplicationStreamProtocolFactory(self),
                 )
                 for s in services:
-                    reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)
+                    self.get_reactor().addSystemEventTrigger(
+                        "before", "shutdown", s.stopListening
+                    )
             elif listener.type == "metrics":
                 if not self.config.metrics.enable_metrics:
                     logger.warning(
@@ -318,14 +333,13 @@ class SynapseHomeServer(HomeServer):
             logger.warning("Unrecognized listener type: %s", listener.type)


-def setup(config_options):
+def setup(config_options: List[str]) -> SynapseHomeServer:
     """
     Args:
-        config_options_options: The options passed to Synapse. Usually
-            `sys.argv[1:]`.
+        config_options_options: The options passed to Synapse. Usually `sys.argv[1:]`.

     Returns:
-        HomeServer
+        A homeserver instance.
     """
     try:
         config = HomeServerConfig.load_or_generate_config(
@@ -364,7 +378,7 @@ def setup(config_options):
     except Exception as e:
         handle_startup_exception(e)

-    async def start():
+    async def start() -> None:
         # Load the OIDC provider metadatas, if OIDC is enabled.
         if hs.config.oidc.oidc_enabled:
             oidc = hs.get_oidc_handler()
@@ -404,39 +418,15 @@ def format_config_error(e: ConfigError) -> Iterator[str]:

     yield ":\n %s" % (e.msg,)

-    e = e.__cause__
+    parent_e = e.__cause__
     indent = 1
-    while e:
+    while parent_e:
         indent += 1
-        yield ":\n%s%s" % (" " * indent, str(e))
-        e = e.__cause__
+        yield ":\n%s%s" % (" " * indent, str(parent_e))
+        parent_e = parent_e.__cause__


-def run(hs: HomeServer):
-    PROFILE_SYNAPSE = False
-    if PROFILE_SYNAPSE:
-
-        def profile(func):
-            from cProfile import Profile
-            from threading import current_thread
-
-            def profiled(*args, **kargs):
-                profile = Profile()
-                profile.enable()
-                func(*args, **kargs)
-                profile.disable()
-                ident = current_thread().ident
-                profile.dump_stats(
-                    "/tmp/%s.%s.%i.pstat" % (hs.hostname, func.__name__, ident)
-                )
-
-            return profiled
-
-        from twisted.python.threadpool import ThreadPool
-
-        ThreadPool._worker = profile(ThreadPool._worker)
-        reactor.run = profile(reactor.run)
-
+def run(hs: HomeServer) -> None:
     _base.start_reactor(
         "synapse-homeserver",
         soft_file_limit=hs.config.server.soft_file_limit,
@@ -448,7 +438,7 @@ def run(hs: HomeServer):
     )


-def main():
+def main() -> None:
     with LoggingContext("main"):
         # check base requirements
         check_requirements()
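The `format_config_error` change renames the loop variable so the `e` parameter keeps its `ConfigError` type while the `__cause__` chain is walked as plain exceptions. A self-contained sketch of the same traversal:

def describe(e: Exception) -> str:
    out = str(e)
    parent_e = e.__cause__
    indent = 1
    while parent_e:
        indent += 1
        out += ":\n%s%s" % ("  " * indent, str(parent_e))
        parent_e = parent_e.__cause__
    return out


try:
    try:
        int("not a number")
    except ValueError as inner:
        raise RuntimeError("config value invalid") from inner
except RuntimeError as err:
    print(describe(err))  # prints the error plus its chained cause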
synapse/app/phone_stats_home.py

@@ -15,11 +15,12 @@ import logging
 import math
 import resource
 import sys
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, List, Sized, Tuple

 from prometheus_client import Gauge

 from synapse.metrics.background_process_metrics import wrap_as_background_process
+from synapse.types import JsonDict

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -28,7 +29,7 @@ logger = logging.getLogger("synapse.app.homeserver")

 # Contains the list of processes we will be monitoring
 # currently either 0 or 1
-_stats_process = []
+_stats_process: List[Tuple[int, "resource.struct_rusage"]] = []

 # Gauges to expose monthly active user control metrics
 current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
@@ -45,9 +46,15 @@ registered_reserved_users_mau_gauge = Gauge(


 @wrap_as_background_process("phone_stats_home")
-async def phone_stats_home(hs: "HomeServer", stats, stats_process=_stats_process):
+async def phone_stats_home(
+    hs: "HomeServer",
+    stats: JsonDict,
+    stats_process: List[Tuple[int, "resource.struct_rusage"]] = _stats_process,
+) -> None:
     logger.info("Gathering stats for reporting")
     now = int(hs.get_clock().time())
+    # Ensure the homeserver has started.
+    assert hs.start_time is not None
     uptime = int(now - hs.start_time)
     if uptime < 0:
         uptime = 0
@@ -146,15 +153,15 @@ async def phone_stats_home(hs: "HomeServer", stats, stats_process=_stats_process
         logger.warning("Error reporting stats: %s", e)


-def start_phone_stats_home(hs: "HomeServer"):
+def start_phone_stats_home(hs: "HomeServer") -> None:
     """
     Start the background tasks which report phone home stats.
     """
     clock = hs.get_clock()

-    stats = {}
+    stats: JsonDict = {}

-    def performance_stats_init():
+    def performance_stats_init() -> None:
         _stats_process.clear()
         _stats_process.append(
             (int(hs.get_clock().time()), resource.getrusage(resource.RUSAGE_SELF))
@@ -170,10 +177,10 @@ def start_phone_stats_home(hs: "HomeServer"):
             hs.get_datastore().reap_monthly_active_users()

     @wrap_as_background_process("generate_monthly_active_users")
-    async def generate_monthly_active_users():
+    async def generate_monthly_active_users() -> None:
         current_mau_count = 0
         current_mau_count_by_service = {}
-        reserved_users = ()
+        reserved_users: Sized = ()
         store = hs.get_datastore()
         if hs.config.server.limit_usage_by_mau or hs.config.server.mau_stats_only:
             current_mau_count = await store.get_monthly_active_count()
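`_stats_process` is now typed as a list of `(timestamp, rusage)` pairs. A rough sketch (not Synapse code) of how such a snapshot pair is used to report a CPU-time delta on POSIX systems:

import resource
import time
from typing import List, Tuple

_stats_process: List[Tuple[int, "resource.struct_rusage"]] = []
_stats_process.append(
    (int(time.time()), resource.getrusage(resource.RUSAGE_SELF))
)

old_time, old_usage = _stats_process[0]
new_usage = resource.getrusage(resource.RUSAGE_SELF)
# CPU seconds (user + system) consumed since the stored snapshot.
used_cpu = (new_usage.ru_utime + new_usage.ru_stime) - (
    old_usage.ru_utime + old_usage.ru_stime
)
print("cpu seconds since snapshot: %.3f" % used_cpu)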
synapse/appservice/api.py

@@ -231,13 +231,32 @@ class ApplicationServiceApi(SimpleHttpClient):
                 json_body=body,
                 args={"access_token": service.hs_token},
             )
+            if logger.isEnabledFor(logging.DEBUG):
+                logger.debug(
+                    "push_bulk to %s succeeded! events=%s",
+                    uri,
+                    [event.get("event_id") for event in events],
+                )
             sent_transactions_counter.labels(service.id).inc()
             sent_events_counter.labels(service.id).inc(len(events))
             return True
         except CodeMessageException as e:
-            logger.warning("push_bulk to %s received %s", uri, e.code)
+            logger.warning(
+                "push_bulk to %s received code=%s msg=%s",
+                uri,
+                e.code,
+                e.msg,
+                exc_info=logger.isEnabledFor(logging.DEBUG),
+            )
         except Exception as ex:
-            logger.warning("push_bulk to %s threw exception %s", uri, ex)
+            logger.warning(
+                "push_bulk to %s threw exception(%s) %s args=%s",
+                uri,
+                type(ex).__name__,
+                ex,
+                ex.args,
+                exc_info=logger.isEnabledFor(logging.DEBUG),
+            )
         failed_transactions_counter.labels(service.id).inc()
         return False
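The rewritten warnings use an idiom worth noting: passing `exc_info=logger.isEnabledFor(logging.DEBUG)` keeps production logs to one line while debug-level deployments also get the full traceback. A standalone sketch:

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("example")

try:
    raise ConnectionError("appservice unreachable")  # stand-in failure
except Exception as ex:
    # exc_info accepts a bool: True attaches the current traceback.
    logger.warning(
        "push_bulk threw exception(%s) %s",
        type(ex).__name__,
        ex,
        exc_info=logger.isEnabledFor(logging.DEBUG),
    )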
synapse/config/_base.py

@@ -20,7 +20,18 @@ import os
 from collections import OrderedDict
 from hashlib import sha256
 from textwrap import dedent
-from typing import Any, Iterable, List, MutableMapping, Optional, Union
+from typing import (
+    Any,
+    Dict,
+    Iterable,
+    List,
+    MutableMapping,
+    Optional,
+    Tuple,
+    Type,
+    TypeVar,
+    Union,
+)

 import attr
 import jinja2
@@ -78,7 +89,7 @@ CONFIG_FILE_HEADER = """\
 """


-def path_exists(file_path):
+def path_exists(file_path: str) -> bool:
     """Check if a file exists

     Unlike os.path.exists, this throws an exception if there is an error
@@ -86,7 +97,7 @@ def path_exists(file_path):
     the parent dir).

     Returns:
-        bool: True if the file exists; False if not.
+        True if the file exists; False if not.
     """
     try:
         os.stat(file_path)
@@ -102,15 +113,15 @@ class Config:
     A configuration section, containing configuration keys and values.

     Attributes:
-        section (str): The section title of this config object, such as
+        section: The section title of this config object, such as
             "tls" or "logger". This is used to refer to it on the root
             logger (for example, `config.tls.some_option`). Must be
             defined in subclasses.
     """

-    section = None
+    section: str

-    def __init__(self, root_config=None):
+    def __init__(self, root_config: "RootConfig" = None):
         self.root = root_config

         # Get the path to the default Synapse template directory
@@ -119,7 +130,7 @@ class Config:
         )

     @staticmethod
-    def parse_size(value):
+    def parse_size(value: Union[str, int]) -> int:
         if isinstance(value, int):
             return value
         sizes = {"K": 1024, "M": 1024 * 1024}
@@ -162,15 +173,15 @@ class Config:
         return int(value) * size

     @staticmethod
-    def abspath(file_path):
+    def abspath(file_path: str) -> str:
         return os.path.abspath(file_path) if file_path else file_path

     @classmethod
-    def path_exists(cls, file_path):
+    def path_exists(cls, file_path: str) -> bool:
         return path_exists(file_path)

     @classmethod
-    def check_file(cls, file_path, config_name):
+    def check_file(cls, file_path: Optional[str], config_name: str) -> str:
         if file_path is None:
             raise ConfigError("Missing config for %s." % (config_name,))
         try:
@@ -183,7 +194,7 @@ class Config:
         return cls.abspath(file_path)

     @classmethod
-    def ensure_directory(cls, dir_path):
+    def ensure_directory(cls, dir_path: str) -> str:
         dir_path = cls.abspath(dir_path)
         os.makedirs(dir_path, exist_ok=True)
         if not os.path.isdir(dir_path):
@@ -191,7 +202,7 @@ class Config:
         return dir_path

     @classmethod
-    def read_file(cls, file_path, config_name):
+    def read_file(cls, file_path: Any, config_name: str) -> str:
         """Deprecated: call read_file directly"""
         return read_file(file_path, (config_name,))

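`parse_size` gains `Union[str, int] -> int` typing. A worked sketch of its semantics as suggested by the visible lines (ints pass through unchanged; `K`/`M` suffixes are powers of 1024):

from typing import Union


def parse_size(value: Union[str, int]) -> int:
    if isinstance(value, int):
        return value
    sizes = {"K": 1024, "M": 1024 * 1024}
    size = 1
    suffix = value[-1]
    if suffix in sizes:
        value = value[:-1]
        size = sizes[suffix]
    return int(value) * size


assert parse_size(1024) == 1024
assert parse_size("10K") == 10240
assert parse_size("10M") == 10 * 1024 * 1024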
@@ -284,6 +295,9 @@ class Config:
         return [env.get_template(filename) for filename in filenames]


+TRootConfig = TypeVar("TRootConfig", bound="RootConfig")
+
+
 class RootConfig:
     """
     Holder of an application's configuration.
@@ -308,7 +322,9 @@ class RootConfig:
                 raise Exception("Failed making %s: %r" % (config_class.section, e))
             setattr(self, config_class.section, conf)

-    def invoke_all(self, func_name: str, *args, **kwargs) -> MutableMapping[str, Any]:
+    def invoke_all(
+        self, func_name: str, *args: Any, **kwargs: Any
+    ) -> MutableMapping[str, Any]:
         """
         Invoke a function on all instantiated config objects this RootConfig is
         configured to use.
@@ -317,6 +333,7 @@ class RootConfig:
             func_name: Name of function to invoke
             *args
             **kwargs
+
         Returns:
             ordered dictionary of config section name and the result of the
             function from it.
@@ -332,7 +349,7 @@ class RootConfig:
         return res

     @classmethod
-    def invoke_all_static(cls, func_name: str, *args, **kwargs):
+    def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: any) -> None:
         """
         Invoke a static function on config objects this RootConfig is
         configured to use.
@@ -341,6 +358,7 @@ class RootConfig:
             func_name: Name of function to invoke
             *args
             **kwargs
+
         Returns:
             ordered dictionary of config section name and the result of the
             function from it.
@@ -351,16 +369,16 @@ class RootConfig:

     def generate_config(
         self,
-        config_dir_path,
-        data_dir_path,
-        server_name,
-        generate_secrets=False,
-        report_stats=None,
-        open_private_ports=False,
-        listeners=None,
-        tls_certificate_path=None,
-        tls_private_key_path=None,
-    ):
+        config_dir_path: str,
+        data_dir_path: str,
+        server_name: str,
+        generate_secrets: bool = False,
+        report_stats: Optional[bool] = None,
+        open_private_ports: bool = False,
+        listeners: Optional[List[dict]] = None,
+        tls_certificate_path: Optional[str] = None,
+        tls_private_key_path: Optional[str] = None,
+    ) -> str:
         """
         Build a default configuration file
@@ -368,27 +386,27 @@ class RootConfig:
         (eg with --generate_config).

         Args:
-            config_dir_path (str): The path where the config files are kept. Used to
+            config_dir_path: The path where the config files are kept. Used to
                 create filenames for things like the log config and the signing key.

-            data_dir_path (str): The path where the data files are kept. Used to create
+            data_dir_path: The path where the data files are kept. Used to create
                 filenames for things like the database and media store.

-            server_name (str): The server name. Used to initialise the server_name
+            server_name: The server name. Used to initialise the server_name
                 config param, but also used in the names of some of the config files.

-            generate_secrets (bool): True if we should generate new secrets for things
+            generate_secrets: True if we should generate new secrets for things
                 like the macaroon_secret_key. If False, these parameters will be left
                 unset.

-            report_stats (bool|None): Initial setting for the report_stats setting.
+            report_stats: Initial setting for the report_stats setting.
                 If None, report_stats will be left unset.

-            open_private_ports (bool): True to leave private ports (such as the non-TLS
+            open_private_ports: True to leave private ports (such as the non-TLS
                 HTTP listener) open to the internet.

-            listeners (list(dict)|None): A list of descriptions of the listeners
-                synapse should start with each of which specifies a port (str), a list of
+            listeners: A list of descriptions of the listeners synapse should
+                start with each of which specifies a port (int), a list of
                 resources (list(str)), tls (bool) and type (str). For example:
                 [{
                     "port": 8448,
@@ -403,16 +421,12 @@ class RootConfig:
                     "type": "http",
                 }],

-            database (str|None): The database type to configure, either `psycog2`
-                or `sqlite3`.
-
-            tls_certificate_path (str|None): The path to the tls certificate.
+            tls_certificate_path: The path to the tls certificate.

-            tls_private_key_path (str|None): The path to the tls private key.
+            tls_private_key_path: The path to the tls private key.

         Returns:
-            str: the yaml config file
+            The yaml config file
         """

         return CONFIG_FILE_HEADER + "\n\n".join(
@@ -432,12 +446,15 @@ class RootConfig:
         )

     @classmethod
-    def load_config(cls, description, argv):
+    def load_config(
+        cls: Type[TRootConfig], description: str, argv: List[str]
+    ) -> TRootConfig:
         """Parse the commandline and config files

         Doesn't support config-file-generation: used by the worker apps.

-        Returns: Config object.
+        Returns:
+            Config object.
         """
         config_parser = argparse.ArgumentParser(description=description)
         cls.add_arguments_to_parser(config_parser)
@@ -446,7 +463,7 @@ class RootConfig:
         return obj

     @classmethod
-    def add_arguments_to_parser(cls, config_parser):
+    def add_arguments_to_parser(cls, config_parser: argparse.ArgumentParser) -> None:
         """Adds all the config flags to an ArgumentParser.

         Doesn't support config-file-generation: used by the worker apps.
@@ -454,7 +471,7 @@ class RootConfig:
         Used for workers where we want to add extra flags/subcommands.

         Args:
-            config_parser (ArgumentParser): App description
+            config_parser: App description
         """

         config_parser.add_argument(
@@ -477,7 +494,9 @@ class RootConfig:
         cls.invoke_all_static("add_arguments", config_parser)

     @classmethod
-    def load_config_with_parser(cls, parser, argv):
+    def load_config_with_parser(
+        cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
+    ) -> Tuple[TRootConfig, argparse.Namespace]:
         """Parse the commandline and config files with the given parser

         Doesn't support config-file-generation: used by the worker apps.
@@ -485,13 +504,12 @@ class RootConfig:
         Used for workers where we want to add extra flags/subcommands.

         Args:
-            parser (ArgumentParser)
-            argv (list[str])
+            parser
+            argv

         Returns:
-            tuple[HomeServerConfig, argparse.Namespace]: Returns the parsed
-                config object and the parsed argparse.Namespace object from
-                `parser.parse_args(..)`
+            Returns the parsed config object and the parsed argparse.Namespace
+            object from parser.parse_args(..)`
         """

         obj = cls()
@@ -520,12 +538,15 @@ class RootConfig:
         return obj, config_args

     @classmethod
-    def load_or_generate_config(cls, description, argv):
+    def load_or_generate_config(
+        cls: Type[TRootConfig], description: str, argv: List[str]
+    ) -> Optional[TRootConfig]:
         """Parse the commandline and config files

         Supports generation of config files, so is used for the main homeserver app.

-        Returns: Config object, or None if --generate-config or --generate-keys was set
+        Returns:
+            Config object, or None if --generate-config or --generate-keys was set
         """
         parser = argparse.ArgumentParser(description=description)
         parser.add_argument(
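The `TRootConfig` TypeVar bound to `RootConfig` is what lets `load_config` and friends return the subclass they were invoked on rather than the base class. A compact, self-contained sketch of the pattern:

from typing import List, Type, TypeVar

T = TypeVar("T", bound="RootConfig")


class RootConfig:
    @classmethod
    def load_config(cls: Type[T], description: str, argv: List[str]) -> T:
        obj = cls()
        # ... parse argv and populate obj ...
        return obj


class HomeServerConfig(RootConfig):
    pass


config = HomeServerConfig.load_config("Synapse", [])
# mypy infers HomeServerConfig here, not RootConfig.
assert isinstance(config, HomeServerConfig)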
@ -680,16 +701,21 @@ class RootConfig:
|
||||||
|
|
||||||
return obj
|
return obj
|
||||||
|
|
||||||
def parse_config_dict(self, config_dict, config_dir_path=None, data_dir_path=None):
|
def parse_config_dict(
|
||||||
|
self,
|
||||||
|
config_dict: Dict[str, Any],
|
||||||
|
config_dir_path: Optional[str] = None,
|
||||||
|
         data_dir_path: Optional[str] = None,
     ) -> None:
         """Read the information from the config dict into this Config object.

         Args:
-            config_dict (dict): Configuration data, as read from the yaml
+            config_dict: Configuration data, as read from the yaml

-            config_dir_path (str): The path where the config files are kept. Used to
+            config_dir_path: The path where the config files are kept. Used to
                 create filenames for things like the log config and the signing key.

-            data_dir_path (str): The path where the data files are kept. Used to create
+            data_dir_path: The path where the data files are kept. Used to create
                 filenames for things like the database and media store.
         """
         self.invoke_all(
@@ -699,17 +725,20 @@ class RootConfig:
             data_dir_path=data_dir_path,
         )

-    def generate_missing_files(self, config_dict, config_dir_path):
+    def generate_missing_files(
+        self, config_dict: Dict[str, Any], config_dir_path: str
+    ) -> None:
         self.invoke_all("generate_files", config_dict, config_dir_path)


-def read_config_files(config_files):
+def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
     """Read the config files into a dict

     Args:
-        config_files (iterable[str]): A list of the config files to read
+        config_files: A list of the config files to read

-    Returns: dict
+    Returns:
+        The configuration dictionary.
     """
     specified_config = {}
     for config_file in config_files:
@@ -733,17 +762,17 @@ def read_config_files(config_files):
     return specified_config


-def find_config_files(search_paths):
+def find_config_files(search_paths: List[str]) -> List[str]:
     """Finds config files using a list of search paths. If a path is a file
     then that file path is added to the list. If a search path is a directory
     then all the "*.yaml" files in that directory are added to the list in
     sorted order.

     Args:
-        search_paths(list(str)): A list of paths to search.
+        search_paths: A list of paths to search.

     Returns:
-        list(str): A list of file paths.
+        A list of file paths.
     """

     config_files = []
@@ -777,7 +806,7 @@ def find_config_files(search_paths):
     return config_files


-@attr.s
+@attr.s(auto_attribs=True)
 class ShardedWorkerHandlingConfig:
     """Algorithm for choosing which instance is responsible for handling some
     sharded work.
@@ -787,7 +816,7 @@ class ShardedWorkerHandlingConfig:
     below).
     """

-    instances = attr.ib(type=List[str])
+    instances: List[str]

     def should_handle(self, instance_name: str, key: str) -> bool:
         """Whether this instance is responsible for handling the given key."""
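Worth a note on the `@attr.s(auto_attribs=True)` change above: with `auto_attribs`, a plain annotated class attribute becomes an attrs field, so `instances: List[str]` is equivalent to the old `instances = attr.ib(type=List[str])`. A minimal standalone sketch (the class name is illustrative, not from Synapse):

```python
from typing import List

import attr


@attr.s(auto_attribs=True)
class ShardedConfigSketch:
    # Equivalent to `instances = attr.ib(type=List[str])` in the old spelling.
    instances: List[str]


assert ShardedConfigSketch(instances=["worker1"]).instances == ["worker1"]
```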
@@ -1,4 +1,18 @@
-from typing import Any, Iterable, List, Optional
+import argparse
+from typing import (
+    Any,
+    Dict,
+    Iterable,
+    List,
+    MutableMapping,
+    Optional,
+    Tuple,
+    Type,
+    TypeVar,
+    Union,
+)
+
+import jinja2

 from synapse.config import (
     account_validity,
@@ -20,6 +34,7 @@ from synapse.config import (
     meow,
     metrics,
     modules,
+    oembed,
     oidc,
     password_auth_providers,
     push,
@@ -28,6 +43,7 @@ from synapse.config import (
     registration,
     repository,
     retention,
+    room,
     room_directory,
     saml2,
     server,
@@ -52,7 +68,9 @@ MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
 MISSING_REPORT_STATS_SPIEL: str
 MISSING_SERVER_NAME: str

-def path_exists(file_path: str): ...
+def path_exists(file_path: str) -> bool: ...
+
+TRootConfig = TypeVar("TRootConfig", bound="RootConfig")

 class RootConfig:
     server: server.ServerConfig
@@ -62,6 +80,7 @@ class RootConfig:
     logging: logger.LoggingConfig
     ratelimiting: ratelimiting.RatelimitConfig
     media: repository.ContentRepositoryConfig
+    oembed: oembed.OembedConfig
     captcha: captcha.CaptchaConfig
     voip: voip.VoipConfig
     registration: registration.RegistrationConfig
@@ -82,6 +101,7 @@ class RootConfig:
     authproviders: password_auth_providers.PasswordAuthProviderConfig
     push: push.PushConfig
     spamchecker: spam_checker.SpamCheckerConfig
+    room: room.RoomConfig
     groups: groups.GroupsConfig
     userdirectory: user_directory.UserDirectoryConfig
     consent: consent.ConsentConfig
@@ -89,72 +109,85 @@ class RootConfig:
     servernotices: server_notices.ServerNoticesConfig
     roomdirectory: room_directory.RoomDirectoryConfig
     thirdpartyrules: third_party_event_rules.ThirdPartyRulesConfig
-    tracer: tracer.TracerConfig
+    tracing: tracer.TracerConfig
     redis: redis.RedisConfig
     modules: modules.ModulesConfig
     caches: cache.CacheConfig
     federation: federation.FederationConfig
     retention: retention.RetentionConfig

-    config_classes: List = ...
+    config_classes: List[Type["Config"]] = ...
     def __init__(self) -> None: ...
-    def invoke_all(self, func_name: str, *args: Any, **kwargs: Any): ...
+    def invoke_all(
+        self, func_name: str, *args: Any, **kwargs: Any
+    ) -> MutableMapping[str, Any]: ...
     @classmethod
     def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: Any) -> None: ...
-    def __getattr__(self, item: str): ...
     def parse_config_dict(
         self,
-        config_dict: Any,
-        config_dir_path: Optional[Any] = ...,
-        data_dir_path: Optional[Any] = ...,
+        config_dict: Dict[str, Any],
+        config_dir_path: Optional[str] = ...,
+        data_dir_path: Optional[str] = ...,
     ) -> None: ...
-    read_config: Any = ...
     def generate_config(
         self,
         config_dir_path: str,
         data_dir_path: str,
         server_name: str,
         generate_secrets: bool = ...,
-        report_stats: Optional[str] = ...,
+        report_stats: Optional[bool] = ...,
         open_private_ports: bool = ...,
         listeners: Optional[Any] = ...,
-        database_conf: Optional[Any] = ...,
         tls_certificate_path: Optional[str] = ...,
         tls_private_key_path: Optional[str] = ...,
-    ): ...
+    ) -> str: ...
     @classmethod
-    def load_or_generate_config(cls, description: Any, argv: Any): ...
+    def load_or_generate_config(
+        cls: Type[TRootConfig], description: str, argv: List[str]
+    ) -> Optional[TRootConfig]: ...
     @classmethod
-    def load_config(cls, description: Any, argv: Any): ...
+    def load_config(
+        cls: Type[TRootConfig], description: str, argv: List[str]
+    ) -> TRootConfig: ...
     @classmethod
-    def add_arguments_to_parser(cls, config_parser: Any) -> None: ...
+    def add_arguments_to_parser(
+        cls, config_parser: argparse.ArgumentParser
+    ) -> None: ...
     @classmethod
-    def load_config_with_parser(cls, parser: Any, argv: Any): ...
+    def load_config_with_parser(
+        cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
+    ) -> Tuple[TRootConfig, argparse.Namespace]: ...
     def generate_missing_files(
         self, config_dict: dict, config_dir_path: str
     ) -> None: ...

 class Config:
     root: RootConfig
+    default_template_dir: str
     def __init__(self, root_config: Optional[RootConfig] = ...) -> None: ...
-    def __getattr__(self, item: str, from_root: bool = ...): ...
     @staticmethod
-    def parse_size(value: Any): ...
+    def parse_size(value: Union[str, int]) -> int: ...
     @staticmethod
-    def parse_duration(value: Any): ...
+    def parse_duration(value: Union[str, int]) -> int: ...
     @staticmethod
-    def abspath(file_path: Optional[str]): ...
+    def abspath(file_path: Optional[str]) -> str: ...
     @classmethod
-    def path_exists(cls, file_path: str): ...
+    def path_exists(cls, file_path: str) -> bool: ...
     @classmethod
-    def check_file(cls, file_path: str, config_name: str): ...
+    def check_file(cls, file_path: str, config_name: str) -> str: ...
     @classmethod
-    def ensure_directory(cls, dir_path: str): ...
+    def ensure_directory(cls, dir_path: str) -> str: ...
     @classmethod
-    def read_file(cls, file_path: str, config_name: str): ...
+    def read_file(cls, file_path: str, config_name: str) -> str: ...
+    def read_template(self, filenames: str) -> jinja2.Template: ...
+    def read_templates(
+        self,
+        filenames: List[str],
+        custom_template_directories: Optional[Iterable[str]] = None,
+    ) -> List[jinja2.Template]: ...

-def read_config_files(config_files: List[str]): ...
+def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]: ...
-def find_config_files(search_paths: List[str]): ...
+def find_config_files(search_paths: List[str]) -> List[str]: ...

 class ShardedWorkerHandlingConfig:
     instances: List[str]
@@ -15,7 +15,7 @@
 import os
 import re
 import threading
-from typing import Callable, Dict
+from typing import Callable, Dict, Optional

 from synapse.python_dependencies import DependencyException, check_requirements

@@ -217,7 +217,7 @@ class CacheConfig(Config):

         expiry_time = cache_config.get("expiry_time")
         if expiry_time:
-            self.expiry_time_msec = self.parse_duration(expiry_time)
+            self.expiry_time_msec: Optional[int] = self.parse_duration(expiry_time)
         else:
             self.expiry_time_msec = None
@@ -137,33 +137,14 @@ class EmailConfig(Config):
             if self.root.registration.account_threepid_delegate_email
             else ThreepidBehaviour.LOCAL
         )
-        # Prior to Synapse v1.4.0, there was another option that defined whether Synapse would
-        # use an identity server to password reset tokens on its behalf. We now warn the user
-        # if they have this set and tell them to use the updated option, while using a default
-        # identity server in the process.
-        self.using_identity_server_from_trusted_list = False
-        if (
-            not self.root.registration.account_threepid_delegate_email
-            and config.get("trust_identity_server_for_password_resets", False) is True
-        ):
-            # Use the first entry in self.trusted_third_party_id_servers instead
-            if self.trusted_third_party_id_servers:
-                # XXX: It's a little confusing that account_threepid_delegate_email is modified
-                # both in RegistrationConfig and here. We should factor this bit out
-
-                first_trusted_identity_server = self.trusted_third_party_id_servers[0]
-
-                # trusted_third_party_id_servers does not contain a scheme whereas
-                # account_threepid_delegate_email is expected to. Presume https
-                self.root.registration.account_threepid_delegate_email = (
-                    "https://" + first_trusted_identity_server
-                )
-                self.using_identity_server_from_trusted_list = True
-            else:
-                raise ConfigError(
-                    "Attempted to use an identity server from"
-                    '"trusted_third_party_id_servers" but it is empty.'
-                )
+        if config.get("trust_identity_server_for_password_resets"):
+            raise ConfigError(
+                'The config option "trust_identity_server_for_password_resets" '
+                'has been replaced by "account_threepid_delegate". '
+                "Please consult the sample config at docs/sample_config.yaml for "
+                "details and update your config file."
+            )

         self.local_threepid_handling_disabled_due_to_email_config = False
         if (
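For operators who hit the new `ConfigError`: the replacement is the `account_threepid_delegates` setting, which the `RegistrationConfig` hunk further down already parses. A minimal sketch of the equivalent parsed config; the identity-server URL is a placeholder, and note that unlike the removed flag the delegate must include its `https://` scheme explicitly:

```python
# Hypothetical parsed homeserver config; the URL is a placeholder.
config = {
    "account_threepid_delegates": {
        "email": "https://id.example.com",
    }
}

account_threepid_delegates = config.get("account_threepid_delegates") or {}
assert account_threepid_delegates.get("email") == "https://id.example.com"
```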
@@ -31,6 +31,8 @@ class JWTConfig(Config):
             self.jwt_secret = jwt_config["secret"]
             self.jwt_algorithm = jwt_config["algorithm"]

+            self.jwt_subject_claim = jwt_config.get("subject_claim", "sub")
+
             # The issuer and audiences are optional, if provided, it is asserted
             # that the claims exist on the JWT.
             self.jwt_issuer = jwt_config.get("issuer")
@@ -46,6 +48,7 @@ class JWTConfig(Config):
             self.jwt_enabled = False
             self.jwt_secret = None
             self.jwt_algorithm = None
+            self.jwt_subject_claim = None
             self.jwt_issuer = None
             self.jwt_audiences = None

@@ -88,6 +91,12 @@ class JWTConfig(Config):
         #
         #algorithm: "provided-by-your-issuer"

+        # Name of the claim containing a unique identifier for the user.
+        #
+        # Optional, defaults to `sub`.
+        #
+        #subject_claim: "sub"
+
         # The issuer to validate the "iss" claim against.
         #
         # Optional, if provided the "iss" claim will be required and
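For context, a sketch of how a configured `subject_claim` would be applied when validating a login token. This is not Synapse's actual login handler; it assumes the `pyjwt` package and placeholder values throughout:

```python
import jwt  # pyjwt, assumed here purely for illustration


def localpart_from_token(token: str, secret: str, subject_claim: str = "sub") -> str:
    # Mirrors the config knobs above: the algorithm is required, while
    # issuer/audience checks are only enforced when configured.
    claims = jwt.decode(token, secret, algorithms=["HS256"])
    localpart = claims.get(subject_claim)
    if localpart is None:
        raise ValueError(f"JWT missing claim {subject_claim!r}")
    return localpart
```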
@@ -16,6 +16,7 @@
 import hashlib
 import logging
 import os
+from typing import Any, Dict

 import attr
 import jsonschema
@@ -312,7 +313,7 @@ class KeyConfig(Config):
         )
         return keys

-    def generate_files(self, config, config_dir_path):
+    def generate_files(self, config: Dict[str, Any], config_dir_path: str) -> None:
         if "signing_key" in config:
             return
@@ -18,7 +18,7 @@ import os
 import sys
 import threading
 from string import Template
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Any, Dict

 import yaml
 from zope.interface import implementer
@@ -185,7 +185,7 @@ class LoggingConfig(Config):
             help=argparse.SUPPRESS,
         )

-    def generate_files(self, config, config_dir_path):
+    def generate_files(self, config: Dict[str, Any], config_dir_path: str) -> None:
         log_config = config.get("log_config")
         if log_config and not os.path.exists(log_config):
             log_file = self.abspath("homeserver.log")
@@ -39,9 +39,7 @@ class RegistrationConfig(Config):
         self.registration_shared_secret = config.get("registration_shared_secret")

         self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
-        self.trusted_third_party_id_servers = config.get(
-            "trusted_third_party_id_servers", ["matrix.org", "vector.im"]
-        )
         account_threepid_delegates = config.get("account_threepid_delegates") or {}
         self.account_threepid_delegate_email = account_threepid_delegates.get("email")
         self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
@@ -114,25 +112,32 @@ class RegistrationConfig(Config):
             session_lifetime = self.parse_duration(session_lifetime)
         self.session_lifetime = session_lifetime

-        # The `access_token_lifetime` applies for tokens that can be renewed
+        # The `refreshable_access_token_lifetime` applies for tokens that can be renewed
         # using a refresh token, as per MSC2918. If it is `None`, the refresh
         # token mechanism is disabled.
         #
         # Since it is incompatible with the `session_lifetime` mechanism, it is set to
         # `None` by default if a `session_lifetime` is set.
-        access_token_lifetime = config.get(
-            "access_token_lifetime", "5m" if session_lifetime is None else None
+        refreshable_access_token_lifetime = config.get(
+            "refreshable_access_token_lifetime",
+            "5m" if session_lifetime is None else None,
         )
-        if access_token_lifetime is not None:
-            access_token_lifetime = self.parse_duration(access_token_lifetime)
-        self.access_token_lifetime = access_token_lifetime
+        if refreshable_access_token_lifetime is not None:
+            refreshable_access_token_lifetime = self.parse_duration(
+                refreshable_access_token_lifetime
+            )
+        self.refreshable_access_token_lifetime = refreshable_access_token_lifetime

-        if session_lifetime is not None and access_token_lifetime is not None:
+        if (
+            session_lifetime is not None
+            and refreshable_access_token_lifetime is not None
+        ):
             raise ConfigError(
                 "The refresh token mechanism is incompatible with the "
                 "`session_lifetime` option. Consider disabling the "
                 "`session_lifetime` option or disabling the refresh token "
-                "mechanism by removing the `access_token_lifetime` option."
+                "mechanism by removing the `refreshable_access_token_lifetime` "
+                "option."
             )

         # The fallback template used for authenticating using a registration token
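A quick sketch of the renamed option's semantics, reusing the names from the hunk above (`parse_duration` returns milliseconds, per the `_base.pyi` stub earlier in this diff; the config dict is illustrative):

```python
config = {"refreshable_access_token_lifetime": "5m"}

# The "5m" default only applies when `session_lifetime` is unset; configuring
# both options now raises a ConfigError rather than silently misbehaving.
lifetime = config.get("refreshable_access_token_lifetime", "5m")

# Config.parse_duration("5m") yields milliseconds:
lifetime_ms = 5 * 60 * 1000
assert lifetime_ms == 300_000
```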
@@ -1,4 +1,5 @@
 # Copyright 2018 New Vector Ltd
+# Copyright 2021 Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -12,6 +13,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+from typing import List
+
+from synapse.types import JsonDict
 from synapse.util import glob_to_regex

 from ._base import Config, ConfigError
@@ -20,7 +24,7 @@ from ._base import Config, ConfigError
 class RoomDirectoryConfig(Config):
     section = "roomdirectory"

-    def read_config(self, config, **kwargs):
+    def read_config(self, config, **kwargs) -> None:
         self.enable_room_list_search = config.get("enable_room_list_search", True)

         alias_creation_rules = config.get("alias_creation_rules")
@@ -47,7 +51,7 @@ class RoomDirectoryConfig(Config):
             _RoomDirectoryRule("room_list_publication_rules", {"action": "allow"})
         ]

-    def generate_config_section(self, config_dir_path, server_name, **kwargs):
+    def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
         return """
         # Uncomment to disable searching the public room list. When disabled
         # blocks searching local and remote room lists for local and remote
@@ -113,16 +117,16 @@ class RoomDirectoryConfig(Config):
         #     action: allow
         """

-    def is_alias_creation_allowed(self, user_id, room_id, alias):
+    def is_alias_creation_allowed(self, user_id: str, room_id: str, alias: str) -> bool:
         """Checks if the given user is allowed to create the given alias

         Args:
-            user_id (str)
-            room_id (str)
-            alias (str)
+            user_id: The user to check.
+            room_id: The room ID for the alias.
+            alias: The alias being created.

         Returns:
-            boolean: True if user is allowed to create the alias
+            True if user is allowed to create the alias
         """
         for rule in self._alias_creation_rules:
             if rule.matches(user_id, room_id, [alias]):
@@ -130,16 +134,18 @@ class RoomDirectoryConfig(Config):

         return False

-    def is_publishing_room_allowed(self, user_id, room_id, aliases):
+    def is_publishing_room_allowed(
+        self, user_id: str, room_id: str, aliases: List[str]
+    ) -> bool:
         """Checks if the given user is allowed to publish the room

         Args:
-            user_id (str)
-            room_id (str)
-            aliases (list[str]): any local aliases associated with the room
+            user_id: The user ID publishing the room.
+            room_id: The room being published.
+            aliases: any local aliases associated with the room

         Returns:
-            boolean: True if user can publish room
+            True if user can publish room
         """
         for rule in self._room_list_publication_rules:
             if rule.matches(user_id, room_id, aliases):
@@ -153,11 +159,11 @@ class _RoomDirectoryRule:
     creating an alias or publishing a room.
     """

-    def __init__(self, option_name, rule):
+    def __init__(self, option_name: str, rule: JsonDict):
         """
         Args:
-            option_name (str): Name of the config option this rule belongs to
-            rule (dict): The rule as specified in the config
+            option_name: Name of the config option this rule belongs to
+            rule: The rule as specified in the config
         """

         action = rule["action"]
@@ -181,18 +187,18 @@ class _RoomDirectoryRule:
         except Exception as e:
             raise ConfigError("Failed to parse glob into regex") from e

-    def matches(self, user_id, room_id, aliases):
+    def matches(self, user_id: str, room_id: str, aliases: List[str]) -> bool:
         """Tests if this rule matches the given user_id, room_id and aliases.

         Args:
-            user_id (str)
-            room_id (str)
-            aliases (list[str]): The associated aliases to the room. Will be a
-                single element for testing alias creation, and can be empty for
-                testing room publishing.
+            user_id: The user ID to check.
+            room_id: The room ID to check.
+            aliases: The associated aliases to the room. Will be a single element
+                for testing alias creation, and can be empty for testing room
+                publishing.

         Returns:
-            boolean
+            True if the rule matches.
         """

         # Note: The regexes are anchored at both ends
@@ -421,7 +421,7 @@ class ServerConfig(Config):
         # before redacting them.
         redaction_retention_period = config.get("redaction_retention_period", "7d")
         if redaction_retention_period is not None:
-            self.redaction_retention_period = self.parse_duration(
+            self.redaction_retention_period: Optional[int] = self.parse_duration(
                 redaction_retention_period
             )
         else:
@@ -430,7 +430,7 @@ class ServerConfig(Config):
         # How long to keep entries in the `users_ips` table.
         user_ips_max_age = config.get("user_ips_max_age", "28d")
         if user_ips_max_age is not None:
-            self.user_ips_max_age = self.parse_duration(user_ips_max_age)
+            self.user_ips_max_age: Optional[int] = self.parse_duration(user_ips_max_age)
         else:
             self.user_ips_max_age = None
@@ -14,7 +14,6 @@

 import logging
 import os
-from datetime import datetime
 from typing import List, Optional, Pattern

 from OpenSSL import SSL, crypto
@@ -133,55 +132,6 @@ class TlsConfig(Config):
         self.tls_certificate: Optional[crypto.X509] = None
         self.tls_private_key: Optional[crypto.PKey] = None

-    def is_disk_cert_valid(self, allow_self_signed=True):
-        """
-        Is the certificate we have on disk valid, and if so, for how long?
-
-        Args:
-            allow_self_signed (bool): Should we allow the certificate we
-                read to be self signed?
-
-        Returns:
-            int: Days remaining of certificate validity.
-            None: No certificate exists.
-        """
-        if not os.path.exists(self.tls_certificate_file):
-            return None
-
-        try:
-            with open(self.tls_certificate_file, "rb") as f:
-                cert_pem = f.read()
-        except Exception as e:
-            raise ConfigError(
-                "Failed to read existing certificate file %s: %s"
-                % (self.tls_certificate_file, e)
-            )
-
-        try:
-            tls_certificate = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
-        except Exception as e:
-            raise ConfigError(
-                "Failed to parse existing certificate file %s: %s"
-                % (self.tls_certificate_file, e)
-            )
-
-        if not allow_self_signed:
-            if tls_certificate.get_subject() == tls_certificate.get_issuer():
-                raise ValueError(
-                    "TLS Certificate is self signed, and this is not permitted"
-                )
-
-        # YYYYMMDDhhmmssZ -- in UTC
-        expiry_data = tls_certificate.get_notAfter()
-        if expiry_data is None:
-            raise ValueError(
-                "TLS Certificate has no expiry date, and this is not permitted"
-            )
-        expires_on = datetime.strptime(expiry_data.decode("ascii"), "%Y%m%d%H%M%SZ")
-        now = datetime.utcnow()
-        days_remaining = (expires_on - now).days
-        return days_remaining
-
     def read_certificate_from_disk(self):
         """
         Read the certificates and private key from disk.
@@ -263,8 +213,8 @@ class TlsConfig(Config):
         #
         #federation_certificate_verification_whitelist:
         #  - lon.example.com
-        #  - *.domain.com
-        #  - *.onion
+        #  - "*.domain.com"
+        #  - "*.onion"

         # List of custom certificate authorities for federation traffic.
         #
@@ -295,7 +245,7 @@ class TlsConfig(Config):
         cert_path = self.tls_certificate_file
         logger.info("Loading TLS certificate from %s", cert_path)
         cert_pem = self.read_file(cert_path, "tls_certificate_path")
-        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
+        cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem.encode())

         return cert
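Two details in this hunk are easy to miss: the unquoted `*.domain.com` entries were invalid YAML (a bare `*` introduces an alias), hence the added quotes; and `Config.read_file` returns `str` while pyOpenSSL wants a `bytes` buffer, hence the `.encode()`. A minimal illustration of the latter, with a placeholder certificate path:

```python
from OpenSSL import crypto

# Placeholder path; Synapse's read_file likewise returns text, not bytes.
with open("/path/to/cert.pem") as f:
    cert_pem: str = f.read()

# pyOpenSSL's load_certificate expects a bytes buffer, so encode the text first.
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem.encode())
print(cert.get_notAfter())  # e.g. b"20261231235959Z"
```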
@@ -53,8 +53,8 @@ class UserDirectoryConfig(Config):
         #   indexes were (re)built was before Synapse 1.44, you'll have to
         #   rebuild the indexes in order to search through all known users.
         #   These indexes are built the first time Synapse starts; admins can
-        #   manually trigger a rebuild following the instructions at
-        #       https://matrix-org.github.io/synapse/latest/user_directory.html
+        #   manually trigger a rebuild via API following the instructions at
+        #       https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
         #
         # Uncomment to return search results containing all known users, even if that
         # user does not share a room with the requester.
@@ -1,5 +1,4 @@
-# Copyright 2014-2016 OpenMarket Ltd
-# Copyright 2017, 2018 New Vector Ltd
+# Copyright 2014-2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -120,16 +119,6 @@ class VerifyJsonRequest:
             key_ids=key_ids,
         )

-    def to_fetch_key_request(self) -> "_FetchKeyRequest":
-        """Create a key fetch request for all keys needed to satisfy the
-        verification request.
-        """
-        return _FetchKeyRequest(
-            server_name=self.server_name,
-            minimum_valid_until_ts=self.minimum_valid_until_ts,
-            key_ids=self.key_ids,
-        )
-

 class KeyLookupError(ValueError):
     pass
@@ -179,8 +168,22 @@ class Keyring:
             clock=hs.get_clock(),
             process_batch_callback=self._inner_fetch_key_requests,
         )
-        self.verify_key = get_verify_key(hs.signing_key)
-        self.hostname = hs.hostname
+        self._hostname = hs.hostname
+
+        # build a FetchKeyResult for each of our own keys, to shortcircuit the
+        # fetcher.
+        self._local_verify_keys: Dict[str, FetchKeyResult] = {}
+        for key_id, key in hs.config.key.old_signing_keys.items():
+            self._local_verify_keys[key_id] = FetchKeyResult(
+                verify_key=key, valid_until_ts=key.expired_ts
+            )
+
+        vk = get_verify_key(hs.signing_key)
+        self._local_verify_keys[f"{vk.alg}:{vk.version}"] = FetchKeyResult(
+            verify_key=vk,
+            valid_until_ts=2 ** 63,  # fake future timestamp
+        )

     async def verify_json_for_server(
         self,
@@ -267,22 +270,32 @@ class Keyring:
                 Codes.UNAUTHORIZED,
             )

-        # If we are the originating server don't fetch verify key for self over federation
-        if verify_request.server_name == self.hostname:
-            await self._process_json(self.verify_key, verify_request)
-            return
+        found_keys: Dict[str, FetchKeyResult] = {}

-        # Add the keys we need to verify to the queue for retrieval. We queue
-        # up requests for the same server so we don't end up with many in flight
-        # requests for the same keys.
-        key_request = verify_request.to_fetch_key_request()
-        found_keys_by_server = await self._server_queue.add_to_queue(
-            key_request, key=verify_request.server_name
-        )
+        # If we are the originating server, short-circuit the key-fetch for any keys
+        # we already have
+        if verify_request.server_name == self._hostname:
+            for key_id in verify_request.key_ids:
+                if key_id in self._local_verify_keys:
+                    found_keys[key_id] = self._local_verify_keys[key_id]

-        # Since we batch up requests the returned set of keys may contain keys
-        # from other servers, so we pull out only the ones we care about.s
-        found_keys = found_keys_by_server.get(verify_request.server_name, {})
+        key_ids_to_find = set(verify_request.key_ids) - found_keys.keys()
+        if key_ids_to_find:
+            # Add the keys we need to verify to the queue for retrieval. We queue
+            # up requests for the same server so we don't end up with many in flight
+            # requests for the same keys.
+            key_request = _FetchKeyRequest(
+                server_name=verify_request.server_name,
+                minimum_valid_until_ts=verify_request.minimum_valid_until_ts,
+                key_ids=list(key_ids_to_find),
+            )
+            found_keys_by_server = await self._server_queue.add_to_queue(
+                key_request, key=verify_request.server_name
+            )
+
+            # Since we batch up requests the returned set of keys may contain keys
+            # from other servers, so we pull out only the ones we care about.
+            found_keys.update(found_keys_by_server.get(verify_request.server_name, {}))

         # Verify each signature we got valid keys for, raising if we can't
         # verify any of them.
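The heart of the new short-circuit is plain set arithmetic over key IDs; a standalone sketch with fabricated key IDs:

```python
# Keys the verification request asks for, vs. keys this server holds locally.
requested_key_ids = ["ed25519:auto", "ed25519:old0"]
local_verify_keys = {"ed25519:auto": "<FetchKeyResult for our current key>"}

# Satisfy what we can locally...
found_keys = {
    key_id: local_verify_keys[key_id]
    for key_id in requested_key_ids
    if key_id in local_verify_keys
}
# ...and only the remainder goes to the batched fetcher queue.
key_ids_to_find = set(requested_key_ids) - found_keys.keys()
assert key_ids_to_find == {"ed25519:old0"}
```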
@@ -128,14 +128,12 @@ class EventBuilder:
         )

         format_version = self.room_version.event_format
+        # The types of auth/prev events changes between event versions.
+        prev_events: Union[List[str], List[Tuple[str, Dict[str, str]]]]
+        auth_events: Union[List[str], List[Tuple[str, Dict[str, str]]]]
         if format_version == EventFormatVersions.V1:
-            # The types of auth/prev events changes between event versions.
-            auth_events: Union[
-                List[str], List[Tuple[str, Dict[str, str]]]
-            ] = await self._store.add_event_hashes(auth_event_ids)
-            prev_events: Union[
-                List[str], List[Tuple[str, Dict[str, str]]]
-            ] = await self._store.add_event_hashes(prev_event_ids)
+            auth_events = await self._store.add_event_hashes(auth_event_ids)
+            prev_events = await self._store.add_event_hashes(prev_event_ids)
         else:
             auth_events = auth_event_ids
             prev_events = prev_event_ids
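The `Union` annotations above cover two concrete shapes; an illustrative sketch with fabricated IDs and hashes:

```python
# Room version 1 references prev/auth events as (event_id, hashes) pairs;
# later room versions use bare event IDs.
v1_prev_events = [("$abc:example.com", {"sha256": "fabricatedbase64hash"})]
v3_prev_events = ["$fabricatedEventIdWithoutDomain"]
```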
@@ -1,4 +1,5 @@
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -392,15 +393,16 @@ class EventClientSerializer:
         self,
         event: Union[JsonDict, EventBase],
         time_now: int,
-        bundle_aggregations: bool = True,
+        bundle_relations: bool = True,
         **kwargs: Any,
     ) -> JsonDict:
         """Serializes a single event.

         Args:
-            event
+            event: The event being serialized.
             time_now: The current time in milliseconds
-            bundle_aggregations: Whether to bundle in related events
+            bundle_relations: Whether to include the bundled relations for this
+                event.
             **kwargs: Arguments to pass to `serialize_event`

         Returns:
@@ -410,77 +412,93 @@ class EventClientSerializer:
         if not isinstance(event, EventBase):
             return event

-        event_id = event.event_id
         serialized_event = serialize_event(event, time_now, **kwargs)

         # If MSC1849 is enabled then we need to look if there are any relations
         # we need to bundle in with the event.
         # Do not bundle relations if the event has been redacted
         if not event.internal_metadata.is_redacted() and (
-            self._msc1849_enabled and bundle_aggregations
+            self._msc1849_enabled and bundle_relations
         ):
-            annotations = await self.store.get_aggregation_groups_for_event(event_id)
-            references = await self.store.get_relations_for_event(
-                event_id, RelationTypes.REFERENCE, direction="f"
-            )
-
-            if annotations.chunk:
-                r = serialized_event["unsigned"].setdefault("m.relations", {})
-                r[RelationTypes.ANNOTATION] = annotations.to_dict()
-
-            if references.chunk:
-                r = serialized_event["unsigned"].setdefault("m.relations", {})
-                r[RelationTypes.REFERENCE] = references.to_dict()
-
-            edit = None
-            if event.type == EventTypes.Message:
-                edit = await self.store.get_applicable_edit(event_id)
-
-            if edit:
-                # If there is an edit replace the content, preserving existing
-                # relations.
-
-                # Ensure we take copies of the edit content, otherwise we risk modifying
-                # the original event.
-                edit_content = edit.content.copy()
-
-                # Unfreeze the event content if necessary, so that we may modify it below
-                edit_content = unfreeze(edit_content)
-                serialized_event["content"] = edit_content.get("m.new_content", {})
-
-                # Check for existing relations
-                relations = event.content.get("m.relates_to")
-                if relations:
-                    # Keep the relations, ensuring we use a dict copy of the original
-                    serialized_event["content"]["m.relates_to"] = relations.copy()
-                else:
-                    serialized_event["content"].pop("m.relates_to", None)
-
-                r = serialized_event["unsigned"].setdefault("m.relations", {})
-                r[RelationTypes.REPLACE] = {
-                    "event_id": edit.event_id,
-                    "origin_server_ts": edit.origin_server_ts,
-                    "sender": edit.sender,
-                }
-
-            # If this event is the start of a thread, include a summary of the replies.
-            if self._msc3440_enabled:
-                (
-                    thread_count,
-                    latest_thread_event,
-                ) = await self.store.get_thread_summary(event_id)
-                if latest_thread_event:
-                    r = serialized_event["unsigned"].setdefault("m.relations", {})
-                    r[RelationTypes.THREAD] = {
-                        # Don't bundle aggregations as this could recurse forever.
-                        "latest_event": await self.serialize_event(
-                            latest_thread_event, time_now, bundle_aggregations=False
-                        ),
-                        "count": thread_count,
-                    }
+            await self._injected_bundled_relations(event, time_now, serialized_event)

         return serialized_event

+    async def _injected_bundled_relations(
+        self, event: EventBase, time_now: int, serialized_event: JsonDict
+    ) -> None:
+        """Potentially injects bundled relations into the unsigned portion of the serialized event.
+
+        Args:
+            event: The event being serialized.
+            time_now: The current time in milliseconds
+            serialized_event: The serialized event which may be modified.
+
+        """
+        event_id = event.event_id
+
+        # The bundled relations to include.
+        relations = {}
+
+        annotations = await self.store.get_aggregation_groups_for_event(event_id)
+        if annotations.chunk:
+            relations[RelationTypes.ANNOTATION] = annotations.to_dict()
+
+        references = await self.store.get_relations_for_event(
+            event_id, RelationTypes.REFERENCE, direction="f"
+        )
+        if references.chunk:
+            relations[RelationTypes.REFERENCE] = references.to_dict()
+
+        edit = None
+        if event.type == EventTypes.Message:
+            edit = await self.store.get_applicable_edit(event_id)
+
+        if edit:
+            # If there is an edit replace the content, preserving existing
+            # relations.
+
+            # Ensure we take copies of the edit content, otherwise we risk modifying
+            # the original event.
+            edit_content = edit.content.copy()
+
+            # Unfreeze the event content if necessary, so that we may modify it below
+            edit_content = unfreeze(edit_content)
+            serialized_event["content"] = edit_content.get("m.new_content", {})
+
+            # Check for existing relations
+            relates_to = event.content.get("m.relates_to")
+            if relates_to:
+                # Keep the relations, ensuring we use a dict copy of the original
+                serialized_event["content"]["m.relates_to"] = relates_to.copy()
+            else:
+                serialized_event["content"].pop("m.relates_to", None)
+
+            relations[RelationTypes.REPLACE] = {
+                "event_id": edit.event_id,
+                "origin_server_ts": edit.origin_server_ts,
+                "sender": edit.sender,
+            }
+
+        # If this event is the start of a thread, include a summary of the replies.
+        if self._msc3440_enabled:
+            (
+                thread_count,
+                latest_thread_event,
+            ) = await self.store.get_thread_summary(event_id)
+            if latest_thread_event:
+                relations[RelationTypes.THREAD] = {
+                    # Don't bundle relations as this could recurse forever.
+                    "latest_event": await self.serialize_event(
+                        latest_thread_event, time_now, bundle_relations=False
+                    ),
+                    "count": thread_count,
+                }
+
+        # If any bundled relations were found, include them.
+        if relations:
+            serialized_event["unsigned"].setdefault("m.relations", {}).update(relations)
+
     async def serialize_events(
         self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any
     ) -> List[JsonDict]:
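For reference, the `unsigned` block this helper populates ends up shaped roughly as follows. All IDs, timestamps and counts are fabricated, and the thread relation's `io.element.thread` key is an assumption about the unstable MSC3440 identifier in use at this point:

```python
serialized_event = {
    "event_id": "$parent",
    "content": {"body": "edited body"},
    "unsigned": {
        "m.relations": {
            "m.annotation": {
                "chunk": [{"type": "m.reaction", "key": "+1", "count": 3}]
            },
            "m.replace": {
                "event_id": "$edit",
                "origin_server_ts": 1637000000000,
                "sender": "@alice:example.com",
            },
            "io.element.thread": {  # assumed unstable MSC3440 identifier
                "latest_event": {"event_id": "$latest_reply"},
                "count": 2,
            },
        }
    },
}
```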
@@ -277,6 +277,58 @@ class FederationClient(FederationBase):

         return pdus

+    async def get_pdu_from_destination_raw(
+        self,
+        destination: str,
+        event_id: str,
+        room_version: RoomVersion,
+        outlier: bool = False,
+        timeout: Optional[int] = None,
+    ) -> Optional[EventBase]:
+        """Requests the PDU with given origin and ID from the remote home
+        server. Does not have any caching or rate limiting!
+
+        Args:
+            destination: Which homeserver to query
+            event_id: event to fetch
+            room_version: version of the room
+            outlier: Indicates whether the PDU is an `outlier`, i.e. if
+                it's from an arbitrary point in the context as opposed to part
+                of the current block of PDUs. Defaults to `False`
+            timeout: How long to try (in ms) each destination for before
+                moving to the next destination. None indicates no timeout.
+
+        Returns:
+            The requested PDU, or None if we were unable to find it.
+
+        Raises:
+            SynapseError, NotRetryingDestination, FederationDeniedError
+        """
+        transaction_data = await self.transport_layer.get_event(
+            destination, event_id, timeout=timeout
+        )
+
+        logger.debug(
+            "retrieved event id %s from %s: %r",
+            event_id,
+            destination,
+            transaction_data,
+        )
+
+        pdu_list: List[EventBase] = [
+            event_from_pdu_json(p, room_version, outlier=outlier)
+            for p in transaction_data["pdus"]
+        ]
+
+        if pdu_list and pdu_list[0]:
+            pdu = pdu_list[0]
+
+            # Check signatures are correct.
+            signed_pdu = await self._check_sigs_and_hash(room_version, pdu)
+            return signed_pdu
+
+        return None
+
     async def get_pdu(
         self,
         destinations: Iterable[str],
@@ -321,30 +373,14 @@ class FederationClient(FederationBase):
                 continue

             try:
-                transaction_data = await self.transport_layer.get_event(
-                    destination, event_id, timeout=timeout
+                signed_pdu = await self.get_pdu_from_destination_raw(
+                    destination=destination,
+                    event_id=event_id,
+                    room_version=room_version,
+                    outlier=outlier,
+                    timeout=timeout,
                 )

-                logger.debug(
-                    "retrieved event id %s from %s: %r",
-                    event_id,
-                    destination,
-                    transaction_data,
-                )
-
-                pdu_list: List[EventBase] = [
-                    event_from_pdu_json(p, room_version, outlier=outlier)
-                    for p in transaction_data["pdus"]
-                ]
-
-                if pdu_list and pdu_list[0]:
-                    pdu = pdu_list[0]
-
-                    # Check signatures are correct.
-                    signed_pdu = await self._check_sigs_and_hash(room_version, pdu)
-
-                    break
-
                 pdu_attempts[destination] = now

             except SynapseError as e:
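A hedged usage sketch of the newly extracted helper (the destination, event ID and timeout are placeholders; this must run inside Synapse's async machinery, where `client` is a `FederationClient` and `room_version` a `RoomVersion`):

```python
pdu = await client.get_pdu_from_destination_raw(
    destination="remote.example.com",           # placeholder server name
    event_id="$someevent:remote.example.com",   # placeholder event ID
    room_version=room_version,
    timeout=10_000,  # milliseconds, per the docstring
)
if pdu is None:
    # The destination answered but did not return the event. Note that unlike
    # get_pdu() there is no caching, rate limiting, or multi-destination retry.
    pass
```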
@@ -40,6 +40,8 @@ from typing import TYPE_CHECKING, Optional, Tuple

 from signedjson.sign import sign_json

+from twisted.internet.defer import Deferred
+
 from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.types import JsonDict, get_domain_from_id
@@ -166,7 +168,7 @@ class GroupAttestionRenewer:

         return {}

-    def _start_renew_attestations(self) -> None:
+    def _start_renew_attestations(self) -> "Deferred[None]":
         return run_as_background_process("renew_attestations", self._renew_attestations)

     async def _renew_attestations(self) -> None:
@@ -234,7 +234,7 @@ class ExfiltrationWriter(metaclass=abc.ABCMeta):

     @abc.abstractmethod
     def write_invite(
-        self, room_id: str, event: EventBase, state: StateMap[dict]
+        self, room_id: str, event: EventBase, state: StateMap[EventBase]
     ) -> None:
         """Write an invite for the room, with associated invite state.

@@ -248,7 +248,7 @@ class ExfiltrationWriter(metaclass=abc.ABCMeta):

     @abc.abstractmethod
     def write_knock(
-        self, room_id: str, event: EventBase, state: StateMap[dict]
+        self, room_id: str, event: EventBase, state: StateMap[EventBase]
     ) -> None:
         """Write a knock for the room, with associated knock state.
@ -188,7 +188,7 @@ class ApplicationServicesHandler:
|
||||||
self,
|
self,
|
||||||
stream_key: str,
|
stream_key: str,
|
||||||
new_token: Union[int, RoomStreamToken],
|
new_token: Union[int, RoomStreamToken],
|
||||||
users: Optional[Collection[Union[str, UserID]]] = None,
|
users: Collection[Union[str, UserID]],
|
||||||
) -> None:
|
) -> None:
|
||||||
"""
|
"""
|
||||||
This is called by the notifier in the background when an ephemeral event is handled
|
This is called by the notifier in the background when an ephemeral event is handled
|
||||||
|
@ -203,7 +203,9 @@ class ApplicationServicesHandler:
|
||||||
value for `stream_key` will cause this function to return early.
|
value for `stream_key` will cause this function to return early.
|
||||||
|
|
||||||
Ephemeral events will only be pushed to appservices that have opted into
|
Ephemeral events will only be pushed to appservices that have opted into
|
||||||
them.
|
receiving them by setting `push_ephemeral` to true in their registration
|
||||||
|
file. Note that while MSC2409 is experimental, this option is called
|
||||||
|
`de.sorunome.msc2409.push_ephemeral`.
|
||||||
|
|
||||||
Appservices will only receive ephemeral events that fall within their
|
Appservices will only receive ephemeral events that fall within their
|
||||||
registered user and room namespaces.
|
registered user and room namespaces.
|
||||||
|
@ -214,6 +216,7 @@ class ApplicationServicesHandler:
|
||||||
synapse/handlers/appservice.py

         if not self.notify_appservices:
             return

         # Ignore any unsupported streams
         if stream_key not in ("typing_key", "receipt_key", "presence_key"):
             return

@@ -230,18 +233,25 @@ class ApplicationServicesHandler:
         # Additional context: https://github.com/matrix-org/synapse/pull/11137
         assert isinstance(new_token, int)

+        # Check whether there are any appservices which have registered to receive
+        # ephemeral events.
+        #
+        # Note that whether these events are actually relevant to these appservices
+        # is decided later on.
         services = [
             service
             for service in self.store.get_app_services()
             if service.supports_ephemeral
         ]
         if not services:
+            # Bail out early if none of the target appservices have explicitly registered
+            # to receive these ephemeral events.
             return

         # We only start a new background process if necessary rather than
         # optimistically (to cut down on overhead).
         self._notify_interested_services_ephemeral(
-            services, stream_key, new_token, users or []
+            services, stream_key, new_token, users
         )

     @wrap_as_background_process("notify_interested_services_ephemeral")
@@ -252,7 +262,7 @@ class ApplicationServicesHandler:
         new_token: int,
         users: Collection[Union[str, UserID]],
     ) -> None:
-        logger.debug("Checking interested services for %s" % (stream_key))
+        logger.debug("Checking interested services for %s", stream_key)
         with Measure(self.clock, "notify_interested_services_ephemeral"):
             for service in services:
                 if stream_key == "typing_key":
@@ -345,6 +355,9 @@ class ApplicationServicesHandler:

         Args:
             service: The application service to check for which events it should receive.
+            new_token: A receipts event stream token. Purely used to double-check that the
+                from_token we pull from the database isn't greater than or equal to this
+                token. Prevents accidentally duplicating work.

         Returns:
             A list of JSON dictionaries containing data derived from the read receipts that
@@ -382,6 +395,9 @@ class ApplicationServicesHandler:
         Args:
             service: The application service that ephemeral events are being sent to.
             users: The users that should receive the presence update.
+            new_token: A presence update stream token. Purely used to double-check that the
+                from_token we pull from the database isn't greater than or equal to this
+                token. Prevents accidentally duplicating work.

         Returns:
             A list of json dictionaries containing data derived from the presence events

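Note on the docstring additions above: `new_token` exists purely to avoid duplicated work when several notifications race. A minimal sketch of that guard, assuming `from_token` is the last stream position already handed to the appservice (`should_process` is a hypothetical helper, not Synapse's actual storage code):

    from typing import Optional

    def should_process(from_token: Optional[int], new_token: int) -> bool:
        # If the position pulled from the database has already reached (or
        # passed) the token we were woken for, another worker beat us to it.
        if from_token is not None and from_token >= new_token:
            return False
        return True

    assert should_process(3, 5)      # genuinely new data
    assert not should_process(5, 5)  # already processed: skip
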
synapse/handlers/auth.py

@@ -790,10 +790,10 @@ class AuthHandler:
             (
                 new_refresh_token,
                 new_refresh_token_id,
-            ) = await self.get_refresh_token_for_user_id(
+            ) = await self.create_refresh_token_for_user_id(
                 user_id=existing_token.user_id, device_id=existing_token.device_id
             )
-            access_token = await self.get_access_token_for_user_id(
+            access_token = await self.create_access_token_for_user_id(
                 user_id=existing_token.user_id,
                 device_id=existing_token.device_id,
                 valid_until_ms=valid_until_ms,
@@ -832,7 +832,7 @@ class AuthHandler:

         return True

-    async def get_refresh_token_for_user_id(
+    async def create_refresh_token_for_user_id(
         self,
         user_id: str,
         device_id: str,
@@ -855,7 +855,7 @@ class AuthHandler:
         )
         return refresh_token, refresh_token_id

-    async def get_access_token_for_user_id(
+    async def create_access_token_for_user_id(
         self,
         user_id: str,
         device_id: Optional[str],
@@ -1828,13 +1828,6 @@ def load_single_legacy_password_auth_provider(
         logger.error("Error while initializing %r: %s", module, e)
         raise

-    # The known hooks. If a module implements a method who's name appears in this set
-    # we'll want to register it
-    password_auth_provider_methods = {
-        "check_3pid_auth",
-        "on_logged_out",
-    }
-
     # All methods that the module provides should be async, but this wasn't enforced
     # in the old module system, so we wrap them if needed
     def async_wrapper(f: Optional[Callable]) -> Optional[Callable[..., Awaitable]]:
@@ -1919,11 +1912,14 @@ def load_single_legacy_password_auth_provider(

         return run

-    # populate hooks with the implemented methods, wrapped with async_wrapper
-    hooks = {
-        hook: async_wrapper(getattr(provider, hook, None))
-        for hook in password_auth_provider_methods
-    }
+    # If the module has these methods implemented, then we pull them out
+    # and register them as hooks.
+    check_3pid_auth_hook: Optional[CHECK_3PID_AUTH_CALLBACK] = async_wrapper(
+        getattr(provider, "check_3pid_auth", None)
+    )
+    on_logged_out_hook: Optional[ON_LOGGED_OUT_CALLBACK] = async_wrapper(
+        getattr(provider, "on_logged_out", None)
+    )

     supported_login_types = {}
     # call get_supported_login_types and add that to the dict
@@ -1950,7 +1946,11 @@ def load_single_legacy_password_auth_provider(
     # need to use a tuple here for ("password",) not a list since lists aren't hashable
     auth_checkers[(LoginType.PASSWORD, ("password",))] = check_password

-    api.register_password_auth_provider_callbacks(hooks, auth_checkers=auth_checkers)
+    api.register_password_auth_provider_callbacks(
+        check_3pid_auth=check_3pid_auth_hook,
+        on_logged_out=on_logged_out_hook,
+        auth_checkers=auth_checkers,
+    )


 CHECK_3PID_AUTH_CALLBACK = Callable[

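The `get_*` -> `create_*` renames above are mechanical but touch every caller. A hedged sketch of a migrated call-site (`auth_handler` stands in for the instance normally obtained via `hs.get_auth_handler()`; the keyword arguments are the ones visible in the hunks):

    async def mint_tokens(auth_handler, user_id: str, device_id: str, valid_until_ms: int):
        # Renamed from get_refresh_token_for_user_id: it creates and persists
        # a brand-new refresh token rather than fetching an existing one.
        refresh_token, refresh_token_id = await auth_handler.create_refresh_token_for_user_id(
            user_id=user_id, device_id=device_id
        )
        # Likewise renamed from get_access_token_for_user_id.
        access_token = await auth_handler.create_access_token_for_user_id(
            user_id=user_id, device_id=device_id, valid_until_ms=valid_until_ms
        )
        return access_token, refresh_token
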
synapse/handlers/devicemessage.py

@@ -89,6 +89,13 @@ class DeviceMessageHandler:
         )

     async def on_direct_to_device_edu(self, origin: str, content: JsonDict) -> None:
+        """
+        Handle receiving to-device messages from remote homeservers.
+
+        Args:
+            origin: The remote homeserver.
+            content: The JSON dictionary containing the to-device messages.
+        """
         local_messages = {}
         sender_user_id = content["sender"]
         if origin != get_domain_from_id(sender_user_id):
@@ -135,12 +142,16 @@ class DeviceMessageHandler:
             message_type, sender_user_id, by_device
         )

-        stream_id = await self.store.add_messages_from_remote_to_device_inbox(
+        # Add messages to the database.
+        # Retrieve the stream id of the last-processed to-device message.
+        last_stream_id = await self.store.add_messages_from_remote_to_device_inbox(
             origin, message_id, local_messages
         )

+        # Notify listeners that there are new to-device messages to process,
+        # handing them the latest stream id.
         self.notifier.on_new_event(
-            "to_device_key", stream_id, users=local_messages.keys()
+            "to_device_key", last_stream_id, users=local_messages.keys()
         )

     async def _check_for_unknown_devices(
@@ -195,6 +206,14 @@ class DeviceMessageHandler:
         message_type: str,
         messages: Dict[str, Dict[str, JsonDict]],
     ) -> None:
+        """
+        Handle a request from a user to send to-device message(s).
+
+        Args:
+            requester: The user that is sending the to-device messages.
+            message_type: The type of to-device messages that are being sent.
+            messages: A dictionary containing recipients mapped to messages intended for them.
+        """
         sender_user_id = requester.user.to_string()

         message_id = random_string(16)
@@ -257,12 +276,16 @@ class DeviceMessageHandler:
             "org.matrix.opentracing_context": json_encoder.encode(context),
         }

-        stream_id = await self.store.add_messages_to_device_inbox(
+        # Add messages to the database.
+        # Retrieve the stream id of the last-processed to-device message.
+        last_stream_id = await self.store.add_messages_to_device_inbox(
             local_messages, remote_edu_contents
         )

+        # Notify listeners that there are new to-device messages to process,
+        # handing them the latest stream id.
         self.notifier.on_new_event(
-            "to_device_key", stream_id, users=local_messages.keys()
+            "to_device_key", last_stream_id, users=local_messages.keys()
         )

         if self.federation_sender:

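For reference, the EDU content that the new `on_direct_to_device_edu` docstring describes has this shape on the wire (field names per the Matrix federation spec for `m.direct_to_device`; the identifiers here are made up):

    edu_content = {
        "sender": "@alice:remote.example",   # must belong to the sending server
        "type": "m.room_key_request",        # message type fanned out to devices
        "message_id": "hiezohf6Hoo7kaev",    # lets retries be de-duplicated
        "messages": {
            "@bob:local.example": {
                "BOBSDEVICE": {"requesting_device_id": "BOBSDEVICE"},
            },
        },
    }
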
synapse/handlers/directory.py

@@ -206,6 +206,10 @@ class DirectoryHandler:
         )

         room_id = await self._delete_association(room_alias)
+        if room_id is None:
+            # It's possible someone else deleted the association after the
+            # checks above, but before we did the deletion.
+            raise NotFoundError("Unknown room alias")

         try:
             await self._update_canonical_alias(requester, user_id, room_id, room_alias)
@@ -227,7 +231,7 @@ class DirectoryHandler:
         )
         await self._delete_association(room_alias)

-    async def _delete_association(self, room_alias: RoomAlias) -> str:
+    async def _delete_association(self, room_alias: RoomAlias) -> Optional[str]:
         if not self.hs.is_mine(room_alias):
             raise SynapseError(400, "Room alias must be local")

synapse/handlers/events.py

@@ -124,7 +124,7 @@ class EventStreamHandler:
                 as_client_event=as_client_event,
                 # We don't bundle "live" events, as otherwise clients
                 # will end up double counting annotations.
-                bundle_aggregations=False,
+                bundle_relations=False,
             )

             chunk = {

synapse/handlers/federation_event.py

@@ -981,8 +981,6 @@ class FederationEventHandler:
                 origin,
                 event,
                 context,
-                state=state,
-                backfilled=backfilled,
             )
         except AuthError as e:
             # FIXME richvdh 2021/10/07 I don't think this is reachable. Let's log it
@@ -1332,8 +1330,6 @@ class FederationEventHandler:
         origin: str,
         event: EventBase,
         context: EventContext,
-        state: Optional[Iterable[EventBase]] = None,
-        backfilled: bool = False,
     ) -> EventContext:
         """
         Checks whether an event should be rejected (for failing auth checks).
@@ -1344,12 +1340,6 @@ class FederationEventHandler:
             context:
                 The event context.

-            state:
-                The state events used to check the event for soft-fail. If this is
-                not provided the current state events will be used.
-
-            backfilled: True if the event was backfilled.
-
         Returns:
             The updated context object.

synapse/handlers/identity.py

@@ -464,15 +464,6 @@ class IdentityHandler:
         if next_link:
             params["next_link"] = next_link

-        if self.hs.config.email.using_identity_server_from_trusted_list:
-            # Warn that a deprecated config option is in use
-            logger.warning(
-                'The config option "trust_identity_server_for_password_resets" '
-                'has been replaced by "account_threepid_delegate". '
-                "Please consult the sample config at docs/sample_config.yaml for "
-                "details and update your config file."
-            )
-
         try:
             data = await self.http_client.post_json_get_json(
                 id_server + "/_matrix/identity/api/v1/validate/email/requestToken",
@@ -517,15 +508,6 @@ class IdentityHandler:
         if next_link:
             params["next_link"] = next_link

-        if self.hs.config.email.using_identity_server_from_trusted_list:
-            # Warn that a deprecated config option is in use
-            logger.warning(
-                'The config option "trust_identity_server_for_password_resets" '
-                'has been replaced by "account_threepid_delegate". '
-                "Please consult the sample config at docs/sample_config.yaml for "
-                "details and update your config file."
-            )
-
         try:
             data = await self.http_client.post_json_get_json(
                 id_server + "/_matrix/identity/api/v1/validate/msisdn/requestToken",

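The deleted warnings above were the last references to the removed `trust_identity_server_for_password_resets` flag. Deployments that relied on it should delegate 3pid validation explicitly; sketched here as the parsed homeserver.yaml fragment (the key names are from memory of the sample config, so treat them as an assumption and check the docs):

    config_fragment = {
        # replaces trust_identity_server_for_password_resets
        "account_threepid_delegates": {
            "email": "https://id.example.org",   # identity server for email validation
            "msisdn": "https://id.example.org",  # and for phone-number validation
        },
    }
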
synapse/handlers/message.py

@@ -252,7 +252,7 @@ class MessageHandler:
                     now,
                     # We don't bother bundling aggregations in when asked for state
                     # events, as clients won't use them.
-                    bundle_aggregations=False,
+                    bundle_relations=False,
                 )
             return events

@@ -1003,13 +1003,52 @@ class EventCreationHandler:
         )

         self.validator.validate_new(event, self.config)
+        await self._validate_event_relation(event)
+        logger.debug("Created event %s", event.event_id)
+
+        return event, context
+
+    async def _validate_event_relation(self, event: EventBase) -> None:
+        """
+        Ensure the relation data on a new event is not bogus.
+
+        Args:
+            event: The event being created.
+
+        Raises:
+            SynapseError if the event is invalid.
+        """
+
+        relation = event.content.get("m.relates_to")
+        if not relation:
+            return
+
+        relation_type = relation.get("rel_type")
+        if not relation_type:
+            return
+
+        # Ensure the parent is real.
+        relates_to = relation.get("event_id")
+        if not relates_to:
+            return
+
+        parent_event = await self.store.get_event(relates_to, allow_none=True)
+        if parent_event:
+            # And in the same room.
+            if parent_event.room_id != event.room_id:
+                raise SynapseError(400, "Relations must be in the same room")
+
+        else:
+            # There must be some reason that the client knows the event exists,
+            # see if there are existing relations. If so, assume everything is fine.
+            if not await self.store.event_is_target_of_relation(relates_to):
+                # Otherwise, the client can't know about the parent event!
+                raise SynapseError(400, "Can't send relation to unknown event")

         # If this event is an annotation then we check that that the sender
         # can't annotate the same way twice (e.g. stops users from liking an
         # event multiple times).
-        relation = event.content.get("m.relates_to", {})
-        if relation.get("rel_type") == RelationTypes.ANNOTATION:
-            relates_to = relation["event_id"]
+        if relation_type == RelationTypes.ANNOTATION:
             aggregation_key = relation["key"]

             already_exists = await self.store.has_user_annotated_event(
@@ -1018,9 +1057,12 @@ class EventCreationHandler:
             if already_exists:
                 raise SynapseError(400, "Can't send same reaction twice")

-        logger.debug("Created event %s", event.event_id)
-
-        return event, context
+        # Don't attempt to start a thread if the parent event is a relation.
+        elif relation_type == RelationTypes.THREAD:
+            if await self.store.event_includes_relation(relates_to):
+                raise SynapseError(
+                    400, "Cannot start threads from an event with a relation"
+                )

     @measure_func("handle_new_client_event")
     async def handle_new_client_event(

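The thread branch added above rejects exactly this kind of payload: a thread whose root is itself a relation. Illustrative event content (event IDs invented):

    bad_thread_content = {
        "msgtype": "m.text",
        "body": "reply in thread",
        "m.relates_to": {
            "rel_type": "m.thread",        # MSC3440 thread relation
            "event_id": "$some_reaction",  # if this target carries m.relates_to
        },                                 # itself, the server now returns 400
    }
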
synapse/handlers/pagination.py

@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
-from typing import TYPE_CHECKING, Any, Dict, Optional, Set
+from typing import TYPE_CHECKING, Any, Collection, Dict, List, Optional, Set

 import attr

@@ -22,7 +22,7 @@ from twisted.python.failure import Failure
 from synapse.api.constants import EventTypes, Membership
 from synapse.api.errors import SynapseError
 from synapse.api.filtering import Filter
-from synapse.logging.context import run_in_background
+from synapse.handlers.room import ShutdownRoomResponse
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.state import StateFilter
 from synapse.streams.config import PaginationConfig
@@ -56,11 +56,62 @@ class PurgeStatus:
         STATUS_FAILED: "failed",
     }

+    # Save the error message if an error occurs
+    error: str = ""
+
     # Tracks whether this request has completed. One of STATUS_{ACTIVE,COMPLETE,FAILED}.
     status: int = STATUS_ACTIVE

     def asdict(self) -> JsonDict:
-        return {"status": PurgeStatus.STATUS_TEXT[self.status]}
+        ret = {"status": PurgeStatus.STATUS_TEXT[self.status]}
+        if self.error:
+            ret["error"] = self.error
+        return ret
+
+
+@attr.s(slots=True, auto_attribs=True)
+class DeleteStatus:
+    """Object tracking the status of a delete room request
+
+    This class contains information on the progress of a delete room request, for
+    return by get_delete_status.
+    """
+
+    STATUS_PURGING = 0
+    STATUS_COMPLETE = 1
+    STATUS_FAILED = 2
+    STATUS_SHUTTING_DOWN = 3
+
+    STATUS_TEXT = {
+        STATUS_PURGING: "purging",
+        STATUS_COMPLETE: "complete",
+        STATUS_FAILED: "failed",
+        STATUS_SHUTTING_DOWN: "shutting_down",
+    }
+
+    # Tracks whether this request has completed.
+    # One of STATUS_{PURGING,COMPLETE,FAILED,SHUTTING_DOWN}.
+    status: int = STATUS_PURGING
+
+    # Save the error message if an error occurs
+    error: str = ""
+
+    # Saves the result of an action to give it back to REST API
+    shutdown_room: ShutdownRoomResponse = {
+        "kicked_users": [],
+        "failed_to_kick_users": [],
+        "local_aliases": [],
+        "new_room_id": None,
+    }
+
+    def asdict(self) -> JsonDict:
+        ret = {
+            "status": DeleteStatus.STATUS_TEXT[self.status],
+            "shutdown_room": self.shutdown_room,
+        }
+        if self.error:
+            ret["error"] = self.error
+        return ret


 class PaginationHandler:
@@ -70,6 +121,9 @@ class PaginationHandler:
     paginating during a purge.
     """

+    # when to remove a completed deletion/purge from the results map
+    CLEAR_PURGE_AFTER_MS = 1000 * 3600 * 24  # 24 hours
+
     def __init__(self, hs: "HomeServer"):
         self.hs = hs
         self.auth = hs.get_auth()
@@ -78,11 +132,18 @@ class PaginationHandler:
         self.state_store = self.storage.state
         self.clock = hs.get_clock()
         self._server_name = hs.hostname
+        self._room_shutdown_handler = hs.get_room_shutdown_handler()

         self.pagination_lock = ReadWriteLock()
+        # IDs of rooms in which there currently an active purge *or delete* operation.
         self._purges_in_progress_by_room: Set[str] = set()
         # map from purge id to PurgeStatus
         self._purges_by_id: Dict[str, PurgeStatus] = {}
+        # map from purge id to DeleteStatus
+        self._delete_by_id: Dict[str, DeleteStatus] = {}
+        # map from room id to delete ids
+        # Dict[`room_id`, List[`delete_id`]]
+        self._delete_by_room: Dict[str, List[str]] = {}
         self._event_serializer = hs.get_event_client_serializer()

         self._retention_default_max_lifetime = (
@@ -265,8 +326,13 @@ class PaginationHandler:
         logger.info("[purge] starting purge_id %s", purge_id)

         self._purges_by_id[purge_id] = PurgeStatus()
-        run_in_background(
-            self._purge_history, purge_id, room_id, token, delete_local_events
+        run_as_background_process(
+            "purge_history",
+            self._purge_history,
+            purge_id,
+            room_id,
+            token,
+            delete_local_events,
         )
         return purge_id

@@ -276,7 +342,7 @@ class PaginationHandler:
         """Carry out a history purge on a room.

         Args:
-            purge_id: The id for this purge
+            purge_id: The ID for this purge.
             room_id: The room to purge from
             token: topological token to delete events before
             delete_local_events: True to delete local events as well as remote ones
@@ -295,6 +361,7 @@ class PaginationHandler:
                 "[purge] failed", exc_info=(f.type, f.value, f.getTracebackObject())  # type: ignore
             )
             self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
+            self._purges_by_id[purge_id].error = f.getErrorMessage()
         finally:
             self._purges_in_progress_by_room.discard(room_id)

@@ -302,7 +369,9 @@ class PaginationHandler:
         def clear_purge() -> None:
             del self._purges_by_id[purge_id]

-        self.hs.get_reactor().callLater(24 * 3600, clear_purge)
+        self.hs.get_reactor().callLater(
+            PaginationHandler.CLEAR_PURGE_AFTER_MS / 1000, clear_purge
+        )

     def get_purge_status(self, purge_id: str) -> Optional[PurgeStatus]:
         """Get the current status of an active purge
@@ -312,8 +381,25 @@ class PaginationHandler:
         """
         return self._purges_by_id.get(purge_id)

+    def get_delete_status(self, delete_id: str) -> Optional[DeleteStatus]:
+        """Get the current status of an active deleting
+
+        Args:
+            delete_id: delete_id returned by start_shutdown_and_purge_room
+        """
+        return self._delete_by_id.get(delete_id)
+
+    def get_delete_ids_by_room(self, room_id: str) -> Optional[Collection[str]]:
+        """Get all active delete ids by room
+
+        Args:
+            room_id: room_id that is deleted
+        """
+        return self._delete_by_room.get(room_id)
+
     async def purge_room(self, room_id: str, force: bool = False) -> None:
         """Purge the given room from the database.
+        This function is part the delete room v1 API.

         Args:
             room_id: room to be purged
@@ -424,7 +510,7 @@ class PaginationHandler:

         if events:
             if event_filter:
-                events = event_filter.filter(events)
+                events = await event_filter.filter(events)

             events = await filter_events_for_client(
                 self.storage, user_id, events, is_peeking=(member_event_id is None)
@@ -472,3 +558,192 @@ class PaginationHandler:
         )

         return chunk
+
+    async def _shutdown_and_purge_room(
+        self,
+        delete_id: str,
+        room_id: str,
+        requester_user_id: str,
+        new_room_user_id: Optional[str] = None,
+        new_room_name: Optional[str] = None,
+        message: Optional[str] = None,
+        block: bool = False,
+        purge: bool = True,
+        force_purge: bool = False,
+    ) -> None:
+        """
+        Shuts down and purges a room.
+
+        See `RoomShutdownHandler.shutdown_room` for details of creation of the new room
+
+        Args:
+            delete_id: The ID for this delete.
+            room_id: The ID of the room to shut down.
+            requester_user_id:
+                User who requested the action. Will be recorded as putting the room on the
+                blocking list.
+            new_room_user_id:
+                If set, a new room will be created with this user ID
+                as the creator and admin, and all users in the old room will be
+                moved into that room. If not set, no new room will be created
+                and the users will just be removed from the old room.
+            new_room_name:
+                A string representing the name of the room that new users will
+                be invited to. Defaults to `Content Violation Notification`
+            message:
+                A string containing the first message that will be sent as
+                `new_room_user_id` in the new room. Ideally this will clearly
+                convey why the original room was shut down.
+                Defaults to `Sharing illegal content on this server is not
+                permitted and rooms in violation will be blocked.`
+            block:
+                If set to `true`, this room will be added to a blocking list,
+                preventing future attempts to join the room. Defaults to `false`.
+            purge:
+                If set to `true`, purge the given room from the database.
+            force_purge:
+                If set to `true`, the room will be purged from database
+                also if it fails to remove some users from room.
+
+        Saves a `RoomShutdownHandler.ShutdownRoomResponse` in `DeleteStatus`:
+        """
+
+        self._purges_in_progress_by_room.add(room_id)
+        try:
+            with await self.pagination_lock.write(room_id):
+                self._delete_by_id[delete_id].status = DeleteStatus.STATUS_SHUTTING_DOWN
+                self._delete_by_id[
+                    delete_id
+                ].shutdown_room = await self._room_shutdown_handler.shutdown_room(
+                    room_id=room_id,
+                    requester_user_id=requester_user_id,
+                    new_room_user_id=new_room_user_id,
+                    new_room_name=new_room_name,
+                    message=message,
+                    block=block,
+                )
+                self._delete_by_id[delete_id].status = DeleteStatus.STATUS_PURGING
+
+                if purge:
+                    logger.info("starting purge room_id %s", room_id)
+
+                    # first check that we have no users in this room
+                    if not force_purge:
+                        joined = await self.store.is_host_joined(
+                            room_id, self._server_name
+                        )
+                        if joined:
+                            raise SynapseError(
+                                400, "Users are still joined to this room"
+                            )
+
+                    await self.storage.purge_events.purge_room(room_id)
+
+            logger.info("complete")
+            self._delete_by_id[delete_id].status = DeleteStatus.STATUS_COMPLETE
+        except Exception:
+            f = Failure()
+            logger.error(
+                "failed",
+                exc_info=(f.type, f.value, f.getTracebackObject()),  # type: ignore
+            )
+            self._delete_by_id[delete_id].status = DeleteStatus.STATUS_FAILED
+            self._delete_by_id[delete_id].error = f.getErrorMessage()
+        finally:
+            self._purges_in_progress_by_room.discard(room_id)
+
+        # remove the delete from the list 24 hours after it completes
+        def clear_delete() -> None:
+            del self._delete_by_id[delete_id]
+            self._delete_by_room[room_id].remove(delete_id)
+            if not self._delete_by_room[room_id]:
+                del self._delete_by_room[room_id]
+
+        self.hs.get_reactor().callLater(
+            PaginationHandler.CLEAR_PURGE_AFTER_MS / 1000, clear_delete
+        )
+
+    def start_shutdown_and_purge_room(
+        self,
+        room_id: str,
+        requester_user_id: str,
+        new_room_user_id: Optional[str] = None,
+        new_room_name: Optional[str] = None,
+        message: Optional[str] = None,
+        block: bool = False,
+        purge: bool = True,
+        force_purge: bool = False,
+    ) -> str:
+        """Start off shut down and purge on a room.
+
+        Args:
+            room_id: The ID of the room to shut down.
+            requester_user_id:
+                User who requested the action and put the room on the
+                blocking list.
+            new_room_user_id:
+                If set, a new room will be created with this user ID
+                as the creator and admin, and all users in the old room will be
+                moved into that room. If not set, no new room will be created
+                and the users will just be removed from the old room.
+            new_room_name:
+                A string representing the name of the room that new users will
+                be invited to. Defaults to `Content Violation Notification`
+            message:
+                A string containing the first message that will be sent as
+                `new_room_user_id` in the new room. Ideally this will clearly
+                convey why the original room was shut down.
+                Defaults to `Sharing illegal content on this server is not
+                permitted and rooms in violation will be blocked.`
+            block:
+                If set to `true`, this room will be added to a blocking list,
+                preventing future attempts to join the room. Defaults to `false`.
+            purge:
+                If set to `true`, purge the given room from the database.
+            force_purge:
+                If set to `true`, the room will be purged from database
+                also if it fails to remove some users from room.
+
+        Returns:
+            unique ID for this delete transaction.
+        """
+        if room_id in self._purges_in_progress_by_room:
+            raise SynapseError(
+                400, "History purge already in progress for %s" % (room_id,)
+            )
+
+        # This check is double to `RoomShutdownHandler.shutdown_room`
+        # But here the requester get a direct response / error with HTTP request
+        # and do not have to check the purge status
+        if new_room_user_id is not None:
+            if not self.hs.is_mine_id(new_room_user_id):
+                raise SynapseError(
+                    400, "User must be our own: %s" % (new_room_user_id,)
+                )
+
+        delete_id = random_string(16)
+
+        # we log the delete_id here so that it can be tied back to the
+        # request id in the log lines.
+        logger.info(
+            "starting shutdown room_id %s with delete_id %s",
+            room_id,
+            delete_id,
+        )
+
+        self._delete_by_id[delete_id] = DeleteStatus()
+        self._delete_by_room.setdefault(room_id, []).append(delete_id)
+        run_as_background_process(
+            "shutdown_and_purge_room",
+            self._shutdown_and_purge_room,
+            delete_id,
+            room_id,
+            requester_user_id,
+            new_room_user_id,
+            new_room_name,
+            message,
+            block,
+            purge,
+            force_purge,
+        )
+        return delete_id

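A hedged sketch of driving the new background deletion from a client. The DELETE endpoint is the v2 API this release adds; the status-polling path is my best reading of the matching admin docs, so verify it before relying on it. Server, token and room ID are placeholders:

    import requests

    BASE = "https://homeserver.example"
    HEADERS = {"Authorization": "Bearer ADMIN_ACCESS_TOKEN"}
    room_id = "!room:homeserver.example"

    # Returns immediately with a delete_id; the shutdown/purge itself runs in
    # run_as_background_process, as wired up in start_shutdown_and_purge_room.
    resp = requests.delete(
        f"{BASE}/_synapse/admin/v2/rooms/{room_id}",
        headers=HEADERS,
        json={"block": True, "purge": True},
    )
    delete_id = resp.json()["delete_id"]

    # Poll until "complete"; the states mirror DeleteStatus.STATUS_TEXT:
    # shutting_down -> purging -> complete (or failed, with an "error" field).
    status = requests.get(
        f"{BASE}/_synapse/admin/v2/rooms/delete_status/{delete_id}",
        headers=HEADERS,
    ).json()
    print(status)
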
synapse/handlers/register.py

@@ -116,7 +116,9 @@ class RegistrationHandler:
         self.pusher_pool = hs.get_pusherpool()

         self.session_lifetime = hs.config.registration.session_lifetime
-        self.access_token_lifetime = hs.config.registration.access_token_lifetime
+        self.refreshable_access_token_lifetime = (
+            hs.config.registration.refreshable_access_token_lifetime
+        )

         init_counters_for_auth_provider("")

@@ -820,13 +822,15 @@ class RegistrationHandler:
             (
                 refresh_token,
                 refresh_token_id,
-            ) = await self._auth_handler.get_refresh_token_for_user_id(
+            ) = await self._auth_handler.create_refresh_token_for_user_id(
                 user_id,
                 device_id=registered_device_id,
             )
-            valid_until_ms = self.clock.time_msec() + self.access_token_lifetime
+            valid_until_ms = (
+                self.clock.time_msec() + self.refreshable_access_token_lifetime
+            )

-        access_token = await self._auth_handler.get_access_token_for_user_id(
+        access_token = await self._auth_handler.create_access_token_for_user_id(
             user_id,
             device_id=registered_device_id,
             valid_until_ms=valid_until_ms,

synapse/handlers/room.py

@@ -12,8 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
-"""Contains functions for performing events on rooms."""
+"""Contains functions for performing actions on rooms."""

 import itertools
 import logging
 import math
@@ -31,6 +30,8 @@ from typing import (
     Tuple,
 )

+from typing_extensions import TypedDict
+
 from synapse.api.constants import (
     EventContentFields,
     EventTypes,
@@ -784,8 +785,11 @@ class RoomCreationHandler:
             raise SynapseError(403, "Room visibility value not allowed.")

         if is_public:
+            room_aliases = []
+            if room_alias:
+                room_aliases.append(room_alias.to_string())
             if not self.config.roomdirectory.is_publishing_room_allowed(
-                user_id, room_id, room_alias
+                user_id, room_id, room_aliases
             ):
                 # Let's just return a generic message, as there may be all sorts of
                 # reasons why we said no. TODO: Allow configurable error messages
@@ -1168,8 +1172,10 @@ class RoomContextHandler:
         )

         if event_filter:
-            results["events_before"] = event_filter.filter(results["events_before"])
-            results["events_after"] = event_filter.filter(results["events_after"])
+            results["events_before"] = await event_filter.filter(
+                results["events_before"]
+            )
+            results["events_after"] = await event_filter.filter(results["events_after"])

         results["events_before"] = await filter_evts(results["events_before"])
         results["events_after"] = await filter_evts(results["events_after"])
@@ -1205,7 +1211,7 @@ class RoomContextHandler:

         state_events = list(state[last_event_id].values())
         if event_filter:
-            state_events = event_filter.filter(state_events)
+            state_events = await event_filter.filter(state_events)

         results["state"] = await filter_evts(state_events)

@@ -1285,8 +1291,25 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
         return self.store.get_room_events_max_id(room_id)


-class RoomShutdownHandler:
+class ShutdownRoomResponse(TypedDict):
+    """
+    Attributes:
+        kicked_users: An array of users (`user_id`) that were kicked.
+        failed_to_kick_users:
+            An array of users (`user_id`) that that were not kicked.
+        local_aliases:
+            An array of strings representing the local aliases that were
+            migrated from the old room to the new.
+        new_room_id: A string representing the room ID of the new room.
+    """
+
+    kicked_users: List[str]
+    failed_to_kick_users: List[str]
+    local_aliases: List[str]
+    new_room_id: Optional[str]
+
+
+class RoomShutdownHandler:
     DEFAULT_MESSAGE = (
         "Sharing illegal content on this server is not permitted and rooms in"
         " violation will be blocked."
@@ -1299,7 +1322,6 @@ class RoomShutdownHandler:
         self._room_creation_handler = hs.get_room_creation_handler()
         self._replication = hs.get_replication_data_handler()
         self.event_creation_handler = hs.get_event_creation_handler()
-        self.state = hs.get_state_handler()
         self.store = hs.get_datastore()

     async def shutdown_room(
@@ -1310,7 +1332,7 @@ class RoomShutdownHandler:
         new_room_name: Optional[str] = None,
         message: Optional[str] = None,
         block: bool = False,
-    ) -> dict:
+    ) -> ShutdownRoomResponse:
        """
         Shuts down a room. Moves all local users and room aliases automatically
         to a new room if `new_room_user_id` is set. Otherwise local users only
@@ -1344,8 +1366,13 @@ class RoomShutdownHandler:
                 Defaults to `Sharing illegal content on this server is not
                 permitted and rooms in violation will be blocked.`
             block:
-                If set to `true`, this room will be added to a blocking list,
-                preventing future attempts to join the room. Defaults to `false`.
+                If set to `True`, users will be prevented from joining the old
+                room. This option can also be used to pre-emptively block a room,
+                even if it's unknown to this homeserver. In this case, the room
+                will be blocked, and no further action will be taken. If `False`,
+                attempting to delete an unknown room is invalid.
+
+                Defaults to `False`.

         Returns: a dict containing the following keys:
             kicked_users: An array of users (`user_id`) that were kicked.
@@ -1354,7 +1381,9 @@ class RoomShutdownHandler:
             local_aliases:
                 An array of strings representing the local aliases that were
                 migrated from the old room to the new.
-            new_room_id: A string representing the room ID of the new room.
+            new_room_id:
+                A string representing the room ID of the new room, or None if
+                no such room was created.
         """

         if not new_room_name:
@@ -1365,14 +1394,28 @@ class RoomShutdownHandler:
         if not RoomID.is_valid(room_id):
             raise SynapseError(400, "%s is not a legal room ID" % (room_id,))

-        if not await self.store.get_room(room_id):
-            raise NotFoundError("Unknown room id %s" % (room_id,))
-
-        # This will work even if the room is already blocked, but that is
-        # desirable in case the first attempt at blocking the room failed below.
+        # Action the block first (even if the room doesn't exist yet)
         if block:
+            # This will work even if the room is already blocked, but that is
+            # desirable in case the first attempt at blocking the room failed below.
             await self.store.block_room(room_id, requester_user_id)

+        if not await self.store.get_room(room_id):
+            if block:
+                # We allow you to block an unknown room.
+                return {
+                    "kicked_users": [],
+                    "failed_to_kick_users": [],
+                    "local_aliases": [],
+                    "new_room_id": None,
+                }
+            else:
+                # But if you don't want to preventatively block another room,
+                # this function can't do anything useful.
+                raise NotFoundError(
+                    "Cannot shut down room: unknown room id %s" % (room_id,)
+                )

         if new_room_user_id is not None:
             if not self.hs.is_mine_id(new_room_user_id):
                 raise SynapseError(

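Since `shutdown_room` now returns a `ShutdownRoomResponse` TypedDict rather than a bare dict, callers get key-level type checking. A self-contained sketch of consuming it (the class is re-declared here only so the snippet runs standalone):

    from typing import List, Optional
    from typing_extensions import TypedDict

    class ShutdownRoomResponse(TypedDict):
        kicked_users: List[str]
        failed_to_kick_users: List[str]
        local_aliases: List[str]
        new_room_id: Optional[str]

    def summarise(result: ShutdownRoomResponse) -> str:
        # mypy now flags typos like result["kicked_user"] at check time
        new_room = result["new_room_id"] or "<no new room created>"
        return f"kicked {len(result['kicked_users'])} user(s); new room: {new_room}"

    # The shape returned when block=True and the room is unknown locally:
    print(summarise({
        "kicked_users": [],
        "failed_to_kick_users": [],
        "local_aliases": [],
        "new_room_id": None,
    }))
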
synapse/handlers/room_batch.py

@@ -223,6 +223,7 @@ class RoomBatchHandler:
                 action=membership,
                 content=event_dict["content"],
                 outlier=True,
+                historical=True,
                 prev_event_ids=[prev_event_id_for_state_chain],
                 # Make sure to use a copy of this list because we modify it
                 # later in the loop here. Otherwise it will be the same
@@ -242,6 +243,7 @@ class RoomBatchHandler:
                 ),
                 event_dict,
                 outlier=True,
+                historical=True,
                 prev_event_ids=[prev_event_id_for_state_chain],
                 # Make sure to use a copy of this list because we modify it
                 # later in the loop here. Otherwise it will be the same

synapse/handlers/room_member.py

@@ -268,6 +268,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         content: Optional[dict] = None,
         require_consent: bool = True,
         outlier: bool = False,
+        historical: bool = False,
     ) -> Tuple[str, int]:
         """
         Internal membership update function to get an existing event or create
@@ -293,6 +294,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             outlier: Indicates whether the event is an `outlier`, i.e. if
                 it's from an arbitrary point and floating in the DAG as
                 opposed to being inline with the current DAG.
+            historical: Indicates whether the message is being inserted
+                back in time around some existing events. This is used to skip
+                a few checks and mark the event as backfilled.

         Returns:
             Tuple of event ID and stream ordering position
@@ -337,6 +341,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             auth_event_ids=auth_event_ids,
             require_consent=require_consent,
             outlier=outlier,
+            historical=historical,
         )

         prev_state_ids = await context.get_prev_state_ids()
@@ -433,6 +438,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         new_room: bool = False,
         require_consent: bool = True,
         outlier: bool = False,
+        historical: bool = False,
         prev_event_ids: Optional[List[str]] = None,
         auth_event_ids: Optional[List[str]] = None,
     ) -> Tuple[str, int]:
@@ -454,6 +460,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             outlier: Indicates whether the event is an `outlier`, i.e. if
                 it's from an arbitrary point and floating in the DAG as
                 opposed to being inline with the current DAG.
+            historical: Indicates whether the message is being inserted
+                back in time around some existing events. This is used to skip
+                a few checks and mark the event as backfilled.
             prev_event_ids: The event IDs to use as the prev events
             auth_event_ids:
                 The event ids to use as the auth_events for the new event.
@@ -487,6 +496,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
                 new_room=new_room,
                 require_consent=require_consent,
                 outlier=outlier,
+                historical=historical,
                 prev_event_ids=prev_event_ids,
                 auth_event_ids=auth_event_ids,
             )
@@ -507,6 +517,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         new_room: bool = False,
         require_consent: bool = True,
         outlier: bool = False,
+        historical: bool = False,
         prev_event_ids: Optional[List[str]] = None,
         auth_event_ids: Optional[List[str]] = None,
     ) -> Tuple[str, int]:
@@ -530,6 +541,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             outlier: Indicates whether the event is an `outlier`, i.e. if
                 it's from an arbitrary point and floating in the DAG as
                 opposed to being inline with the current DAG.
+            historical: Indicates whether the message is being inserted
+                back in time around some existing events. This is used to skip
+                a few checks and mark the event as backfilled.
             prev_event_ids: The event IDs to use as the prev events
             auth_event_ids:
                 The event ids to use as the auth_events for the new event.
@@ -657,6 +671,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
                 content=content,
                 require_consent=require_consent,
                 outlier=outlier,
+                historical=historical,
             )

             latest_event_ids = await self.store.get_prev_events_for_room(room_id)

synapse/handlers/room_summary.py

@@ -97,7 +97,7 @@ class RoomSummaryHandler:
         # If a user tries to fetch the same page multiple times in quick succession,
         # only process the first attempt and return its result to subsequent requests.
         self._pagination_response_cache: ResponseCache[
-            Tuple[str, bool, Optional[int], Optional[int], Optional[str]]
+            Tuple[str, str, bool, Optional[int], Optional[int], Optional[str]]
         ] = ResponseCache(
             hs.get_clock(),
             "get_room_hierarchy",
@@ -282,7 +282,14 @@ class RoomSummaryHandler:
         # This is due to the pagination process mutating internal state, attempting
         # to process multiple requests for the same page will result in errors.
         return await self._pagination_response_cache.wrap(
-            (requested_room_id, suggested_only, max_depth, limit, from_token),
+            (
+                requester,
+                requested_room_id,
+                suggested_only,
+                max_depth,
+                limit,
+                from_token,
+            ),
             self._get_room_hierarchy,
             requester,
             requested_room_id,

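Why `requester` joins the cache key above: the hierarchy response is filtered by what the requesting user may see, so a key without the user would let one user's cached page leak to another. A toy illustration, with a plain dict standing in for `ResponseCache`:

    cache: dict = {}

    def cached_hierarchy(requester: str, room_id: str, compute):
        # Keying on (requester, room_id) keeps per-user visibility intact;
        # keying on room_id alone would share filtered results across users.
        key = (requester, room_id)
        if key not in cache:
            cache[key] = compute(requester, room_id)
        return cache[key]
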
synapse/handlers/search.py

@@ -180,7 +180,7 @@ class SearchHandler:
                 % (set(group_keys) - {"room_id", "sender"},),
             )

-        search_filter = Filter(filter_dict)
+        search_filter = Filter(self.hs, filter_dict)

         # TODO: Search through left rooms too
         rooms = await self.store.get_rooms_for_local_user_where_membership_is(
@@ -242,7 +242,7 @@ class SearchHandler:

             rank_map.update({r["event"].event_id: r["rank"] for r in results})

-            filtered_events = search_filter.filter([r["event"] for r in results])
+            filtered_events = await search_filter.filter([r["event"] for r in results])

             events = await filter_events_for_client(
                 self.storage, user.to_string(), filtered_events
@@ -292,7 +292,9 @@ class SearchHandler:

                 rank_map.update({r["event"].event_id: r["rank"] for r in results})

-                filtered_events = search_filter.filter([r["event"] for r in results])
+                filtered_events = await search_filter.filter(
+                    [r["event"] for r in results]
+                )

                 events = await filter_events_for_client(
                     self.storage, user.to_string(), filtered_events

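`Filter` now takes the homeserver (`self.hs`) and its `filter` method is awaited; per the release notes this supports the MSC3440 relation-sender/type filters, which need database lookups. The call-site change is mechanical, e.g. (hypothetical minimal caller):

    # before:  filtered = search_filter.filter([r["event"] for r in results])
    # after (must run inside an async function):
    async def apply(search_filter, results):
        return await search_filter.filter([r["event"] for r in results])
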
synapse/handlers/sync.py

@@ -510,7 +510,7 @@ class SyncHandler:
                 log_kv({"limited": limited})

                 if potential_recents:
-                    recents = sync_config.filter_collection.filter_room_timeline(
+                    recents = await sync_config.filter_collection.filter_room_timeline(
                         potential_recents
                     )
                     log_kv({"recents_after_sync_filtering": len(recents)})
@@ -575,8 +575,8 @@ class SyncHandler:

                 log_kv({"loaded_recents": len(events)})

-                loaded_recents = sync_config.filter_collection.filter_room_timeline(
-                    events
+                loaded_recents = (
+                    await sync_config.filter_collection.filter_room_timeline(events)
                 )

                 log_kv({"loaded_recents_after_sync_filtering": len(loaded_recents)})
@@ -1015,7 +1015,7 @@ class SyncHandler:

         return {
             (e.type, e.state_key): e
-            for e in sync_config.filter_collection.filter_room_state(
+            for e in await sync_config.filter_collection.filter_room_state(
                 list(state.values())
             )
         }
@@ -1382,7 +1382,7 @@ class SyncHandler:
             sync_config.user
         )

-        account_data_for_user = sync_config.filter_collection.filter_account_data(
+        account_data_for_user = await sync_config.filter_collection.filter_account_data(
             [
                 {"type": account_data_type, "content": content}
                 for account_data_type, content in account_data.items()
@@ -1447,7 +1447,7 @@ class SyncHandler:
         # Deduplicate the presence entries so that there's at most one per user
         presence = list({p.user_id: p for p in presence}.values())

-        presence = sync_config.filter_collection.filter_presence(presence)
+        presence = await sync_config.filter_collection.filter_presence(presence)

         sync_result_builder.presence = presence

@@ -2020,12 +2020,14 @@ class SyncHandler:
             )

             account_data_events = (
-                sync_config.filter_collection.filter_room_account_data(
+                await sync_config.filter_collection.filter_room_account_data(
                     account_data_events
                 )
            )

-            ephemeral = sync_config.filter_collection.filter_room_ephemeral(ephemeral)
+            ephemeral = await sync_config.filter_collection.filter_room_ephemeral(
+                ephemeral
+            )

         if not (
             always_include
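One detail from the `@@ -1015` hunk is worth spelling out: `await` is legal inside a dict comprehension in an async function, and it binds to the iterable expression, which is evaluated once before the comprehension runs. A self-contained demo (the helper here is an illustrative stand-in, not Synapse's filter collection):

import asyncio
from typing import Dict, List, Tuple


async def filter_room_state(events: List[dict]) -> List[dict]:
    # Pretend-async filter: keep only state events.
    return [e for e in events if e.get("state_key") is not None]


async def main() -> None:
    state = [
        {"type": "m.room.name", "state_key": "", "content": "x"},
        {"type": "m.room.message"},  # not state; filtered out
    ]
    result: Dict[Tuple[str, str], dict] = {
        (e["type"], e["state_key"]): e
        for e in await filter_room_state(state)  # awaited once, up front
    }
    print(result.keys())


asyncio.run(main())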
synapse/handlers/typing.py

@@ -90,7 +90,7 @@ class FollowerTypingHandler:
         self.wheel_timer = WheelTimer(bucket_size=5000)

     @wrap_as_background_process("typing._handle_timeouts")
-    def _handle_timeouts(self) -> None:
+    async def _handle_timeouts(self) -> None:
         logger.debug("Checking for typing timeouts")

         now = self.clock.time_msec()
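Why `_handle_timeouts` becomes `async def`: with `maybe_awaitable` removed (see the background_process_metrics hunks below), `wrap_as_background_process` only accepts coroutine functions. A self-contained asyncio analog of the decorator, not Synapse's Twisted implementation:

import asyncio
import functools
from typing import Any, Awaitable, Callable, TypeVar, cast

F = TypeVar("F", bound=Callable[..., Awaitable[Any]])


def wrap_as_background_task(desc: str) -> Callable[[F], F]:
    def decorator(func: F) -> F:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> "asyncio.Task":
            # func(...) must now be a coroutine; a plain function would
            # return a non-awaitable and fail at schedule time.
            return asyncio.ensure_future(func(*args, **kwargs))

        return cast(F, wrapper)

    return decorator


@wrap_as_background_task("typing._handle_timeouts")
async def handle_timeouts() -> None:
    print("checking for typing timeouts")


async def main() -> None:
    await handle_timeouts()  # awaits the scheduled task


asyncio.run(main())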
synapse/http/server.py

@@ -98,7 +98,7 @@ def return_json_error(f: failure.Failure, request: SynapseRequest) -> None:
             "Failed handle request via %r: %r",
             request.request_metrics.name,
             request,
-            exc_info=(f.type, f.value, f.getTracebackObject()),  # type: ignore
+            exc_info=(f.type, f.value, f.getTracebackObject()),  # type: ignore[arg-type]
         )

     # Only respond with an error response if we haven't already started writing,
@@ -150,7 +150,7 @@ def return_html_error(
         logger.error(
             "Failed handle request %r",
             request,
-            exc_info=(f.type, f.value, f.getTracebackObject()),  # type: ignore
+            exc_info=(f.type, f.value, f.getTracebackObject()),  # type: ignore[arg-type]
         )
     else:
         code = HTTPStatus.INTERNAL_SERVER_ERROR
@@ -159,7 +159,7 @@ def return_html_error(
         logger.error(
             "Failed handle request %r",
             request,
-            exc_info=(f.type, f.value, f.getTracebackObject()),  # type: ignore
+            exc_info=(f.type, f.value, f.getTracebackObject()),  # type: ignore[arg-type]
         )

     if isinstance(error_template, str):
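These hunks narrow blanket `# type: ignore` comments to the specific mypy error code, so other errors on the same line are still reported. Illustrative only; the ignore comments are no-ops at runtime:

from typing import List

xs: List[int] = []
xs.append("a")  # type: ignore[arg-type]  # only arg-type errors are silenced here
print(xs)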
synapse/logging/handlers.py

@@ -3,7 +3,7 @@ import time
 from logging import Handler, LogRecord
 from logging.handlers import MemoryHandler
 from threading import Thread
-from typing import Optional
+from typing import Optional, cast

 from twisted.internet.interfaces import IReactorCore

@@ -56,7 +56,7 @@ class PeriodicallyFlushingMemoryHandler(MemoryHandler):
         if reactor is None:
             from twisted.internet import reactor as global_reactor

-            reactor_to_use = global_reactor  # type: ignore[assignment]
+            reactor_to_use = cast(IReactorCore, global_reactor)
         else:
             reactor_to_use = reactor
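Swapping `# type: ignore[assignment]` for `cast` records the intended interface instead of silencing the checker. A tiny self-contained demo of the idea (the names are stand-ins; `cast` does nothing at runtime):

from typing import cast


class ReactorLike:
    def run(self) -> None:
        print("running")


def get_global_reactor() -> object:
    # The module-level reactor is typed opaquely, like twisted's global reactor.
    return ReactorLike()


reactor_to_use = cast(ReactorLike, get_global_reactor())
reactor_to_use.run()  # type checkers now know the interface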
synapse/metrics/__init__.py

@@ -20,10 +20,25 @@ import os
 import platform
 import threading
 import time
-from typing import Callable, Dict, Iterable, Mapping, Optional, Tuple, Union
+from typing import (
+    Any,
+    Callable,
+    Dict,
+    Generic,
+    Iterable,
+    Mapping,
+    Optional,
+    Sequence,
+    Set,
+    Tuple,
+    Type,
+    TypeVar,
+    Union,
+    cast,
+)

 import attr
-from prometheus_client import Counter, Gauge, Histogram
+from prometheus_client import CollectorRegistry, Counter, Gauge, Histogram, Metric
 from prometheus_client.core import (
     REGISTRY,
     CounterMetricFamily,
@@ -32,6 +47,7 @@ from prometheus_client.core import (
 )

 from twisted.internet import reactor
+from twisted.internet.base import ReactorBase
 from twisted.python.threadpool import ThreadPool

 import synapse
@@ -54,7 +70,7 @@ HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")

 class RegistryProxy:
     @staticmethod
-    def collect():
+    def collect() -> Iterable[Metric]:
         for metric in REGISTRY.collect():
             if not metric.name.startswith("__"):
                 yield metric
@@ -74,7 +90,7 @@ class LaterGauge:
         ]
     )

-    def collect(self):
+    def collect(self) -> Iterable[Metric]:

         g = GaugeMetricFamily(self.name, self.desc, labels=self.labels)

@@ -93,10 +109,10 @@ class LaterGauge:

         yield g

-    def __attrs_post_init__(self):
+    def __attrs_post_init__(self) -> None:
         self._register()

-    def _register(self):
+    def _register(self) -> None:
         if self.name in all_gauges.keys():
             logger.warning("%s already registered, reregistering" % (self.name,))
             REGISTRY.unregister(all_gauges.pop(self.name))
@@ -105,7 +121,12 @@ class LaterGauge:
         all_gauges[self.name] = self


-class InFlightGauge:
+# `MetricsEntry` only makes sense when it is a `Protocol`,
+# but `Protocol` can't be used as a `TypeVar` bound.
+MetricsEntry = TypeVar("MetricsEntry")
+
+
+class InFlightGauge(Generic[MetricsEntry]):
     """Tracks number of things (e.g. requests, Measure blocks, etc) in flight
     at any given time.

@@ -115,14 +136,19 @@ class InFlightGauge:
     callbacks.

     Args:
-        name (str)
-        desc (str)
-        labels (list[str])
-        sub_metrics (list[str]): A list of sub metrics that the callbacks
-            will update.
+        name
+        desc
+        labels
+        sub_metrics: A list of sub metrics that the callbacks will update.
     """

-    def __init__(self, name, desc, labels, sub_metrics):
+    def __init__(
+        self,
+        name: str,
+        desc: str,
+        labels: Sequence[str],
+        sub_metrics: Sequence[str],
+    ):
         self.name = name
         self.desc = desc
         self.labels = labels
@@ -130,19 +156,25 @@ class InFlightGauge:

         # Create a class which have the sub_metrics values as attributes, which
         # default to 0 on initialization. Used to pass to registered callbacks.
-        self._metrics_class = attr.make_class(
+        self._metrics_class: Type[MetricsEntry] = attr.make_class(
             "_MetricsEntry", attrs={x: attr.ib(0) for x in sub_metrics}, slots=True
         )

         # Counts number of in flight blocks for a given set of label values
-        self._registrations: Dict = {}
+        self._registrations: Dict[
+            Tuple[str, ...], Set[Callable[[MetricsEntry], None]]
+        ] = {}

         # Protects access to _registrations
         self._lock = threading.Lock()

         self._register_with_collector()

-    def register(self, key, callback):
+    def register(
+        self,
+        key: Tuple[str, ...],
+        callback: Callable[[MetricsEntry], None],
+    ) -> None:
         """Registers that we've entered a new block with labels `key`.

         `callback` gets called each time the metrics are collected. The same
@@ -158,13 +190,17 @@ class InFlightGauge:
         with self._lock:
             self._registrations.setdefault(key, set()).add(callback)

-    def unregister(self, key, callback):
+    def unregister(
+        self,
+        key: Tuple[str, ...],
+        callback: Callable[[MetricsEntry], None],
+    ) -> None:
         """Registers that we've exited a block with labels `key`."""

         with self._lock:
             self._registrations.setdefault(key, set()).discard(callback)

-    def collect(self):
+    def collect(self) -> Iterable[Metric]:
         """Called by prometheus client when it reads metrics.

         Note: may be called by a separate thread.
@@ -200,7 +236,7 @@ class InFlightGauge:
             gauge.add_metric(key, getattr(metrics, name))
         yield gauge

-    def _register_with_collector(self):
+    def _register_with_collector(self) -> None:
         if self.name in all_gauges.keys():
             logger.warning("%s already registered, reregistering" % (self.name,))
             REGISTRY.unregister(all_gauges.pop(self.name))
@@ -230,7 +266,7 @@ class GaugeBucketCollector:
         name: str,
         documentation: str,
         buckets: Iterable[float],
-        registry=REGISTRY,
+        registry: CollectorRegistry = REGISTRY,
     ):
         """
         Args:
@@ -257,12 +293,12 @@ class GaugeBucketCollector:

         registry.register(self)

-    def collect(self):
+    def collect(self) -> Iterable[Metric]:
         # Don't report metrics unless we've already collected some data
         if self._metric is not None:
             yield self._metric

-    def update_data(self, values: Iterable[float]):
+    def update_data(self, values: Iterable[float]) -> None:
         """Update the data to be reported by the metric

         The existing data is cleared, and each measurement in the input is assigned
@@ -304,7 +340,7 @@ class GaugeBucketCollector:


 class CPUMetrics:
-    def __init__(self):
+    def __init__(self) -> None:
         ticks_per_sec = 100
         try:
             # Try and get the system config
@@ -314,7 +350,7 @@ class CPUMetrics:

         self.ticks_per_sec = ticks_per_sec

-    def collect(self):
+    def collect(self) -> Iterable[Metric]:
         if not HAVE_PROC_SELF_STAT:
             return

@@ -364,7 +400,7 @@ gc_time = Histogram(


 class GCCounts:
-    def collect(self):
+    def collect(self) -> Iterable[Metric]:
         cm = GaugeMetricFamily("python_gc_counts", "GC object counts", labels=["gen"])
         for n, m in enumerate(gc.get_count()):
             cm.add_metric([str(n)], m)
@@ -382,7 +418,7 @@ if not running_on_pypy:


 class PyPyGCStats:
-    def collect(self):
+    def collect(self) -> Iterable[Metric]:

         # @stats is a pretty-printer object with __str__() returning a nice table,
         # plus some fields that contain data from that table.
@@ -565,7 +601,7 @@ def register_threadpool(name: str, threadpool: ThreadPool) -> None:


 class ReactorLastSeenMetric:
-    def collect(self):
+    def collect(self) -> Iterable[Metric]:
         cm = GaugeMetricFamily(
             "python_twisted_reactor_last_seen",
             "Seconds since the Twisted reactor was last seen",
@@ -584,9 +620,12 @@ MIN_TIME_BETWEEN_GCS = (1.0, 10.0, 30.0)
 _last_gc = [0.0, 0.0, 0.0]


-def runUntilCurrentTimer(reactor, func):
+F = TypeVar("F", bound=Callable[..., Any])
+
+
+def runUntilCurrentTimer(reactor: ReactorBase, func: F) -> F:
     @functools.wraps(func)
-    def f(*args, **kwargs):
+    def f(*args: Any, **kwargs: Any) -> Any:
         now = reactor.seconds()
         num_pending = 0

@@ -649,7 +688,7 @@ def runUntilCurrentTimer(reactor, func):

         return ret

-    return f
+    return cast(F, f)


 try:
@@ -677,5 +716,5 @@ __all__ = [
     "start_http_server",
     "LaterGauge",
     "InFlightGauge",
-    "BucketCollector",
+    "GaugeBucketCollector",
 ]
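The core mechanism behind the now-generic `InFlightGauge[MetricsEntry]` is the attrs-generated "metrics entry": each registered callback receives an object whose attributes are the `sub_metrics`, and typing it generically lets mypy check callbacks against it. A self-contained sketch of just that mechanism (not the full gauge; requires the `attrs` package, and the metric names are illustrative):

import attr
from typing import Any, Callable, Dict, Set, Tuple

sub_metrics = ["real_time_max", "real_time_sum"]

# Mirrors the diff: a slotted class with each sub-metric defaulting to 0.
MetricsEntry = attr.make_class(
    "_MetricsEntry", attrs={x: attr.ib(0) for x in sub_metrics}, slots=True
)

registrations: Dict[Tuple[str, ...], Set[Callable[[Any], None]]] = {}


def on_collect(metrics: Any) -> None:
    # Callbacks mutate the entry at scrape time.
    metrics.real_time_sum += 1.5
    metrics.real_time_max = max(metrics.real_time_max, 1.5)


registrations.setdefault(("one_big_measure",), set()).add(on_collect)

# At scrape time, each callback fills in a fresh entry:
entry = MetricsEntry()
for callback in registrations[("one_big_measure",)]:
    callback(entry)
print(entry.real_time_sum, entry.real_time_max)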
synapse/metrics/_exposition.py

@@ -25,27 +25,25 @@ import math
 import threading
 from http.server import BaseHTTPRequestHandler, HTTPServer
 from socketserver import ThreadingMixIn
-from typing import Dict, List
+from typing import Any, Dict, List, Type, Union
 from urllib.parse import parse_qs, urlparse

-from prometheus_client import REGISTRY
+from prometheus_client import REGISTRY, CollectorRegistry
+from prometheus_client.core import Sample

 from twisted.web.resource import Resource
+from twisted.web.server import Request

 from synapse.util import caches

 CONTENT_TYPE_LATEST = "text/plain; version=0.0.4; charset=utf-8"


-INF = float("inf")
-MINUS_INF = float("-inf")
-
-
-def floatToGoString(d):
+def floatToGoString(d: Union[int, float]) -> str:
     d = float(d)
-    if d == INF:
+    if d == math.inf:
         return "+Inf"
-    elif d == MINUS_INF:
+    elif d == -math.inf:
         return "-Inf"
     elif math.isnan(d):
         return "NaN"
@@ -60,7 +58,7 @@ def floatToGoString(d):
         return s


-def sample_line(line, name):
+def sample_line(line: Sample, name: str) -> str:
     if line.labels:
         labelstr = "{{{0}}}".format(
             ",".join(
@@ -82,7 +80,7 @@ def sample_line(line, name):
     return "{}{} {}{}\n".format(name, labelstr, floatToGoString(line.value), timestamp)


-def generate_latest(registry, emit_help=False):
+def generate_latest(registry: CollectorRegistry, emit_help: bool = False) -> bytes:

     # Trigger the cache metrics to be rescraped, which updates the common
     # metrics but do not produce metrics themselves
@@ -187,7 +185,7 @@ class MetricsHandler(BaseHTTPRequestHandler):

     registry = REGISTRY

-    def do_GET(self):
+    def do_GET(self) -> None:
         registry = self.registry
         params = parse_qs(urlparse(self.path).query)

@@ -207,11 +205,11 @@ class MetricsHandler(BaseHTTPRequestHandler):
         self.end_headers()
         self.wfile.write(output)

-    def log_message(self, format, *args):
+    def log_message(self, format: str, *args: Any) -> None:
         """Log nothing."""

     @classmethod
-    def factory(cls, registry):
+    def factory(cls, registry: CollectorRegistry) -> Type:
         """Returns a dynamic MetricsHandler class tied
         to the passed registry.
         """
@@ -236,7 +234,9 @@ class _ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
     daemon_threads = True


-def start_http_server(port, addr="", registry=REGISTRY):
+def start_http_server(
+    port: int, addr: str = "", registry: CollectorRegistry = REGISTRY
+) -> None:
     """Starts an HTTP server for prometheus metrics as a daemon thread"""
     CustomMetricsHandler = MetricsHandler.factory(registry)
     httpd = _ThreadingSimpleServer((addr, port), CustomMetricsHandler)
@@ -252,10 +252,10 @@ class MetricsResource(Resource):

     isLeaf = True

-    def __init__(self, registry=REGISTRY):
+    def __init__(self, registry: CollectorRegistry = REGISTRY):
         self.registry = registry

-    def render_GET(self, request):
+    def render_GET(self, request: Request) -> bytes:
         request.setHeader(b"Content-Type", CONTENT_TYPE_LATEST.encode("ascii"))
         response = generate_latest(self.registry)
         request.setHeader(b"Content-Length", str(len(response)))
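The `INF`/`MINUS_INF` module constants are dropped in favour of `math.inf`, which compares equal to `float("inf")` anyway. A runnable check of the edge cases the Prometheus exposition format cares about, trimmed to the branches this hunk touches (not the full function):

import math


def float_to_go_string(d: float) -> str:
    d = float(d)
    if d == math.inf:
        return "+Inf"
    elif d == -math.inf:
        return "-Inf"
    elif math.isnan(d):
        return "NaN"
    return repr(d)


assert float_to_go_string(float("inf")) == "+Inf"
assert float_to_go_string(float("-inf")) == "-Inf"
assert float_to_go_string(float("nan")) == "NaN"
print(float_to_go_string(1.5))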
synapse/metrics/background_process_metrics.py

@@ -15,19 +15,37 @@
 import logging
 import threading
 from functools import wraps
-from typing import TYPE_CHECKING, Dict, Optional, Set, Union
+from types import TracebackType
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Awaitable,
+    Callable,
+    Dict,
+    Iterable,
+    Optional,
+    Set,
+    Type,
+    TypeVar,
+    Union,
+    cast,
+)

+from prometheus_client import Metric
 from prometheus_client.core import REGISTRY, Counter, Gauge

 from twisted.internet import defer

-from synapse.logging.context import LoggingContext, PreserveLoggingContext
+from synapse.logging.context import (
+    ContextResourceUsage,
+    LoggingContext,
+    PreserveLoggingContext,
+)
 from synapse.logging.opentracing import (
     SynapseTags,
     noop_context_manager,
     start_active_span,
 )
-from synapse.util.async_helpers import maybe_awaitable

 if TYPE_CHECKING:
     import resource
@@ -116,7 +134,7 @@ class _Collector:
     before they are returned.
     """

-    def collect(self):
+    def collect(self) -> Iterable[Metric]:
         global _background_processes_active_since_last_scrape

         # We swap out the _background_processes set with an empty one so that
@@ -144,12 +162,12 @@ REGISTRY.register(_Collector())


 class _BackgroundProcess:
-    def __init__(self, desc, ctx):
+    def __init__(self, desc: str, ctx: LoggingContext):
         self.desc = desc
         self._context = ctx
-        self._reported_stats = None
+        self._reported_stats: Optional[ContextResourceUsage] = None

-    def update_metrics(self):
+    def update_metrics(self) -> None:
         """Updates the metrics with values from this process."""
         new_stats = self._context.get_resource_usage()
         if self._reported_stats is None:
@@ -169,7 +187,16 @@ class _BackgroundProcess:
         )


-def run_as_background_process(desc: str, func, *args, bg_start_span=True, **kwargs):
+R = TypeVar("R")
+
+
+def run_as_background_process(
+    desc: str,
+    func: Callable[..., Awaitable[Optional[R]]],
+    *args: Any,
+    bg_start_span: bool = True,
+    **kwargs: Any,
+) -> "defer.Deferred[Optional[R]]":
     """Run the given function in its own logcontext, with resource metrics

     This should be used to wrap processes which are fired off to run in the
@@ -189,11 +216,13 @@ def run_as_background_process(desc: str, func, *args, bg_start_span=True, **kwargs):
         args: positional args for func
         kwargs: keyword args for func

-    Returns: Deferred which returns the result of func, but note that it does not
-        follow the synapse logcontext rules.
+    Returns:
+        Deferred which returns the result of func, or `None` if func raises.
+        Note that the returned Deferred does not follow the synapse logcontext
+        rules.
     """

-    async def run():
+    async def run() -> Optional[R]:
         with _bg_metrics_lock:
             count = _background_process_counts.get(desc, 0)
             _background_process_counts[desc] = count + 1
@@ -210,12 +239,13 @@ def run_as_background_process(desc: str, func, *args, bg_start_span=True, **kwargs):
             else:
                 ctx = noop_context_manager()
             with ctx:
-                return await maybe_awaitable(func(*args, **kwargs))
+                return await func(*args, **kwargs)
         except Exception:
             logger.exception(
                 "Background process '%s' threw an exception",
                 desc,
             )
+            return None
         finally:
             _background_process_in_flight_count.labels(desc).dec()

@@ -225,19 +255,24 @@ def run_as_background_process(desc: str, func, *args, bg_start_span=True, **kwargs):
     return defer.ensureDeferred(run())


-def wrap_as_background_process(desc):
+F = TypeVar("F", bound=Callable[..., Awaitable[Optional[Any]]])
+
+
+def wrap_as_background_process(desc: str) -> Callable[[F], F]:
     """Decorator that wraps a function that gets called as a background
     process.

-    Equivalent of calling the function with `run_as_background_process`
+    Equivalent to calling the function with `run_as_background_process`
     """

-    def wrap_as_background_process_inner(func):
+    def wrap_as_background_process_inner(func: F) -> F:
         @wraps(func)
-        def wrap_as_background_process_inner_2(*args, **kwargs):
+        def wrap_as_background_process_inner_2(
+            *args: Any, **kwargs: Any
+        ) -> "defer.Deferred[Optional[R]]":
             return run_as_background_process(desc, func, *args, **kwargs)

-        return wrap_as_background_process_inner_2
+        return cast(F, wrap_as_background_process_inner_2)

     return wrap_as_background_process_inner

@@ -265,7 +300,7 @@ class BackgroundProcessLoggingContext(LoggingContext):
         super().__init__("%s-%s" % (name, instance_id))
         self._proc = _BackgroundProcess(name, self)

-    def start(self, rusage: "Optional[resource.struct_rusage]"):
+    def start(self, rusage: "Optional[resource.struct_rusage]") -> None:
         """Log context has started running (again)."""

         super().start(rusage)
@@ -276,7 +311,12 @@ class BackgroundProcessLoggingContext(LoggingContext):
         with _bg_metrics_lock:
             _background_processes_active_since_last_scrape.add(self._proc)

-    def __exit__(self, type, value, traceback) -> None:
+    def __exit__(
+        self,
+        type: Optional[Type[BaseException]],
+        value: Optional[BaseException],
+        traceback: Optional[TracebackType],
+    ) -> None:
         """Log context has finished."""

         super().__exit__(type, value, traceback)
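The tightened signature makes a long-standing behaviour visible in the types: if the wrapped coroutine raises, the exception is logged and the Deferred resolves to `None`, hence `Deferred[Optional[R]]`. A self-contained asyncio analog of that error-swallowing contract (illustrative names, not Synapse's Twisted implementation):

import asyncio
import logging
from typing import Any, Awaitable, Callable, Optional, TypeVar

R = TypeVar("R")
logger = logging.getLogger(__name__)


async def run_in_background(
    desc: str, func: Callable[..., Awaitable[R]], *args: Any
) -> Optional[R]:
    try:
        return await func(*args)
    except Exception:
        logger.exception("Background process '%s' threw an exception", desc)
        return None  # the explicit `return None` this diff adds


async def might_fail(x: int) -> int:
    if x < 0:
        raise ValueError("negative")
    return x * 2


async def main() -> None:
    print(await run_in_background("ok", might_fail, 21))    # 42
    print(await run_in_background("boom", might_fail, -1))  # None


asyncio.run(main())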
|
@ -16,14 +16,16 @@ import ctypes
|
||||||
import logging
|
import logging
|
||||||
import os
|
import os
|
||||||
import re
|
import re
|
||||||
from typing import Optional
|
from typing import Iterable, Optional
|
||||||
|
|
||||||
|
from prometheus_client import Metric
|
||||||
|
|
||||||
from synapse.metrics import REGISTRY, GaugeMetricFamily
|
from synapse.metrics import REGISTRY, GaugeMetricFamily
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
def _setup_jemalloc_stats():
|
def _setup_jemalloc_stats() -> None:
|
||||||
"""Checks to see if jemalloc is loaded, and hooks up a collector to record
|
"""Checks to see if jemalloc is loaded, and hooks up a collector to record
|
||||||
statistics exposed by jemalloc.
|
statistics exposed by jemalloc.
|
||||||
"""
|
"""
|
||||||
|
@ -135,7 +137,7 @@ def _setup_jemalloc_stats():
|
||||||
class JemallocCollector:
|
class JemallocCollector:
|
||||||
"""Metrics for internal jemalloc stats."""
|
"""Metrics for internal jemalloc stats."""
|
||||||
|
|
||||||
def collect(self):
|
def collect(self) -> Iterable[Metric]:
|
||||||
_jemalloc_refresh_stats()
|
_jemalloc_refresh_stats()
|
||||||
|
|
||||||
g = GaugeMetricFamily(
|
g = GaugeMetricFamily(
|
||||||
|
@ -185,7 +187,7 @@ def _setup_jemalloc_stats():
|
||||||
logger.debug("Added jemalloc stats")
|
logger.debug("Added jemalloc stats")
|
||||||
|
|
||||||
|
|
||||||
def setup_jemalloc_stats():
|
def setup_jemalloc_stats() -> None:
|
||||||
"""Try to setup jemalloc stats, if jemalloc is loaded."""
|
"""Try to setup jemalloc stats, if jemalloc is loaded."""
|
||||||
|
|
||||||
try:
|
try:
|
||||||
|
|
|
synapse/module_api/__init__.py

@@ -31,11 +31,48 @@ import attr
 import jinja2

 from twisted.internet import defer
-from twisted.web.resource import IResource
+from twisted.web.resource import Resource

 from synapse.api.errors import SynapseError
 from synapse.events import EventBase
-from synapse.events.presence_router import PresenceRouter
+from synapse.events.presence_router import (
+    GET_INTERESTED_USERS_CALLBACK,
+    GET_USERS_FOR_STATES_CALLBACK,
+    PresenceRouter,
+)
+from synapse.events.spamcheck import (
+    CHECK_EVENT_FOR_SPAM_CALLBACK,
+    CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK,
+    CHECK_REGISTRATION_FOR_SPAM_CALLBACK,
+    CHECK_USERNAME_FOR_SPAM_CALLBACK,
+    USER_MAY_CREATE_ROOM_ALIAS_CALLBACK,
+    USER_MAY_CREATE_ROOM_CALLBACK,
+    USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK,
+    USER_MAY_INVITE_CALLBACK,
+    USER_MAY_JOIN_ROOM_CALLBACK,
+    USER_MAY_PUBLISH_ROOM_CALLBACK,
+    USER_MAY_SEND_3PID_INVITE_CALLBACK,
+)
+from synapse.events.third_party_rules import (
+    CHECK_EVENT_ALLOWED_CALLBACK,
+    CHECK_THREEPID_CAN_BE_INVITED_CALLBACK,
+    CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK,
+    ON_CREATE_ROOM_CALLBACK,
+    ON_NEW_EVENT_CALLBACK,
+)
+from synapse.handlers.account_validity import (
+    IS_USER_EXPIRED_CALLBACK,
+    ON_LEGACY_ADMIN_REQUEST,
+    ON_LEGACY_RENEW_CALLBACK,
+    ON_LEGACY_SEND_MAIL_CALLBACK,
+    ON_USER_REGISTRATION_CALLBACK,
+)
+from synapse.handlers.auth import (
+    CHECK_3PID_AUTH_CALLBACK,
+    CHECK_AUTH_CALLBACK,
+    ON_LOGGED_OUT_CALLBACK,
+    AuthHandler,
+)
 from synapse.http.client import SimpleHttpClient
 from synapse.http.server import (
     DirectServeHtmlResource,
@@ -114,7 +151,7 @@ class ModuleApi:
     can register new users etc if necessary.
     """

-    def __init__(self, hs: "HomeServer", auth_handler):
+    def __init__(self, hs: "HomeServer", auth_handler: AuthHandler) -> None:
         self._hs = hs

         # TODO: Fix this type hint once the types for the data stores have been ironed
@@ -156,47 +193,121 @@ class ModuleApi:
     #################################################################################
     # The following methods should only be called during the module's initialisation.

-    @property
-    def register_spam_checker_callbacks(self):
+    def register_spam_checker_callbacks(
+        self,
+        check_event_for_spam: Optional[CHECK_EVENT_FOR_SPAM_CALLBACK] = None,
+        user_may_join_room: Optional[USER_MAY_JOIN_ROOM_CALLBACK] = None,
+        user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
+        user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,
+        user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
+        user_may_create_room_with_invites: Optional[
+            USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
+        ] = None,
+        user_may_create_room_alias: Optional[
+            USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
+        ] = None,
+        user_may_publish_room: Optional[USER_MAY_PUBLISH_ROOM_CALLBACK] = None,
+        check_username_for_spam: Optional[CHECK_USERNAME_FOR_SPAM_CALLBACK] = None,
+        check_registration_for_spam: Optional[
+            CHECK_REGISTRATION_FOR_SPAM_CALLBACK
+        ] = None,
+        check_media_file_for_spam: Optional[CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK] = None,
+    ) -> None:
         """Registers callbacks for spam checking capabilities.

         Added in Synapse v1.37.0.
         """
-        return self._spam_checker.register_callbacks
+        return self._spam_checker.register_callbacks(
+            check_event_for_spam=check_event_for_spam,
+            user_may_join_room=user_may_join_room,
+            user_may_invite=user_may_invite,
+            user_may_send_3pid_invite=user_may_send_3pid_invite,
+            user_may_create_room=user_may_create_room,
+            user_may_create_room_with_invites=user_may_create_room_with_invites,
+            user_may_create_room_alias=user_may_create_room_alias,
+            user_may_publish_room=user_may_publish_room,
+            check_username_for_spam=check_username_for_spam,
+            check_registration_for_spam=check_registration_for_spam,
+            check_media_file_for_spam=check_media_file_for_spam,
+        )

-    @property
-    def register_account_validity_callbacks(self):
+    def register_account_validity_callbacks(
+        self,
+        is_user_expired: Optional[IS_USER_EXPIRED_CALLBACK] = None,
+        on_user_registration: Optional[ON_USER_REGISTRATION_CALLBACK] = None,
+        on_legacy_send_mail: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None,
+        on_legacy_renew: Optional[ON_LEGACY_RENEW_CALLBACK] = None,
+        on_legacy_admin_request: Optional[ON_LEGACY_ADMIN_REQUEST] = None,
+    ) -> None:
         """Registers callbacks for account validity capabilities.

         Added in Synapse v1.39.0.
         """
-        return self._account_validity_handler.register_account_validity_callbacks
+        return self._account_validity_handler.register_account_validity_callbacks(
+            is_user_expired=is_user_expired,
+            on_user_registration=on_user_registration,
+            on_legacy_send_mail=on_legacy_send_mail,
+            on_legacy_renew=on_legacy_renew,
+            on_legacy_admin_request=on_legacy_admin_request,
+        )

-    @property
-    def register_third_party_rules_callbacks(self):
+    def register_third_party_rules_callbacks(
+        self,
+        check_event_allowed: Optional[CHECK_EVENT_ALLOWED_CALLBACK] = None,
+        on_create_room: Optional[ON_CREATE_ROOM_CALLBACK] = None,
+        check_threepid_can_be_invited: Optional[
+            CHECK_THREEPID_CAN_BE_INVITED_CALLBACK
+        ] = None,
+        check_visibility_can_be_modified: Optional[
+            CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK
+        ] = None,
+        on_new_event: Optional[ON_NEW_EVENT_CALLBACK] = None,
+    ) -> None:
         """Registers callbacks for third party event rules capabilities.

         Added in Synapse v1.39.0.
         """
-        return self._third_party_event_rules.register_third_party_rules_callbacks
+        return self._third_party_event_rules.register_third_party_rules_callbacks(
+            check_event_allowed=check_event_allowed,
+            on_create_room=on_create_room,
+            check_threepid_can_be_invited=check_threepid_can_be_invited,
+            check_visibility_can_be_modified=check_visibility_can_be_modified,
+            on_new_event=on_new_event,
+        )

-    @property
-    def register_presence_router_callbacks(self):
+    def register_presence_router_callbacks(
+        self,
+        get_users_for_states: Optional[GET_USERS_FOR_STATES_CALLBACK] = None,
+        get_interested_users: Optional[GET_INTERESTED_USERS_CALLBACK] = None,
+    ) -> None:
         """Registers callbacks for presence router capabilities.

         Added in Synapse v1.42.0.
         """
-        return self._presence_router.register_presence_router_callbacks
+        return self._presence_router.register_presence_router_callbacks(
+            get_users_for_states=get_users_for_states,
+            get_interested_users=get_interested_users,
+        )

-    @property
-    def register_password_auth_provider_callbacks(self):
+    def register_password_auth_provider_callbacks(
+        self,
+        check_3pid_auth: Optional[CHECK_3PID_AUTH_CALLBACK] = None,
+        on_logged_out: Optional[ON_LOGGED_OUT_CALLBACK] = None,
+        auth_checkers: Optional[
+            Dict[Tuple[str, Tuple[str, ...]], CHECK_AUTH_CALLBACK]
+        ] = None,
+    ) -> None:
         """Registers callbacks for password auth provider capabilities.

         Added in Synapse v1.46.0.
         """
-        return self._password_auth_provider.register_password_auth_provider_callbacks
+        return self._password_auth_provider.register_password_auth_provider_callbacks(
+            check_3pid_auth=check_3pid_auth,
+            on_logged_out=on_logged_out,
+            auth_checkers=auth_checkers,
+        )

-    def register_web_resource(self, path: str, resource: IResource):
+    def register_web_resource(self, path: str, resource: Resource):
         """Registers a web resource to be served at the given path.

         This function should be called during initialisation of the module.
@@ -216,7 +327,7 @@ class ModuleApi:
     # The following methods can be called by the module at any point in time.

     @property
-    def http_client(self):
+    def http_client(self) -> SimpleHttpClient:
         """Allows making outbound HTTP requests to remote resources.

         An instance of synapse.http.client.SimpleHttpClient
@@ -226,7 +337,7 @@ class ModuleApi:
         return self._http_client

     @property
-    def public_room_list_manager(self):
+    def public_room_list_manager(self) -> "PublicRoomListManager":
         """Allows adding to, removing from and checking the status of rooms in the
         public room list.

@@ -309,7 +420,7 @@ class ModuleApi:
         """
         return await self._store.is_server_admin(UserID.from_string(user_id))

-    def get_qualified_user_id(self, username):
+    def get_qualified_user_id(self, username: str) -> str:
         """Qualify a user id, if necessary

         Takes a user id provided by the user and adds the @ and :domain to
@@ -318,7 +429,7 @@ class ModuleApi:
         Added in Synapse v0.25.0.

         Args:
-            username (str): provided user id
+            username: provided user id

         Returns:
             str: qualified @user:id
@@ -357,13 +468,13 @@ class ModuleApi:
         """
         return await self._store.user_get_threepids(user_id)

-    def check_user_exists(self, user_id):
+    def check_user_exists(self, user_id: str):
         """Check if user exists.

         Added in Synapse v0.25.0.

         Args:
-            user_id (str): Complete @user:id
+            user_id: Complete @user:id

         Returns:
             Deferred[str|None]: Canonical (case-corrected) user_id, or None
@@ -903,7 +1014,7 @@ class ModuleApi:
             A list containing the loaded templates, with the orders matching the one of
             the filenames parameter.
         """
-        return self._hs.config.read_templates(
+        return self._hs.config.server.read_templates(
             filenames,
             (td for td in (self.custom_template_dir, custom_template_directory) if td),
         )
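What this change means for module authors: the registration helpers were previously properties that exposed the underlying registration function, and are now real methods taking keyword arguments, so a module registers only the callbacks it implements. A sketch of a module using the new API (the `ModuleApi` instance is supplied by Synapse at module load; the module class and spam heuristic here are illustrative):

from typing import Any


class ExampleSpamCheckerModule:
    def __init__(self, config: dict, api: Any):  # api: synapse.module_api.ModuleApi
        # Register only the callbacks this module implements; the rest
        # default to None.
        api.register_spam_checker_callbacks(
            check_event_for_spam=self.check_event_for_spam,
        )

    async def check_event_for_spam(self, event: Any) -> bool:
        # Returning True marks the event as spam (toy heuristic for the sketch).
        body = getattr(event, "content", {}).get("body", "")
        return "buy now" in body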
synapse/replication/tcp/resource.py

@@ -20,7 +20,7 @@ from typing import TYPE_CHECKING

 from prometheus_client import Counter

-from twisted.internet.protocol import Factory
+from twisted.internet.protocol import ServerFactory

 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.replication.tcp.commands import PositionCommand
@@ -38,7 +38,7 @@ stream_updates_counter = Counter(
 logger = logging.getLogger(__name__)


-class ReplicationStreamProtocolFactory(Factory):
+class ReplicationStreamProtocolFactory(ServerFactory):
     """Factory for new replication connections."""

     def __init__(self, hs: "HomeServer"):
synapse/rest/__init__.py

@@ -12,7 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Callable

 from synapse.http.server import HttpServer, JsonResource
 from synapse.rest import admin
@@ -62,6 +62,8 @@ from synapse.rest.client import (
 if TYPE_CHECKING:
     from synapse.server import HomeServer

+RegisterServletsFunc = Callable[["HomeServer", HttpServer], None]
+

 class ClientRestResource(JsonResource):
     """Matrix Client API REST resource.
synapse/rest/admin/__init__.py

@@ -28,6 +28,7 @@ from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
 from synapse.rest.admin.background_updates import (
     BackgroundUpdateEnabledRestServlet,
     BackgroundUpdateRestServlet,
+    BackgroundUpdateStartJobRestServlet,
 )
 from synapse.rest.admin.devices import (
     DeleteDevicesRestServlet,
@@ -46,6 +47,9 @@ from synapse.rest.admin.registration_tokens import (
     RegistrationTokenRestServlet,
 )
 from synapse.rest.admin.rooms import (
+    BlockRoomRestServlet,
+    DeleteRoomStatusByDeleteIdRestServlet,
+    DeleteRoomStatusByRoomIdRestServlet,
     ForwardExtremitiesRestServlet,
     JoinRoomAliasServlet,
     ListRoomRestServlet,
@@ -53,6 +57,7 @@ from synapse.rest.admin.rooms import (
     RoomEventContextServlet,
     RoomMembersRestServlet,
     RoomRestServlet,
+    RoomRestV2Servlet,
     RoomStateRestServlet,
 )
 from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
@@ -220,10 +225,14 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     Register all the admin servlets.
     """
     register_servlets_for_client_rest_resource(hs, http_server)
+    BlockRoomRestServlet(hs).register(http_server)
     ListRoomRestServlet(hs).register(http_server)
     RoomStateRestServlet(hs).register(http_server)
     RoomRestServlet(hs).register(http_server)
+    RoomRestV2Servlet(hs).register(http_server)
     RoomMembersRestServlet(hs).register(http_server)
+    DeleteRoomStatusByDeleteIdRestServlet(hs).register(http_server)
+    DeleteRoomStatusByRoomIdRestServlet(hs).register(http_server)
     JoinRoomAliasServlet(hs).register(http_server)
     VersionServlet(hs).register(http_server)
     UserAdminServlet(hs).register(http_server)
@@ -253,6 +262,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     SendServerNoticeServlet(hs).register(http_server)
     BackgroundUpdateEnabledRestServlet(hs).register(http_server)
     BackgroundUpdateRestServlet(hs).register(http_server)
+    BackgroundUpdateStartJobRestServlet(hs).register(http_server)


 def register_servlets_for_client_rest_resource(
@ -12,10 +12,15 @@
|
||||||
# See the License for the specific language governing permissions and
|
# See the License for the specific language governing permissions and
|
||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
import logging
|
import logging
|
||||||
|
from http import HTTPStatus
|
||||||
from typing import TYPE_CHECKING, Tuple
|
from typing import TYPE_CHECKING, Tuple
|
||||||
|
|
||||||
from synapse.api.errors import SynapseError
|
from synapse.api.errors import SynapseError
|
||||||
from synapse.http.servlet import RestServlet, parse_json_object_from_request
|
from synapse.http.servlet import (
|
||||||
|
RestServlet,
|
||||||
|
assert_params_in_dict,
|
||||||
|
parse_json_object_from_request,
|
||||||
|
)
|
||||||
from synapse.http.site import SynapseRequest
|
from synapse.http.site import SynapseRequest
|
||||||
from synapse.rest.admin._base import admin_patterns, assert_user_is_admin
|
from synapse.rest.admin._base import admin_patterns, assert_user_is_admin
|
||||||
from synapse.types import JsonDict
|
from synapse.types import JsonDict
|
||||||
|
@ -29,37 +34,36 @@ logger = logging.getLogger(__name__)
|
||||||
class BackgroundUpdateEnabledRestServlet(RestServlet):
|
class BackgroundUpdateEnabledRestServlet(RestServlet):
|
||||||
"""Allows temporarily disabling background updates"""
|
"""Allows temporarily disabling background updates"""
|
||||||
|
|
||||||
PATTERNS = admin_patterns("/background_updates/enabled")
|
PATTERNS = admin_patterns("/background_updates/enabled$")
|
||||||
|
|
||||||
def __init__(self, hs: "HomeServer"):
|
def __init__(self, hs: "HomeServer"):
|
||||||
self.group_server = hs.get_groups_server_handler()
|
self._auth = hs.get_auth()
|
||||||
self.is_mine_id = hs.is_mine_id
|
self._data_stores = hs.get_datastores()
|
||||||
self.auth = hs.get_auth()
|
|
||||||
|
|
||||||
self.data_stores = hs.get_datastores()
|
|
||||||
|
|
||||||
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
|
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
|
||||||
requester = await self.auth.get_user_by_req(request)
|
requester = await self._auth.get_user_by_req(request)
|
||||||
await assert_user_is_admin(self.auth, requester.user)
|
await assert_user_is_admin(self._auth, requester.user)
|
||||||
|
|
||||||
         # We need to check that all configured databases have updates enabled.
         # (They *should* all be in sync.)
-        enabled = all(db.updates.enabled for db in self.data_stores.databases)
+        enabled = all(db.updates.enabled for db in self._data_stores.databases)

-        return 200, {"enabled": enabled}
+        return HTTPStatus.OK, {"enabled": enabled}

     async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        requester = await self.auth.get_user_by_req(request)
-        await assert_user_is_admin(self.auth, requester.user)
+        requester = await self._auth.get_user_by_req(request)
+        await assert_user_is_admin(self._auth, requester.user)

         body = parse_json_object_from_request(request)

         enabled = body.get("enabled", True)

         if not isinstance(enabled, bool):
-            raise SynapseError(400, "'enabled' parameter must be a boolean")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "'enabled' parameter must be a boolean"
+            )

-        for db in self.data_stores.databases:
+        for db in self._data_stores.databases:
             db.updates.enabled = enabled

             # If we're re-enabling them ensure that we start the background
@@ -67,32 +71,29 @@ class BackgroundUpdateEnabledRestServlet(RestServlet):
             if enabled:
                 db.updates.start_doing_background_updates()

-        return 200, {"enabled": enabled}
+        return HTTPStatus.OK, {"enabled": enabled}


 class BackgroundUpdateRestServlet(RestServlet):
     """Fetch information about background updates"""

-    PATTERNS = admin_patterns("/background_updates/status")
+    PATTERNS = admin_patterns("/background_updates/status$")

     def __init__(self, hs: "HomeServer"):
-        self.group_server = hs.get_groups_server_handler()
-        self.is_mine_id = hs.is_mine_id
-        self.auth = hs.get_auth()
-
-        self.data_stores = hs.get_datastores()
+        self._auth = hs.get_auth()
+        self._data_stores = hs.get_datastores()

     async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        requester = await self.auth.get_user_by_req(request)
-        await assert_user_is_admin(self.auth, requester.user)
+        requester = await self._auth.get_user_by_req(request)
+        await assert_user_is_admin(self._auth, requester.user)

         # We need to check that all configured databases have updates enabled.
         # (They *should* all be in sync.)
-        enabled = all(db.updates.enabled for db in self.data_stores.databases)
+        enabled = all(db.updates.enabled for db in self._data_stores.databases)

         current_updates = {}

-        for db in self.data_stores.databases:
+        for db in self._data_stores.databases:
             update = db.updates.get_current_update()
             if not update:
                 continue
@@ -104,4 +105,72 @@ class BackgroundUpdateRestServlet(RestServlet):
                 "average_items_per_ms": update.average_items_per_ms(),
             }

-        return 200, {"enabled": enabled, "current_updates": current_updates}
+        return HTTPStatus.OK, {"enabled": enabled, "current_updates": current_updates}
+
+
+class BackgroundUpdateStartJobRestServlet(RestServlet):
+    """Allows starting specific background updates"""
+
+    PATTERNS = admin_patterns("/background_updates/start_job")
+
+    def __init__(self, hs: "HomeServer"):
+        self._auth = hs.get_auth()
+        self._store = hs.get_datastore()
+
+    async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        requester = await self._auth.get_user_by_req(request)
+        await assert_user_is_admin(self._auth, requester.user)
+
+        body = parse_json_object_from_request(request)
+        assert_params_in_dict(body, ["job_name"])
+
+        job_name = body["job_name"]
+
+        if job_name == "populate_stats_process_rooms":
+            jobs = [
+                {
+                    "update_name": "populate_stats_process_rooms",
+                    "progress_json": "{}",
+                },
+            ]
+        elif job_name == "regenerate_directory":
+            jobs = [
+                {
+                    "update_name": "populate_user_directory_createtables",
+                    "progress_json": "{}",
+                    "depends_on": "",
+                },
+                {
+                    "update_name": "populate_user_directory_process_rooms",
+                    "progress_json": "{}",
+                    "depends_on": "populate_user_directory_createtables",
+                },
+                {
+                    "update_name": "populate_user_directory_process_users",
+                    "progress_json": "{}",
+                    "depends_on": "populate_user_directory_process_rooms",
+                },
+                {
+                    "update_name": "populate_user_directory_cleanup",
+                    "progress_json": "{}",
+                    "depends_on": "populate_user_directory_process_users",
+                },
+            ]
+        else:
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid job_name")
+
+        try:
+            await self._store.db_pool.simple_insert_many(
+                table="background_updates",
+                values=jobs,
+                desc=f"admin_api_run_{job_name}",
+            )
+        except self._store.db_pool.engine.module.IntegrityError:
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Job %s is already in queue of background updates." % (job_name,),
+            )
+
+        self._store.db_pool.updates.start_doing_background_updates()
+
+        return HTTPStatus.OK, {}
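Taken together, these servlets give an administrator a small control surface over background updates: query their status, pause or resume them, and queue specific whitelisted jobs. As a rough sketch of how the new endpoints might be driven from a script (the base URL and access token below are hypothetical placeholders, and `requests` is an assumed third-party dependency):

    import requests

    BASE = "https://homeserver.example.com"              # hypothetical homeserver
    HEADERS = {"Authorization": "Bearer <admin_token>"}  # hypothetical admin token

    # Inspect the state of background updates across all configured databases.
    status = requests.get(
        BASE + "/_synapse/admin/v1/background_updates/status", headers=HEADERS
    )
    print(status.json())  # e.g. {"enabled": true, "current_updates": {...}}

    # Queue one of the whitelisted jobs; the servlet inserts the matching rows
    # into the background_updates table and wakes the updater.
    resp = requests.post(
        BASE + "/_synapse/admin/v1/background_updates/start_job",
        headers=HEADERS,
        json={"job_name": "regenerate_directory"},
    )
    resp.raise_for_status()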
@@ -13,7 +13,7 @@
 # limitations under the License.
 import logging
 from http import HTTPStatus
-from typing import TYPE_CHECKING, List, Optional, Tuple
+from typing import TYPE_CHECKING, List, Optional, Tuple, cast
 from urllib import parse as urlparse

 from synapse.api.constants import EventTypes, JoinRules, Membership
@@ -34,7 +34,7 @@ from synapse.rest.admin._base import (
     assert_user_is_admin,
 )
 from synapse.storage.databases.main.room import RoomSortOrder
-from synapse.types import JsonDict, UserID, create_requester
+from synapse.types import JsonDict, RoomID, UserID, create_requester
 from synapse.util import json_decoder

 if TYPE_CHECKING:
@@ -46,6 +46,138 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)


+class RoomRestV2Servlet(RestServlet):
+    """Delete a room from server asynchronously with a background task.
+
+    It is a combination and improvement of shutdown and purge room.
+
+    Shuts down a room by removing all local users from the room.
+    Blocking all future invites and joins to the room is optional.
+
+    If desired, any local aliases will be repointed to a new room
+    created by `new_room_user_id` and kicked users will be auto-
+    joined to the new room.
+
+    If 'purge' is true, it will remove all traces of a room from the database.
+    """
+
+    PATTERNS = admin_patterns("/rooms/(?P<room_id>[^/]+)$", "v2")
+
+    def __init__(self, hs: "HomeServer"):
+        self._auth = hs.get_auth()
+        self._store = hs.get_datastore()
+        self._pagination_handler = hs.get_pagination_handler()
+
+    async def on_DELETE(
+        self, request: SynapseRequest, room_id: str
+    ) -> Tuple[int, JsonDict]:
+
+        requester = await self._auth.get_user_by_req(request)
+        await assert_user_is_admin(self._auth, requester.user)
+
+        content = parse_json_object_from_request(request)
+
+        block = content.get("block", False)
+        if not isinstance(block, bool):
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Param 'block' must be a boolean, if given",
+                Codes.BAD_JSON,
+            )
+
+        purge = content.get("purge", True)
+        if not isinstance(purge, bool):
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Param 'purge' must be a boolean, if given",
+                Codes.BAD_JSON,
+            )
+
+        force_purge = content.get("force_purge", False)
+        if not isinstance(force_purge, bool):
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Param 'force_purge' must be a boolean, if given",
+                Codes.BAD_JSON,
+            )
+
+        if not RoomID.is_valid(room_id):
+            raise SynapseError(400, "%s is not a legal room ID" % (room_id,))
+
+        if not await self._store.get_room(room_id):
+            raise NotFoundError("Unknown room id %s" % (room_id,))
+
+        delete_id = self._pagination_handler.start_shutdown_and_purge_room(
+            room_id=room_id,
+            new_room_user_id=content.get("new_room_user_id"),
+            new_room_name=content.get("room_name"),
+            message=content.get("message"),
+            requester_user_id=requester.user.to_string(),
+            block=block,
+            purge=purge,
+            force_purge=force_purge,
+        )
+
+        return 200, {"delete_id": delete_id}
+
+
+class DeleteRoomStatusByRoomIdRestServlet(RestServlet):
+    """Get the status of the delete room background task."""
+
+    PATTERNS = admin_patterns("/rooms/(?P<room_id>[^/]+)/delete_status$", "v2")
+
+    def __init__(self, hs: "HomeServer"):
+        self._auth = hs.get_auth()
+        self._pagination_handler = hs.get_pagination_handler()
+
+    async def on_GET(
+        self, request: SynapseRequest, room_id: str
+    ) -> Tuple[int, JsonDict]:
+
+        await assert_requester_is_admin(self._auth, request)
+
+        if not RoomID.is_valid(room_id):
+            raise SynapseError(400, "%s is not a legal room ID" % (room_id,))
+
+        delete_ids = self._pagination_handler.get_delete_ids_by_room(room_id)
+        if delete_ids is None:
+            raise NotFoundError("No delete task for room_id '%s' found" % room_id)
+
+        response = []
+        for delete_id in delete_ids:
+            delete = self._pagination_handler.get_delete_status(delete_id)
+            if delete:
+                response += [
+                    {
+                        "delete_id": delete_id,
+                        **delete.asdict(),
+                    }
+                ]
+        return 200, {"results": cast(JsonDict, response)}
+
+
+class DeleteRoomStatusByDeleteIdRestServlet(RestServlet):
+    """Get the status of the delete room background task."""
+
+    PATTERNS = admin_patterns("/rooms/delete_status/(?P<delete_id>[^/]+)$", "v2")
+
+    def __init__(self, hs: "HomeServer"):
+        self._auth = hs.get_auth()
+        self._pagination_handler = hs.get_pagination_handler()
+
+    async def on_GET(
+        self, request: SynapseRequest, delete_id: str
+    ) -> Tuple[int, JsonDict]:
+
+        await assert_requester_is_admin(self._auth, request)
+
+        delete_status = self._pagination_handler.get_delete_status(delete_id)
+        if delete_status is None:
+            raise NotFoundError("delete id '%s' not found" % delete_id)
+
+        return 200, cast(JsonDict, delete_status.asdict())
+
+
 class ListRoomRestServlet(RestServlet):
     """
     List all rooms that are known to the homeserver. Results are returned
@@ -239,9 +371,22 @@ class RoomRestServlet(RestServlet):

     # Purge room
     if purge:
-        await pagination_handler.purge_room(room_id, force=force_purge)
+        try:
+            await pagination_handler.purge_room(room_id, force=force_purge)
+        except NotFoundError:
+            if block:
+                # We can block unknown rooms with this endpoint, in which case
+                # a failed purge is expected.
+                pass
+            else:
+                # But otherwise, we expect this purge to have succeeded.
+                raise

-    return 200, ret
+    # Cast safety: cast away the knowledge that this is a TypedDict.
+    # See https://github.com/python/mypy/issues/4976#issuecomment-579883622
+    # for some discussion on why this is necessary. Either way,
+    # `ret` is an opaque dictionary blob as far as the rest of the app cares.
+    return 200, cast(JsonDict, ret)


 class RoomMembersRestServlet(RestServlet):
@@ -303,7 +448,7 @@ class RoomStateRestServlet(RestServlet):
             now,
             # We don't bother bundling aggregations in when asked for state
             # events, as clients won't use them.
-            bundle_aggregations=False,
+            bundle_relations=False,
         )
         ret = {"state": room_state}

@@ -583,6 +728,7 @@ class RoomEventContextServlet(RestServlet):

     def __init__(self, hs: "HomeServer"):
         super().__init__()
+        self._hs = hs
         self.clock = hs.get_clock()
         self.room_context_handler = hs.get_room_context_handler()
         self._event_serializer = hs.get_event_client_serializer()
@@ -600,7 +746,9 @@ class RoomEventContextServlet(RestServlet):
         filter_str = parse_string(request, "filter", encoding="utf-8")
         if filter_str:
             filter_json = urlparse.unquote(filter_str)
-            event_filter: Optional[Filter] = Filter(json_decoder.decode(filter_json))
+            event_filter: Optional[Filter] = Filter(
+                self._hs, json_decoder.decode(filter_json)
+            )
         else:
             event_filter = None

@@ -630,7 +778,70 @@ class RoomEventContextServlet(RestServlet):
             results["state"],
             time_now,
             # No need to bundle aggregations for state events
-            bundle_aggregations=False,
+            bundle_relations=False,
         )

         return 200, results
+
+
+class BlockRoomRestServlet(RestServlet):
+    """
+    Manage blocking of rooms.
+    On PUT: Add or remove a room from the blocking list.
+    On GET: Get the blocking status of a room and the user who blocked it.
+    """
+
+    PATTERNS = admin_patterns("/rooms/(?P<room_id>[^/]+)/block$")
+
+    def __init__(self, hs: "HomeServer"):
+        self._auth = hs.get_auth()
+        self._store = hs.get_datastore()
+
+    async def on_GET(
+        self, request: SynapseRequest, room_id: str
+    ) -> Tuple[int, JsonDict]:
+        await assert_requester_is_admin(self._auth, request)
+
+        if not RoomID.is_valid(room_id):
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "%s is not a legal room ID" % (room_id,)
+            )
+
+        blocked_by = await self._store.room_is_blocked_by(room_id)
+        # Test `not None`, since `user_id` may be an empty string
+        # if someone manually added an entry to the database.
+        if blocked_by is not None:
+            response = {"block": True, "user_id": blocked_by}
+        else:
+            response = {"block": False}
+
+        return HTTPStatus.OK, response
+
+    async def on_PUT(
+        self, request: SynapseRequest, room_id: str
+    ) -> Tuple[int, JsonDict]:
+        requester = await self._auth.get_user_by_req(request)
+        await assert_user_is_admin(self._auth, requester.user)
+
+        content = parse_json_object_from_request(request)
+
+        if not RoomID.is_valid(room_id):
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "%s is not a legal room ID" % (room_id,)
+            )
+
+        assert_params_in_dict(content, ["block"])
+        block = content.get("block")
+        if not isinstance(block, bool):
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Param 'block' must be a boolean.",
+                Codes.BAD_JSON,
+            )
+
+        if block:
+            await self._store.block_room(room_id, requester.user.to_string())
+        else:
+            await self._store.unblock_room(room_id)
+
+        return HTTPStatus.OK, {"block": block}
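Because the v2 deletion runs as a background task, a caller submits the request and then polls one of the status endpoints with the returned `delete_id`. A minimal sketch of that flow, under the same hypothetical base URL and token as above:

    import time

    import requests

    BASE = "https://homeserver.example.com"              # hypothetical
    HEADERS = {"Authorization": "Bearer <admin_token>"}  # hypothetical
    ROOM = "!somewhere:example.com"                      # hypothetical room ID

    # Kick off the background shutdown-and-purge task.
    resp = requests.delete(
        BASE + f"/_synapse/admin/v2/rooms/{ROOM}",
        headers=HEADERS,
        json={"block": True, "purge": True},
    )
    delete_id = resp.json()["delete_id"]

    # Poll until the task finishes; "complete" and "failed" are the terminal
    # statuses documented for this API.
    while True:
        status = requests.get(
            BASE + f"/_synapse/admin/v2/rooms/delete_status/{delete_id}",
            headers=HEADERS,
        ).json()
        if status.get("status") in ("complete", "failed"):
            break
        time.sleep(5)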
@@ -898,7 +898,7 @@ class UserTokenRestServlet(RestServlet):
         if auth_user.to_string() == user_id:
             raise SynapseError(400, "Cannot use admin API to login as self")

-        token = await self.auth_handler.get_access_token_for_user_id(
+        token = await self.auth_handler.create_access_token_for_user_id(
             user_id=auth_user.to_string(),
             device_id=None,
             valid_until_ms=valid_until_ms,
@@ -909,7 +909,7 @@ class UserTokenRestServlet(RestServlet):


 class ShadowBanRestServlet(RestServlet):
-    """An admin API for shadow-banning a user.
+    """An admin API for controlling whether a user is shadow-banned.

     A shadow-banned user receives successful responses to their client-server
     API requests, but the events are not propagated into rooms.
@@ -917,11 +917,19 @@ class ShadowBanRestServlet(RestServlet):
     Shadow-banning a user should be used as a tool of last resort and may lead
     to confusing or broken behaviour for the client.

-    Example:
+    Example of shadow-banning a user:

         POST /_synapse/admin/v1/users/@test:example.com/shadow_ban
         {}

+        200 OK
+        {}
+
+    Example of removing a user from being shadow-banned:
+
+        DELETE /_synapse/admin/v1/users/@test:example.com/shadow_ban
+        {}
+
         200 OK
         {}
     """
@@ -945,6 +953,18 @@ class ShadowBanRestServlet(RestServlet):

         return 200, {}

+    async def on_DELETE(
+        self, request: SynapseRequest, user_id: str
+    ) -> Tuple[int, JsonDict]:
+        await assert_requester_is_admin(self.auth, request)
+
+        if not self.hs.is_mine_id(user_id):
+            raise SynapseError(400, "Only local users can be shadow-banned")
+
+        await self.store.set_shadow_banned(UserID.from_string(user_id), False)
+
+        return 200, {}
+

 class RateLimitRestServlet(RestServlet):
     """An admin API to override ratelimiting for a user.
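With the new `on_DELETE` handler, the same URL now toggles the ban in both directions. A two-line sketch, again with a hypothetical server and token:

    import requests

    BASE = "https://homeserver.example.com"              # hypothetical
    HEADERS = {"Authorization": "Bearer <admin_token>"}  # hypothetical
    URL = BASE + "/_synapse/admin/v1/users/@test:example.com/shadow_ban"

    requests.post(URL, headers=HEADERS, json={})    # shadow-ban the user
    requests.delete(URL, headers=HEADERS, json={})  # un-shadow-ban the user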
@@ -27,7 +27,7 @@ logger = logging.getLogger(__name__)

 def client_patterns(
     path_regex: str,
-    releases: Iterable[int] = (0,),
+    releases: Iterable[str] = ("r0", "v3"),
     unstable: bool = True,
     v1: bool = False,
 ) -> Iterable[Pattern]:
@@ -52,7 +52,7 @@ def client_patterns(
         v1_prefix = CLIENT_API_PREFIX + "/api/v1"
         patterns.append(re.compile("^" + v1_prefix + path_regex))
     for release in releases:
-        new_prefix = CLIENT_API_PREFIX + "/r%d" % (release,)
+        new_prefix = CLIENT_API_PREFIX + f"/{release}"
         patterns.append(re.compile("^" + new_prefix + path_regex))

     return patterns
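The effect of the `releases` change is that every servlet registered through `client_patterns` is now served under both the legacy `r0` prefix and the Matrix v1.1 `v3` prefix. A self-contained sketch of the new prefix construction (`CLIENT_API_PREFIX` is assumed here to be `/_matrix/client`, consistent with the v1 prefix built above):

    import re

    CLIENT_API_PREFIX = "/_matrix/client"  # assumed value

    def client_patterns_sketch(path_regex: str, releases=("r0", "v3")):
        patterns = []
        for release in releases:
            # Releases are now strings interpolated directly into the prefix.
            new_prefix = CLIENT_API_PREFIX + f"/{release}"
            patterns.append(re.compile("^" + new_prefix + path_regex))
        return patterns

    for pattern in client_patterns_sketch("/sync$"):
        print(pattern.pattern)
    # ^/_matrix/client/r0/sync$
    # ^/_matrix/client/v3/sync$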
@@ -262,7 +262,7 @@ class SigningKeyUploadServlet(RestServlet):
         }
     """

-    PATTERNS = client_patterns("/keys/device_signing/upload$", releases=())
+    PATTERNS = client_patterns("/keys/device_signing/upload$", releases=("v3",))

     def __init__(self, hs: "HomeServer"):
         super().__init__()
@@ -61,7 +61,8 @@ class LoginRestServlet(RestServlet):
     TOKEN_TYPE = "m.login.token"
     JWT_TYPE = "org.matrix.login.jwt"
     JWT_TYPE_DEPRECATED = "m.login.jwt"
-    APPSERVICE_TYPE = "uk.half-shot.msc2778.login.application_service"
+    APPSERVICE_TYPE = "m.login.application_service"
+    APPSERVICE_TYPE_UNSTABLE = "uk.half-shot.msc2778.login.application_service"
     REFRESH_TOKEN_PARAM = "org.matrix.msc2918.refresh_token"

     def __init__(self, hs: "HomeServer"):
@@ -71,6 +72,7 @@ class LoginRestServlet(RestServlet):
         # JWT configuration variables.
         self.jwt_enabled = hs.config.jwt.jwt_enabled
         self.jwt_secret = hs.config.jwt.jwt_secret
+        self.jwt_subject_claim = hs.config.jwt.jwt_subject_claim
         self.jwt_algorithm = hs.config.jwt.jwt_algorithm
         self.jwt_issuer = hs.config.jwt.jwt_issuer
         self.jwt_audiences = hs.config.jwt.jwt_audiences
@@ -79,7 +81,9 @@ class LoginRestServlet(RestServlet):
         self.saml2_enabled = hs.config.saml2.saml2_enabled
         self.cas_enabled = hs.config.cas.cas_enabled
         self.oidc_enabled = hs.config.oidc.oidc_enabled
-        self._msc2918_enabled = hs.config.registration.access_token_lifetime is not None
+        self._msc2918_enabled = (
+            hs.config.registration.refreshable_access_token_lifetime is not None
+        )

         self.auth = hs.get_auth()

@@ -143,6 +147,7 @@ class LoginRestServlet(RestServlet):
         flows.extend({"type": t} for t in self.auth_handler.get_supported_login_types())

         flows.append({"type": LoginRestServlet.APPSERVICE_TYPE})
+        flows.append({"type": LoginRestServlet.APPSERVICE_TYPE_UNSTABLE})

         return 200, {"flows": flows}

@@ -159,7 +164,10 @@ class LoginRestServlet(RestServlet):
         should_issue_refresh_token = False

         try:
-            if login_submission["type"] == LoginRestServlet.APPSERVICE_TYPE:
+            if login_submission["type"] in (
+                LoginRestServlet.APPSERVICE_TYPE,
+                LoginRestServlet.APPSERVICE_TYPE_UNSTABLE,
+            ):
                 appservice = self.auth.get_appservice_by_req(request)

                 if appservice.is_rate_limited():
@@ -408,7 +416,7 @@ class LoginRestServlet(RestServlet):
                 errcode=Codes.FORBIDDEN,
             )

-        user = payload.get("sub", None)
+        user = payload.get(self.jwt_subject_claim, None)
         if user is None:
             raise LoginError(403, "Invalid JWT", errcode=Codes.FORBIDDEN)

@@ -447,7 +455,9 @@ class RefreshTokenServlet(RestServlet):
     def __init__(self, hs: "HomeServer"):
         self._auth_handler = hs.get_auth_handler()
         self._clock = hs.get_clock()
-        self.access_token_lifetime = hs.config.registration.access_token_lifetime
+        self.refreshable_access_token_lifetime = (
+            hs.config.registration.refreshable_access_token_lifetime
+        )

     async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         refresh_submission = parse_json_object_from_request(request)
@@ -457,7 +467,9 @@ class RefreshTokenServlet(RestServlet):
         if not isinstance(token, str):
             raise SynapseError(400, "Invalid param: refresh_token", Codes.INVALID_PARAM)

-        valid_until_ms = self._clock.time_msec() + self.access_token_lifetime
+        valid_until_ms = (
+            self._clock.time_msec() + self.refreshable_access_token_lifetime
+        )
         access_token, refresh_token = await self._auth_handler.refresh_token(
             token, valid_until_ms
         )
@@ -556,7 +568,7 @@ class CasTicketServlet(RestServlet):

 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     LoginRestServlet(hs).register(http_server)
-    if hs.config.registration.access_token_lifetime is not None:
+    if hs.config.registration.refreshable_access_token_lifetime is not None:
         RefreshTokenServlet(hs).register(http_server)
     SsoRedirectServlet(hs).register(http_server)
     if hs.config.cas.cas_enabled:
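The configurable subject claim means a deployment whose identity provider stores the localpart somewhere other than `sub` can still use JWT login. A hedged sketch with PyJWT (the `uid` claim name and the shared secret are hypothetical; the homeserver would need `jwt_subject_claim: uid` in its config for this to work):

    import jwt  # PyJWT

    # Mint a token whose subject lives in a non-standard claim.
    token = jwt.encode({"uid": "alice"}, "shared-secret", algorithm="HS256")

    # The client then logs in as usual with the org.matrix.login.jwt type:
    # POST /_matrix/client/v3/login
    # {"type": "org.matrix.login.jwt", "token": "<token>"}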
@@ -420,7 +420,9 @@ class RegisterRestServlet(RestServlet):
         self.password_policy_handler = hs.get_password_policy_handler()
         self.clock = hs.get_clock()
         self._registration_enabled = self.hs.config.registration.enable_registration
-        self._msc2918_enabled = hs.config.registration.access_token_lifetime is not None
+        self._msc2918_enabled = (
+            hs.config.registration.refreshable_access_token_lifetime is not None
+        )

         self._registration_flows = _calculate_registration_flows(
             hs.config, self.auth_handler
@@ -224,17 +224,17 @@ class RelationPaginationServlet(RestServlet):
         )

         now = self.clock.time_msec()
-        # We set bundle_aggregations to False when retrieving the original
+        # We set bundle_relations to False when retrieving the original
         # event because we want the content before relations were applied to
         # it.
         original_event = await self._event_serializer.serialize_event(
-            event, now, bundle_aggregations=False
+            event, now, bundle_relations=False
         )
         # Similarly, we don't allow relations to be applied to relations, so we
         # return the original relations without any aggregations on top of them
         # here.
         serialized_events = await self._event_serializer.serialize_events(
-            events, now, bundle_aggregations=False
+            events, now, bundle_relations=False
         )

         return_value = pagination_chunk.to_dict()
@@ -298,7 +298,9 @@ class RelationAggregationPaginationServlet(RestServlet):
             raise SynapseError(404, "Unknown parent event.")

         if relation_type not in (RelationTypes.ANNOTATION, None):
-            raise SynapseError(400, "Relation type must be 'annotation'")
+            raise SynapseError(
+                400, f"Relation type must be '{RelationTypes.ANNOTATION}'"
+            )

         limit = parse_integer(request, "limit", default=5)
         from_token_str = parse_string(request, "from")
@@ -554,6 +554,7 @@ class RoomMessageListRestServlet(RestServlet):

     def __init__(self, hs: "HomeServer"):
         super().__init__()
+        self._hs = hs
         self.pagination_handler = hs.get_pagination_handler()
         self.auth = hs.get_auth()
         self.store = hs.get_datastore()
@@ -571,7 +572,9 @@ class RoomMessageListRestServlet(RestServlet):
         filter_str = parse_string(request, "filter", encoding="utf-8")
         if filter_str:
             filter_json = urlparse.unquote(filter_str)
-            event_filter: Optional[Filter] = Filter(json_decoder.decode(filter_json))
+            event_filter: Optional[Filter] = Filter(
+                self._hs, json_decoder.decode(filter_json)
+            )
             if (
                 event_filter
                 and event_filter.filter_json.get("event_format", "client")
@@ -676,6 +679,7 @@ class RoomEventContextServlet(RestServlet):

     def __init__(self, hs: "HomeServer"):
         super().__init__()
+        self._hs = hs
         self.clock = hs.get_clock()
         self.room_context_handler = hs.get_room_context_handler()
         self._event_serializer = hs.get_event_client_serializer()
@@ -692,7 +696,9 @@ class RoomEventContextServlet(RestServlet):
         filter_str = parse_string(request, "filter", encoding="utf-8")
         if filter_str:
             filter_json = urlparse.unquote(filter_str)
-            event_filter: Optional[Filter] = Filter(json_decoder.decode(filter_json))
+            event_filter: Optional[Filter] = Filter(
+                self._hs, json_decoder.decode(filter_json)
+            )
         else:
             event_filter = None

@@ -717,7 +723,7 @@ class RoomEventContextServlet(RestServlet):
             results["state"],
             time_now,
             # No need to bundle aggregations for state events
-            bundle_aggregations=False,
+            bundle_relations=False,
         )

         return 200, results
@@ -29,7 +29,7 @@ from typing import (

 from synapse.api.constants import Membership, PresenceState
 from synapse.api.errors import Codes, StoreError, SynapseError
-from synapse.api.filtering import DEFAULT_FILTER_COLLECTION, FilterCollection
+from synapse.api.filtering import FilterCollection
 from synapse.api.presence import UserPresenceState
 from synapse.events import EventBase
 from synapse.events.utils import (
@@ -150,7 +150,7 @@ class SyncRestServlet(RestServlet):
         request_key = (user, timeout, since, filter_id, full_state, device_id)

         if filter_id is None:
-            filter_collection = DEFAULT_FILTER_COLLECTION
+            filter_collection = self.filtering.DEFAULT_FILTER_COLLECTION
         elif filter_id.startswith("{"):
             try:
                 filter_object = json_decoder.decode(filter_id)
@@ -160,7 +160,7 @@ class SyncRestServlet(RestServlet):
             except Exception:
                 raise SynapseError(400, "Invalid filter JSON")
             self.filtering.check_valid_filter(filter_object)
-            filter_collection = FilterCollection(filter_object)
+            filter_collection = FilterCollection(self.hs, filter_object)
         else:
             try:
                 filter_collection = await self.filtering.get_user_filter(
@@ -522,7 +522,7 @@ class SyncRestServlet(RestServlet):
             time_now=time_now,
             # We don't bundle "live" events, as otherwise clients
             # will end up double counting annotations.
-            bundle_aggregations=False,
+            bundle_relations=False,
             token_id=token_id,
             event_format=event_formatter,
             only_event_fields=only_fields,
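Threading the homeserver into `Filter` and `FilterCollection` is what allows the experimental MSC3440 filter fields (filtering by relation senders and types) to be evaluated, since that check needs database access. Purely as an illustration of the sort of filter a client might send while the MSC is unstable, the field names below follow Synapse's experimental, unstable-prefixed implementation and should be treated as assumptions:

    # Hypothetical /sync filter using unstable MSC3440 fields.
    filter_json = {
        "room": {
            "timeline": {
                "io.element.relation_senders": ["@alice:example.com"],
                "io.element.relation_types": ["io.element.thread"],
            }
        }
    }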
@@ -29,7 +29,7 @@ from synapse.api.errors import Codes, SynapseError, cs_error
 from synapse.http.server import finish_request, respond_with_json
 from synapse.http.site import SynapseRequest
 from synapse.logging.context import make_deferred_yieldable
-from synapse.util.stringutils import is_ascii
+from synapse.util.stringutils import is_ascii, parse_and_validate_server_name

 logger = logging.getLogger(__name__)

@@ -51,6 +51,19 @@ TEXT_CONTENT_TYPES = [


 def parse_media_id(request: Request) -> Tuple[str, str, Optional[str]]:
+    """Parses the server name, media ID and optional file name from the request URI
+
+    Also performs some rough validation on the server name.
+
+    Args:
+        request: The `Request`.
+
+    Returns:
+        A tuple containing the parsed server name, media ID and optional file name.
+
+    Raises:
+        SynapseError(404): if parsing or validation fail for any reason
+    """
     try:
         # The type on postpath seems incorrect in Twisted 21.2.0.
         postpath: List[bytes] = request.postpath  # type: ignore
@@ -62,6 +75,9 @@ def parse_media_id(request: Request) -> Tuple[str, str, Optional[str]]:
         server_name = server_name_bytes.decode("utf-8")
         media_id = media_id_bytes.decode("utf8")

+        # Validate the server name, raising if invalid
+        parse_and_validate_server_name(server_name)
+
         file_name = None
         if len(postpath) > 2:
             try:
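For a sense of what the added validation rejects, a small sketch against the helper's contract (it is assumed, per its use here, to raise `ValueError` on syntactically invalid server names, which `parse_media_id` then surfaces as a 404):

    from synapse.util.stringutils import parse_and_validate_server_name

    for name in ("matrix.org", "matrix.org:8448", "[::1]:8448", "not/a/server"):
        try:
            parse_and_validate_server_name(name)
            print(f"{name!r}: ok")
        except ValueError:
            print(f"{name!r}: rejected")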
@@ -16,7 +16,8 @@
 import functools
 import os
 import re
-from typing import Any, Callable, List, TypeVar, cast
+import string
+from typing import Any, Callable, List, TypeVar, Union, cast

 NEW_FORMAT_ID_RE = re.compile(r"^\d\d\d\d-\d\d-\d\d")

@@ -37,6 +38,85 @@ def _wrap_in_base_path(func: F) -> F:
     return cast(F, _wrapped)


+GetPathMethod = TypeVar(
+    "GetPathMethod", bound=Union[Callable[..., str], Callable[..., List[str]]]
+)
+
+
+def _wrap_with_jail_check(func: GetPathMethod) -> GetPathMethod:
+    """Wraps a path-returning method to check that the returned path(s) do not escape
+    the media store directory.
+
+    The check is not expected to ever fail, unless `func` is missing a call to
+    `_validate_path_component`, or `_validate_path_component` is buggy.
+
+    Args:
+        func: The `MediaFilePaths` method to wrap. The method may return either a single
+            path, or a list of paths. Returned paths may be either absolute or relative.
+
+    Returns:
+        The method, wrapped with a check to ensure that the returned path(s) lie within
+        the media store directory. Raises a `ValueError` if the check fails.
+    """
+
+    @functools.wraps(func)
+    def _wrapped(
+        self: "MediaFilePaths", *args: Any, **kwargs: Any
+    ) -> Union[str, List[str]]:
+        path_or_paths = func(self, *args, **kwargs)
+
+        if isinstance(path_or_paths, list):
+            paths_to_check = path_or_paths
+        else:
+            paths_to_check = [path_or_paths]
+
+        for path in paths_to_check:
+            # path may be an absolute or relative path, depending on the method being
+            # wrapped. When "appending" an absolute path, `os.path.join` discards the
+            # previous path, which is desired here.
+            normalized_path = os.path.normpath(os.path.join(self.real_base_path, path))
+            if (
+                os.path.commonpath([normalized_path, self.real_base_path])
+                != self.real_base_path
+            ):
+                raise ValueError(f"Invalid media store path: {path!r}")
+
+        return path_or_paths
+
+    return cast(GetPathMethod, _wrapped)
+
+
+ALLOWED_CHARACTERS = set(
+    string.ascii_letters
+    + string.digits
+    + "_-"
+    + ".[]:"  # Domain names, IPv6 addresses and ports in server names
+)
+FORBIDDEN_NAMES = {
+    "",
+    os.path.curdir,  # "." for the current platform
+    os.path.pardir,  # ".." for the current platform
+}
+
+
+def _validate_path_component(name: str) -> str:
+    """Checks that the given string can be safely used as a path component
+
+    Args:
+        name: The path component to check.
+
+    Returns:
+        The path component if valid.
+
+    Raises:
+        ValueError: If `name` cannot be safely used as a path component.
+    """
+    if not ALLOWED_CHARACTERS.issuperset(name) or name in FORBIDDEN_NAMES:
+        raise ValueError(f"Invalid path component: {name!r}")
+
+    return name
+
+
 class MediaFilePaths:
     """Describes where files are stored on disk.

@@ -48,22 +128,46 @@ class MediaFilePaths:
     def __init__(self, primary_base_path: str):
         self.base_path = primary_base_path

+        # The media store directory, with all symlinks resolved.
+        self.real_base_path = os.path.realpath(primary_base_path)
+
+        # Refuse to initialize if paths cannot be validated correctly for the current
+        # platform.
+        assert os.path.sep not in ALLOWED_CHARACTERS
+        assert os.path.altsep not in ALLOWED_CHARACTERS
+        # On Windows, paths have all sorts of weirdness which `_validate_path_component`
+        # does not consider. In any case, the remote media store can't work correctly
+        # for certain homeservers there, since ":"s aren't allowed in paths.
+        assert os.name == "posix"
+
+    @_wrap_with_jail_check
     def local_media_filepath_rel(self, media_id: str) -> str:
-        return os.path.join("local_content", media_id[0:2], media_id[2:4], media_id[4:])
+        return os.path.join(
+            "local_content",
+            _validate_path_component(media_id[0:2]),
+            _validate_path_component(media_id[2:4]),
+            _validate_path_component(media_id[4:]),
+        )

     local_media_filepath = _wrap_in_base_path(local_media_filepath_rel)

+    @_wrap_with_jail_check
     def local_media_thumbnail_rel(
         self, media_id: str, width: int, height: int, content_type: str, method: str
     ) -> str:
         top_level_type, sub_type = content_type.split("/")
         file_name = "%i-%i-%s-%s-%s" % (width, height, top_level_type, sub_type, method)
         return os.path.join(
-            "local_thumbnails", media_id[0:2], media_id[2:4], media_id[4:], file_name
+            "local_thumbnails",
+            _validate_path_component(media_id[0:2]),
+            _validate_path_component(media_id[2:4]),
+            _validate_path_component(media_id[4:]),
+            _validate_path_component(file_name),
         )

     local_media_thumbnail = _wrap_in_base_path(local_media_thumbnail_rel)

+    @_wrap_with_jail_check
     def local_media_thumbnail_dir(self, media_id: str) -> str:
         """
         Retrieve the local store path of thumbnails of a given media_id
@@ -76,18 +180,24 @@ class MediaFilePaths:
         return os.path.join(
             self.base_path,
             "local_thumbnails",
-            media_id[0:2],
-            media_id[2:4],
-            media_id[4:],
+            _validate_path_component(media_id[0:2]),
+            _validate_path_component(media_id[2:4]),
+            _validate_path_component(media_id[4:]),
         )

+    @_wrap_with_jail_check
     def remote_media_filepath_rel(self, server_name: str, file_id: str) -> str:
         return os.path.join(
-            "remote_content", server_name, file_id[0:2], file_id[2:4], file_id[4:]
+            "remote_content",
+            _validate_path_component(server_name),
+            _validate_path_component(file_id[0:2]),
+            _validate_path_component(file_id[2:4]),
+            _validate_path_component(file_id[4:]),
         )

     remote_media_filepath = _wrap_in_base_path(remote_media_filepath_rel)

+    @_wrap_with_jail_check
     def remote_media_thumbnail_rel(
         self,
         server_name: str,
@@ -101,11 +211,11 @@ class MediaFilePaths:
         file_name = "%i-%i-%s-%s-%s" % (width, height, top_level_type, sub_type, method)
         return os.path.join(
             "remote_thumbnail",
-            server_name,
-            file_id[0:2],
-            file_id[2:4],
-            file_id[4:],
-            file_name,
+            _validate_path_component(server_name),
+            _validate_path_component(file_id[0:2]),
+            _validate_path_component(file_id[2:4]),
+            _validate_path_component(file_id[4:]),
+            _validate_path_component(file_name),
         )

     remote_media_thumbnail = _wrap_in_base_path(remote_media_thumbnail_rel)
@@ -113,6 +223,7 @@ class MediaFilePaths:
     # Legacy path that was used to store thumbnails previously.
     # Should be removed after some time, when most of the thumbnails are stored
     # using the new path.
+    @_wrap_with_jail_check
     def remote_media_thumbnail_rel_legacy(
         self, server_name: str, file_id: str, width: int, height: int, content_type: str
     ) -> str:
@@ -120,43 +231,66 @@ class MediaFilePaths:
         file_name = "%i-%i-%s-%s" % (width, height, top_level_type, sub_type)
         return os.path.join(
             "remote_thumbnail",
-            server_name,
-            file_id[0:2],
-            file_id[2:4],
-            file_id[4:],
-            file_name,
+            _validate_path_component(server_name),
+            _validate_path_component(file_id[0:2]),
+            _validate_path_component(file_id[2:4]),
+            _validate_path_component(file_id[4:]),
+            _validate_path_component(file_name),
         )

     def remote_media_thumbnail_dir(self, server_name: str, file_id: str) -> str:
         return os.path.join(
             self.base_path,
             "remote_thumbnail",
-            server_name,
-            file_id[0:2],
-            file_id[2:4],
-            file_id[4:],
+            _validate_path_component(server_name),
+            _validate_path_component(file_id[0:2]),
+            _validate_path_component(file_id[2:4]),
+            _validate_path_component(file_id[4:]),
         )

+    @_wrap_with_jail_check
     def url_cache_filepath_rel(self, media_id: str) -> str:
         if NEW_FORMAT_ID_RE.match(media_id):
             # Media id is of the form <DATE><RANDOM_STRING>
             # E.g.: 2017-09-28-fsdRDt24DS234dsf
-            return os.path.join("url_cache", media_id[:10], media_id[11:])
+            return os.path.join(
+                "url_cache",
+                _validate_path_component(media_id[:10]),
+                _validate_path_component(media_id[11:]),
+            )
         else:
-            return os.path.join("url_cache", media_id[0:2], media_id[2:4], media_id[4:])
+            return os.path.join(
+                "url_cache",
+                _validate_path_component(media_id[0:2]),
+                _validate_path_component(media_id[2:4]),
+                _validate_path_component(media_id[4:]),
+            )

     url_cache_filepath = _wrap_in_base_path(url_cache_filepath_rel)

+    @_wrap_with_jail_check
     def url_cache_filepath_dirs_to_delete(self, media_id: str) -> List[str]:
         "The dirs to try and remove if we delete the media_id file"
         if NEW_FORMAT_ID_RE.match(media_id):
-            return [os.path.join(self.base_path, "url_cache", media_id[:10])]
+            return [
+                os.path.join(
+                    self.base_path, "url_cache", _validate_path_component(media_id[:10])
+                )
+            ]
         else:
             return [
-                os.path.join(self.base_path, "url_cache", media_id[0:2], media_id[2:4]),
-                os.path.join(self.base_path, "url_cache", media_id[0:2]),
+                os.path.join(
+                    self.base_path,
+                    "url_cache",
+                    _validate_path_component(media_id[0:2]),
+                    _validate_path_component(media_id[2:4]),
+                ),
+                os.path.join(
+                    self.base_path, "url_cache", _validate_path_component(media_id[0:2])
+                ),
             ]

+    @_wrap_with_jail_check
     def url_cache_thumbnail_rel(
         self, media_id: str, width: int, height: int, content_type: str, method: str
     ) -> str:
@@ -168,37 +302,46 @@ class MediaFilePaths:

         if NEW_FORMAT_ID_RE.match(media_id):
             return os.path.join(
-                "url_cache_thumbnails", media_id[:10], media_id[11:], file_name
+                "url_cache_thumbnails",
+                _validate_path_component(media_id[:10]),
+                _validate_path_component(media_id[11:]),
+                _validate_path_component(file_name),
             )
         else:
             return os.path.join(
                 "url_cache_thumbnails",
-                media_id[0:2],
-                media_id[2:4],
-                media_id[4:],
-                file_name,
+                _validate_path_component(media_id[0:2]),
+                _validate_path_component(media_id[2:4]),
+                _validate_path_component(media_id[4:]),
+                _validate_path_component(file_name),
             )

     url_cache_thumbnail = _wrap_in_base_path(url_cache_thumbnail_rel)

+    @_wrap_with_jail_check
     def url_cache_thumbnail_directory_rel(self, media_id: str) -> str:
         # Media id is of the form <DATE><RANDOM_STRING>
         # E.g.: 2017-09-28-fsdRDt24DS234dsf

         if NEW_FORMAT_ID_RE.match(media_id):
-            return os.path.join("url_cache_thumbnails", media_id[:10], media_id[11:])
+            return os.path.join(
+                "url_cache_thumbnails",
+                _validate_path_component(media_id[:10]),
+                _validate_path_component(media_id[11:]),
+            )
         else:
             return os.path.join(
                 "url_cache_thumbnails",
-                media_id[0:2],
-                media_id[2:4],
-                media_id[4:],
+                _validate_path_component(media_id[0:2]),
+                _validate_path_component(media_id[2:4]),
+                _validate_path_component(media_id[4:]),
             )

     url_cache_thumbnail_directory = _wrap_in_base_path(
         url_cache_thumbnail_directory_rel
     )

+    @_wrap_with_jail_check
     def url_cache_thumbnail_dirs_to_delete(self, media_id: str) -> List[str]:
         "The dirs to try and remove if we delete the media_id thumbnails"
         # Media id is of the form <DATE><RANDOM_STRING>
@@ -206,21 +349,35 @@ class MediaFilePaths:
         if NEW_FORMAT_ID_RE.match(media_id):
             return [
                 os.path.join(
-                    self.base_path, "url_cache_thumbnails", media_id[:10], media_id[11:]
+                    self.base_path,
+                    "url_cache_thumbnails",
+                    _validate_path_component(media_id[:10]),
+                    _validate_path_component(media_id[11:]),
+                ),
+                os.path.join(
+                    self.base_path,
+                    "url_cache_thumbnails",
+                    _validate_path_component(media_id[:10]),
                 ),
-                os.path.join(self.base_path, "url_cache_thumbnails", media_id[:10]),
             ]
         else:
             return [
                 os.path.join(
                     self.base_path,
                     "url_cache_thumbnails",
-                    media_id[0:2],
-                    media_id[2:4],
-                    media_id[4:],
+                    _validate_path_component(media_id[0:2]),
+                    _validate_path_component(media_id[2:4]),
+                    _validate_path_component(media_id[4:]),
                 ),
                 os.path.join(
-                    self.base_path, "url_cache_thumbnails", media_id[0:2], media_id[2:4]
+                    self.base_path,
+                    "url_cache_thumbnails",
+                    _validate_path_component(media_id[0:2]),
+                    _validate_path_component(media_id[2:4]),
+                ),
+                os.path.join(
+                    self.base_path,
+                    "url_cache_thumbnails",
+                    _validate_path_component(media_id[0:2]),
                 ),
-                os.path.join(self.base_path, "url_cache_thumbnails", media_id[0:2]),
             ]
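The jail check's core idea can be demonstrated in isolation: normalise the joined path and require that the result still lies under the media store root, so no sequence of components (symlinks aside, which `realpath` handles at construction time) can escape it. A standalone sketch with a hypothetical root:

    import os

    real_base_path = "/var/lib/synapse/media_store"  # hypothetical store root

    def is_jailed(path: str) -> bool:
        # Join, normalise, then insist the result is still under the root.
        normalized = os.path.normpath(os.path.join(real_base_path, path))
        return os.path.commonpath([normalized, real_base_path]) == real_base_path

    print(is_jailed("local_content/ab/cd/efgh"))  # True
    print(is_jailed("../../../etc/passwd"))       # False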
@ -45,7 +45,7 @@ from synapse.metrics.background_process_metrics import run_as_background_process
|
||||||
from synapse.rest.media.v1._base import get_filename_from_headers
|
from synapse.rest.media.v1._base import get_filename_from_headers
|
||||||
from synapse.rest.media.v1.media_storage import MediaStorage
|
from synapse.rest.media.v1.media_storage import MediaStorage
|
||||||
from synapse.rest.media.v1.oembed import OEmbedProvider
|
from synapse.rest.media.v1.oembed import OEmbedProvider
|
||||||
from synapse.types import JsonDict
|
from synapse.types import JsonDict, UserID
|
||||||
from synapse.util import json_encoder
|
from synapse.util import json_encoder
|
||||||
from synapse.util.async_helpers import ObservableDeferred
|
from synapse.util.async_helpers import ObservableDeferred
|
||||||
from synapse.util.caches.expiringcache import ExpiringCache
|
from synapse.util.caches.expiringcache import ExpiringCache
|
||||||
|
@ -231,7 +231,7 @@ class PreviewUrlResource(DirectServeJsonResource):
|
||||||
og = await make_deferred_yieldable(observable.observe())
|
og = await make_deferred_yieldable(observable.observe())
|
||||||
respond_with_json_bytes(request, 200, og, send_cors=True)
|
respond_with_json_bytes(request, 200, og, send_cors=True)
|
||||||
|
|
||||||
async def _do_preview(self, url: str, user: str, ts: int) -> bytes:
|
async def _do_preview(self, url: str, user: UserID, ts: int) -> bytes:
|
||||||
"""Check the db, and download the URL and build a preview
|
"""Check the db, and download the URL and build a preview
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
|
@ -360,7 +360,7 @@ class PreviewUrlResource(DirectServeJsonResource):
|
||||||
|
|
||||||
return jsonog.encode("utf8")
|
return jsonog.encode("utf8")
|
||||||
|
|
||||||
async def _download_url(self, url: str, user: str) -> MediaInfo:
|
async def _download_url(self, url: str, user: UserID) -> MediaInfo:
|
||||||
# TODO: we should probably honour robots.txt... except in practice
|
# TODO: we should probably honour robots.txt... except in practice
|
||||||
# we're most likely being explicitly triggered by a human rather than a
|
# we're most likely being explicitly triggered by a human rather than a
|
||||||
# bot, so are we really a robot?
|
# bot, so are we really a robot?
|
||||||
|
@@ -450,7 +450,7 @@ class PreviewUrlResource(DirectServeJsonResource):
         )

     async def _precache_image_url(
-        self, user: str, media_info: MediaInfo, og: JsonDict
+        self, user: UserID, media_info: MediaInfo, og: JsonDict
     ) -> None:
         """
         Pre-cache the image (if one exists) for posterity
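The three hunks above tighten the `user` parameter from a bare `str` to Synapse's structured `UserID` type, so a mis-passed plain string is caught by the type checker rather than at runtime. For illustration, assuming the usual helpers on `synapse.types.UserID`:

    from synapse.types import UserID

    user = UserID.from_string("@alice:example.com")  # parses the Matrix ID
    assert user.localpart == "alice"
    assert user.domain == "example.com"
    assert user.to_string() == "@alice:example.com"

Callers would construct (or receive) a `UserID` and fall back to `.to_string()` only at serialization boundaries.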
@@ -101,8 +101,8 @@ class Thumbnailer:
         fits within the given rectangle::

             (w_in / h_in) = (w_out / h_out)
-            w_out = min(w_max, h_max * (w_in / h_in))
-            h_out = min(h_max, w_max * (h_in / w_in))
+            w_out = max(min(w_max, h_max * (w_in / h_in)), 1)
+            h_out = max(min(h_max, w_max * (h_in / w_in)), 1)

         Args:
             max_width: The largest possible width.
@@ -110,9 +110,9 @@ class Thumbnailer:
         """

         if max_width * self.height < max_height * self.width:
-            return max_width, (max_width * self.height) // self.width
+            return max_width, max((max_width * self.height) // self.width, 1)
         else:
-            return (max_height * self.width) // self.height, max_height
+            return max((max_height * self.width) // self.height, 1), max_height

     def _resize(self, width: int, height: int) -> Image.Image:
         # 1-bit or 8-bit color palette images need converting to RGB
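The `max(..., 1)` clamps added above prevent a zero-width or zero-height thumbnail when the source image has an extreme aspect ratio and the integer division rounds a dimension down to 0. A standalone check of the patched arithmetic:

    def scaled_dimensions(w_in: int, h_in: int, w_max: int, h_max: int) -> tuple:
        # Same formula as the patched method, with both outputs clamped to >= 1.
        if w_max * h_in < h_max * w_in:
            return w_max, max((w_max * h_in) // w_in, 1)
        else:
            return max((h_max * w_in) // h_in, 1), h_max

    # A 1000x1 "pixel strip": without the clamp the height would come out as 0.
    assert scaled_dimensions(1000, 1, 32, 32) == (32, 1)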
@@ -33,9 +33,10 @@ from typing import (
     cast,
 )

-import twisted.internet.tcp
+from twisted.internet.interfaces import IOpenSSLContextFactory
+from twisted.internet.tcp import Port
 from twisted.web.iweb import IPolicyForHTTPS
-from twisted.web.resource import IResource
+from twisted.web.resource import Resource

 from synapse.api.auth import Auth
 from synapse.api.filtering import Filtering
@@ -206,7 +207,7 @@ class HomeServer(metaclass=abc.ABCMeta):

     Attributes:
         config (synapse.config.homeserver.HomeserverConfig):
-        _listening_services (list[twisted.internet.tcp.Port]): TCP ports that
+        _listening_services (list[Port]): TCP ports that
             we are listening on to provide HTTP services.
     """
@@ -225,6 +226,8 @@ class HomeServer(metaclass=abc.ABCMeta):
     # instantiated during setup() for future return by get_datastore()
     DATASTORE_CLASS = abc.abstractproperty()

+    tls_server_context_factory: Optional[IOpenSSLContextFactory]
+
     def __init__(
         self,
         hostname: str,
@@ -247,7 +250,7 @@ class HomeServer(metaclass=abc.ABCMeta):
         # the key we use to sign events and requests
         self.signing_key = config.key.signing_key[0]
         self.config = config
-        self._listening_services: List[twisted.internet.tcp.Port] = []
+        self._listening_services: List[Port] = []
         self.start_time: Optional[int] = None

         self._instance_id = random_string(5)
@@ -257,10 +260,10 @@ class HomeServer(metaclass=abc.ABCMeta):

         self.datastores: Optional[Databases] = None

-        self._module_web_resources: Dict[str, IResource] = {}
+        self._module_web_resources: Dict[str, Resource] = {}
         self._module_web_resources_consumed = False

-    def register_module_web_resource(self, path: str, resource: IResource):
+    def register_module_web_resource(self, path: str, resource: Resource):
         """Allows a module to register a web resource to be served at the given path.

         If multiple modules register a resource for the same path, the module that
@@ -82,7 +82,7 @@ class BackgroundUpdater:
     process and autotuning the batch size.
     """

-    MINIMUM_BACKGROUND_BATCH_SIZE = 100
+    MINIMUM_BACKGROUND_BATCH_SIZE = 1
     DEFAULT_BACKGROUND_BATCH_SIZE = 100
     BACKGROUND_UPDATE_INTERVAL_MS = 1000
     BACKGROUND_UPDATE_DURATION_MS = 100
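Dropping `MINIMUM_BACKGROUND_BATCH_SIZE` from 100 to 1 lets the batch-size autotuner back off much further when a background update turns out to be expensive per item, which is how this release addresses the slow background updates introduced in 1.47.0. A sketch of the clamping behaviour — the exact autotuning formula in `BackgroundUpdater` may differ; this only illustrates why the floor matters:

    MINIMUM_BACKGROUND_BATCH_SIZE = 1
    BACKGROUND_UPDATE_DURATION_MS = 100  # target wall-clock time per batch

    def next_batch_size(items_updated: int, duration_ms: float) -> int:
        # Scale the batch so the next run takes roughly the target duration,
        # never dropping below the configured minimum.
        desired = int(BACKGROUND_UPDATE_DURATION_MS * items_updated / duration_ms)
        return max(desired, MINIMUM_BACKGROUND_BATCH_SIZE)

    # A slow update (500 rows in 5s) can now back off to 10 rows per batch
    # instead of being pinned at the old floor of 100.
    assert next_batch_size(500, 5000.0) == 10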
@@ -122,6 +122,8 @@ class BackgroundUpdater:

     def start_doing_background_updates(self) -> None:
         if self.enabled:
+            # if we start a new background update, not all updates are done.
+            self._all_done = False
             run_as_background_process("background_updates", self.run_background_updates)

     async def run_background_updates(self, sleep: bool = True) -> None:
@@ -188,7 +188,7 @@ class LoggingDatabaseConnection:


 # The type of entry which goes on our after_callbacks and exception_callbacks lists.
-_CallbackListEntry = Tuple[Callable[..., None], Iterable[Any], Dict[str, Any]]
+_CallbackListEntry = Tuple[Callable[..., object], Iterable[Any], Dict[str, Any]]


 R = TypeVar("R")
@@ -235,7 +235,7 @@ class LoggingTransaction:
         self.after_callbacks = after_callbacks
         self.exception_callbacks = exception_callbacks

-    def call_after(self, callback: Callable[..., None], *args: Any, **kwargs: Any):
+    def call_after(self, callback: Callable[..., object], *args: Any, **kwargs: Any):
         """Call the given callback on the main twisted thread after the
         transaction has finished. Used to invalidate the caches on the
         correct thread.
@@ -247,7 +247,7 @@ class LoggingTransaction:
         self.after_callbacks.append((callback, args, kwargs))

     def call_on_exception(
-        self, callback: Callable[..., None], *args: Any, **kwargs: Any
+        self, callback: Callable[..., object], *args: Any, **kwargs: Any
     ):
         # if self.exception_callbacks is None, that means that whatever constructed the
         # LoggingTransaction isn't expecting there to be any callbacks; assert that
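Widening the callback type from `Callable[..., None]` to `Callable[..., object]` is the usual fix for a strict-mypy gotcha: return types are covariant, so a function that returns a value is not assignable to a `None`-returning callable type, even though the caller discards the result. Since every type is a subtype of `object`, the wider annotation accepts any callback. A small illustration with hypothetical names:

    from typing import Any, Callable

    def run_later(callback: Callable[..., object], *args: Any) -> None:
        callback(*args)  # the result is deliberately ignored

    def invalidate_cache(key: str) -> bool:  # happens to return a value
        return True

    # OK: bool is a subtype of object. Under Callable[..., None], a strict
    # type checker rejects this call because bool is not None.
    run_later(invalidate_cache, "room_membership")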
@@ -123,9 +123,9 @@ class DataStore(
     RelationsStore,
     CensorEventsStore,
     UIAuthStore,
+    EventForwardExtremitiesStore,
     CacheInvalidationWorkerStore,
     ServerMetricsStore,
-    EventForwardExtremitiesStore,
     LockStore,
     SessionStore,
 ):
@@ -154,6 +154,7 @@ class DataStore(
             db_conn, "local_group_updates", "stream_id"
         )

+        self._cache_id_gen: Optional[MultiWriterIdGenerator]
         if isinstance(self.database_engine, PostgresEngine):
             # We set the `writers` to an empty list here as we don't care about
             # missing updates over restarts, as we'll not have anything in our
Some files were not shown because too many files have changed in this diff.