Merge branch 'release-v1.53'
Commit: 07f82ac29b
@@ -15,4 +15,4 @@ export LANG="C.UTF-8"
 # Prevent virtualenv from auto-updating pip to an incompatible version
 export VIRTUALENV_NO_DOWNLOAD=1

-exec tox -e py3-old,combine
+exec tox -e py3-old
.github/workflows/tests.yml (vendored): 8 lines changed
@@ -345,7 +345,7 @@ jobs:
       path: synapse

     # Attempt to check out the same branch of Complement as the PR. If it
-    # doesn't exist, fallback to master.
+    # doesn't exist, fallback to HEAD.
     - name: Checkout complement
       shell: bash
       run: |
@@ -358,8 +358,8 @@ jobs:
         # for pull requests, otherwise GITHUB_REF).
         # 2. Attempt to use the base branch, e.g. when merging into release-vX.Y
         #    (GITHUB_BASE_REF for pull requests).
-        # 3. Use the default complement branch ("master").
-        for BRANCH_NAME in "$GITHUB_HEAD_REF" "$GITHUB_BASE_REF" "${GITHUB_REF#refs/heads/}" "master"; do
+        # 3. Use the default complement branch ("HEAD").
+        for BRANCH_NAME in "$GITHUB_HEAD_REF" "$GITHUB_BASE_REF" "${GITHUB_REF#refs/heads/}" "HEAD"; do
          # Skip empty branch names and merge commits.
          if [[ -z "$BRANCH_NAME" || $BRANCH_NAME =~ ^refs/pull/.* ]]; then
            continue
@@ -383,7 +383,7 @@ jobs:
     # Run Complement
     - run: |
         set -o pipefail
-        go test -v -json -tags synapse_blacklist,msc2403 ./tests/... 2>&1 | gotestfmt
+        go test -v -json -p 1 -tags synapse_blacklist,msc2403 ./tests/... 2>&1 | gotestfmt
       shell: bash
       name: Run Complement Tests
       env:
CHANGES.md: 87 lines changed
@@ -1,3 +1,88 @@
+Synapse 1.53.0 (2022-02-22)
+===========================
+
+No significant changes since 1.53.0rc1.
+
+
+Synapse 1.53.0rc1 (2022-02-15)
+==============================
+
+Features
+--------
+
+- Add experimental support for sending to-device messages to application services, as specified by [MSC2409](https://github.com/matrix-org/matrix-doc/pull/2409). ([\#11215](https://github.com/matrix-org/synapse/issues/11215), [\#11966](https://github.com/matrix-org/synapse/issues/11966))
+- Add a background database update to purge account data for deactivated users. ([\#11655](https://github.com/matrix-org/synapse/issues/11655))
+- Experimental support for [MSC3666](https://github.com/matrix-org/matrix-doc/pull/3666): including bundled aggregations in server side search results. ([\#11837](https://github.com/matrix-org/synapse/issues/11837))
+- Enable cache time-based expiry by default. The `expiry_time` config flag has been superseded by `expire_caches` and `cache_entry_ttl`. ([\#11849](https://github.com/matrix-org/synapse/issues/11849))
+- Add a callback to allow modules to allow or forbid a 3PID (email address, phone number) from being associated to a local account. ([\#11854](https://github.com/matrix-org/synapse/issues/11854))
+- Stabilize support and remove unstable endpoints for [MSC3231](https://github.com/matrix-org/matrix-doc/pull/3231). Clients must switch to the stable identifier and endpoint. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#stablisation-of-msc3231) for more information. ([\#11867](https://github.com/matrix-org/synapse/issues/11867))
+- Allow modules to retrieve the current instance's server name and worker name. ([\#11868](https://github.com/matrix-org/synapse/issues/11868))
+- Use a dedicated configurable rate limiter for 3PID invites. ([\#11892](https://github.com/matrix-org/synapse/issues/11892))
+- Support the stable API endpoint for [MSC3283](https://github.com/matrix-org/matrix-doc/pull/3283): new settings in `/capabilities` endpoint. ([\#11933](https://github.com/matrix-org/synapse/issues/11933), [\#11989](https://github.com/matrix-org/synapse/issues/11989))
+- Support the `dir` parameter on the `/relations` endpoint, per [MSC3715](https://github.com/matrix-org/matrix-doc/pull/3715). ([\#11941](https://github.com/matrix-org/synapse/issues/11941))
+- Experimental implementation of [MSC3706](https://github.com/matrix-org/matrix-doc/pull/3706): extensions to `/send_join` to support reduced response size. ([\#11967](https://github.com/matrix-org/synapse/issues/11967))
+
+
+Bugfixes
+--------
+
+- Fix [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) historical messages backfilling in random order on remote homeservers. ([\#11114](https://github.com/matrix-org/synapse/issues/11114))
+- Fix a bug introduced in Synapse 1.51.0 where incoming federation transactions containing at least one EDU would be dropped if debug logging was enabled for `synapse.8631_debug`. ([\#11890](https://github.com/matrix-org/synapse/issues/11890))
+- Fix a long-standing bug where some unknown endpoints would return HTML error pages instead of JSON `M_UNRECOGNIZED` errors. ([\#11930](https://github.com/matrix-org/synapse/issues/11930))
+- Implement an allow list of content types for which we will attempt to preview a URL. This prevents Synapse from making useless longer-lived connections to streaming media servers. ([\#11936](https://github.com/matrix-org/synapse/issues/11936))
+- Fix a long-standing bug where pagination tokens from `/sync` and `/messages` could not be provided to the `/relations` API. ([\#11952](https://github.com/matrix-org/synapse/issues/11952))
+- Require that modules register their callbacks using keyword arguments. ([\#11975](https://github.com/matrix-org/synapse/issues/11975))
+- Fix a long-standing bug where `M_WRONG_ROOM_KEYS_VERSION` errors would not include the specced `current_version` field. ([\#11988](https://github.com/matrix-org/synapse/issues/11988))
+
+
+Improved Documentation
+----------------------
+
+- Fix typo in User Admin API: unpind -> unbind. ([\#11859](https://github.com/matrix-org/synapse/issues/11859))
+- Document that images returned by the User List Media Admin API can include those generated by URL previews. ([\#11862](https://github.com/matrix-org/synapse/issues/11862))
+- Remove outdated MSC1711 FAQ document. ([\#11907](https://github.com/matrix-org/synapse/issues/11907))
+- Correct the structured logging configuration example. Contributed by Brad Jones. ([\#11946](https://github.com/matrix-org/synapse/issues/11946))
+- Add information on the Synapse release cycle. ([\#11954](https://github.com/matrix-org/synapse/issues/11954))
+- Fix broken link in the README to the admin API for password reset. ([\#11955](https://github.com/matrix-org/synapse/issues/11955))
+
+
+Deprecations and Removals
+-------------------------
+
+- Drop support for `webclient` listeners and configuring `web_client_location` to a non-HTTP(S) URL. Deprecated configurations are a configuration error. ([\#11895](https://github.com/matrix-org/synapse/issues/11895))
+- Remove deprecated `user_may_create_room_with_invites` spam checker callback. See the [upgrade notes](https://matrix-org.github.io/synapse/latest/upgrade.html#removal-of-user_may_create_room_with_invites) for more information. ([\#11950](https://github.com/matrix-org/synapse/issues/11950))
+- No longer build `.deb` packages for Ubuntu 21.04 Hirsute Hippo, which has now EOLed. ([\#11961](https://github.com/matrix-org/synapse/issues/11961))
+
+
+Internal Changes
+----------------
+
+- Enhance user registration test helpers to make them more useful for tests involving application services and devices. ([\#11615](https://github.com/matrix-org/synapse/issues/11615), [\#11616](https://github.com/matrix-org/synapse/issues/11616))
+- Improve performance when fetching bundled aggregations for multiple events. ([\#11660](https://github.com/matrix-org/synapse/issues/11660), [\#11752](https://github.com/matrix-org/synapse/issues/11752))
+- Fix type errors introduced by new annotations in the Prometheus Client library. ([\#11832](https://github.com/matrix-org/synapse/issues/11832))
+- Add missing type hints to replication code. ([\#11856](https://github.com/matrix-org/synapse/issues/11856), [\#11938](https://github.com/matrix-org/synapse/issues/11938))
+- Ensure that `opentracing` scopes are activated and closed at the right time. ([\#11869](https://github.com/matrix-org/synapse/issues/11869))
+- Improve opentracing for incoming federation requests. ([\#11870](https://github.com/matrix-org/synapse/issues/11870))
+- Improve internal docstrings in `synapse.util.caches`. ([\#11876](https://github.com/matrix-org/synapse/issues/11876))
+- Do not needlessly clear the `get_users_in_room` and `get_users_in_room_with_profiles` caches when any room state changes. ([\#11878](https://github.com/matrix-org/synapse/issues/11878))
+- Convert `ApplicationServiceTestCase` to use `simple_async_mock`. ([\#11880](https://github.com/matrix-org/synapse/issues/11880))
+- Remove experimental changes to the default push rules which were introduced in Synapse 1.19.0 but never enabled. ([\#11884](https://github.com/matrix-org/synapse/issues/11884))
+- Disable coverage calculation for olddeps build. ([\#11888](https://github.com/matrix-org/synapse/issues/11888))
+- Preparation to support sending device list updates to application services. ([\#11905](https://github.com/matrix-org/synapse/issues/11905))
+- Add a test that checks users receive their own device list updates down `/sync`. ([\#11909](https://github.com/matrix-org/synapse/issues/11909))
+- Run Complement tests sequentially. ([\#11910](https://github.com/matrix-org/synapse/issues/11910))
+- Various refactors to the application service notifier code. ([\#11911](https://github.com/matrix-org/synapse/issues/11911), [\#11912](https://github.com/matrix-org/synapse/issues/11912))
+- Tests: replace mocked `Authenticator` with the real thing. ([\#11913](https://github.com/matrix-org/synapse/issues/11913))
+- Various refactors to the typing notifications code. ([\#11914](https://github.com/matrix-org/synapse/issues/11914))
+- Use the proper type for the `Content-Length` header in the `UploadResource`. ([\#11927](https://github.com/matrix-org/synapse/issues/11927))
+- Remove an unnecessary ignoring of type hints due to fixes in upstream packages. ([\#11939](https://github.com/matrix-org/synapse/issues/11939))
+- Add missing type hints. ([\#11953](https://github.com/matrix-org/synapse/issues/11953))
+- Fix an import cycle in `synapse.event_auth`. ([\#11965](https://github.com/matrix-org/synapse/issues/11965))
+- Unpin `frozendict` but exclude the known bad version 2.1.2. ([\#11969](https://github.com/matrix-org/synapse/issues/11969))
+- Prepare for rename of default Complement branch. ([\#11971](https://github.com/matrix-org/synapse/issues/11971))
+- Fetch Synapse's version using a helper from `matrix-common`. ([\#11979](https://github.com/matrix-org/synapse/issues/11979))
+
+
 Synapse 1.52.0 (2022-02-08)
 ===========================
@@ -188,7 +273,7 @@ Bugfixes

 Synapse 1.50.0 (2022-01-18)
 ===========================

 **This release contains a critical bug that may prevent clients from being able to connect.
 As such, it is not recommended to upgrade to 1.50.0. Instead, please upgrade straight
 to 1.50.1. Further details are available in [this issue](https://github.com/matrix-org/synapse/issues/11763).**
@@ -246,7 +246,7 @@ Password reset
 ==============

 Users can reset their password through their client. Alternatively, a server admin
-can reset a users password using the `admin API <docs/admin_api/user_admin_api.rst#reset-password>`_
+can reset a users password using the `admin API <docs/admin_api/user_admin_api.md#reset-password>`_
 or by directly editing the database as shown below.

 First calculate the hash of the new password::
debian/changelog (vendored): 12 lines changed
@@ -1,3 +1,15 @@
+matrix-synapse-py3 (1.53.0) stable; urgency=medium
+
+  * New synapse release 1.53.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 22 Feb 2022 11:32:06 +0000
+
+matrix-synapse-py3 (1.53.0~rc1) stable; urgency=medium
+
+  * New synapse release 1.53.0~rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 15 Feb 2022 10:40:50 +0000
+
 matrix-synapse-py3 (1.52.0) stable; urgency=medium

   * New synapse release 1.52.0.
docs/MSC1711_certificates_FAQ.md: file removed

@@ -1,314 +0,0 @@
-# MSC1711 Certificates FAQ
-
-## Historical Note
-This document was originally written to guide server admins through the upgrade
-path towards Synapse 1.0. Specifically,
-[MSC1711](https://github.com/matrix-org/matrix-doc/blob/main/proposals/1711-x509-for-federation.md)
-required that all servers present valid TLS certificates on their federation
-API. Admins were encouraged to achieve compliance from version 0.99.0 (released
-in February 2019) ahead of version 1.0 (released June 2019) enforcing the
-certificate checks.
-
-Much of what follows is now outdated since most admins will have already
-upgraded, however it may be of use to those with old installs returning to the
-project.
-
-If you are setting up a server from scratch you almost certainly should look at
-the [installation guide](setup/installation.md) instead.
-
-## Introduction
-The goal of Synapse 0.99.0 is to act as a stepping stone to Synapse 1.0.0. It
-supports the r0.1 release of the server to server specification, but is
-compatible with both the legacy Matrix federation behaviour (pre-r0.1) as well
-as post-r0.1 behaviour, in order to allow for a smooth upgrade across the
-federation.
-
-The most important thing to know is that Synapse 1.0.0 will require a valid TLS
-certificate on federation endpoints. Self signed certificates will not be
-sufficient.
-
-Synapse 0.99.0 makes it easy to configure TLS certificates and will
-interoperate with both >= 1.0.0 servers as well as existing servers yet to
-upgrade.
-
-**It is critical that all admins upgrade to 0.99.0 and configure a valid TLS
-certificate.** Admins will have 1 month to do so, after which 1.0.0 will be
-released and those servers without a valid certificate will no longer be able
-to federate with >= 1.0.0 servers.
-
-Full details on how to carry out this configuration change are given
-[below](#configuring-certificates-for-compatibility-with-synapse-100). A
-timeline and some frequently asked questions are also given below.
-
-For more details and context on the release of the r0.1 Server/Server API and
-imminent Matrix 1.0 release, you can also see our
-[main talk from FOSDEM 2019](https://matrix.org/blog/2019/02/04/matrix-at-fosdem-2019/).
-
-## Timeline
-
-**5th Feb 2019 - Synapse 0.99.0 is released.**
-
-All server admins are encouraged to upgrade.
-
-0.99.0:
-
-- provides support for ACME to make setting up Let's Encrypt certs easy, as
-  well as .well-known support.
-
-- does not enforce that a valid CA cert is present on the federation API, but
-  rather makes it easy to set one up.
-
-- provides support for .well-known
-
-Admins should upgrade and configure a valid CA cert. Homeservers that require a
-.well-known entry (see below), should retain their SRV record and use it
-alongside their .well-known record.
-
-**10th June 2019 - Synapse 1.0.0 is released**
-
-1.0.0 is scheduled for release on 10th June. In
-accordance with the [S2S spec](https://matrix.org/docs/spec/server_server/r0.1.0.html)
-1.0.0 will enforce certificate validity. This means that any homeserver without a
-valid certificate after this point will no longer be able to federate with
-1.0.0 servers.
-
-## Configuring certificates for compatibility with Synapse 1.0.0
-
-### If you do not currently have an SRV record
-
-In this case, your `server_name` points to the host where your Synapse is
-running. There is no need to create a `.well-known` URI or an SRV record, but
-you will need to give Synapse a valid, signed, certificate.
-
-### If you do have an SRV record currently
-
-If you are using an SRV record, your matrix domain (`server_name`) may not
-point to the same host that your Synapse is running on (the 'target
-domain'). (If it does, you can follow the recommendation above; otherwise, read
-on.)
-
-Let's assume that your `server_name` is `example.com`, and your Synapse is
-hosted at a target domain of `customer.example.net`. Currently you should have
-an SRV record which looks like:
-
-```
-_matrix._tcp.example.com. IN SRV 10 5 8000 customer.example.net.
-```
-
-In this situation, you have three choices for how to proceed:
-
-#### Option 1: give Synapse a certificate for your matrix domain
-
-Synapse 1.0 will expect your server to present a TLS certificate for your
-`server_name` (`example.com` in the above example). You can achieve this by acquiring a
-certificate for the `server_name` yourself (for example, using `certbot`), and giving it
-and the key to Synapse via `tls_certificate_path` and `tls_private_key_path`.
-
-#### Option 2: run Synapse behind a reverse proxy
-
-If you have an existing reverse proxy set up with correct TLS certificates for
-your domain, you can simply route all traffic through the reverse proxy by
-updating the SRV record appropriately (or removing it, if the proxy listens on
-8448).
-
-See [the reverse proxy documentation](reverse_proxy.md) for information on setting up a
-reverse proxy.
-
-#### Option 3: add a .well-known file to delegate your matrix traffic
-
-This will allow you to keep Synapse on a separate domain, without having to
-give it a certificate for the matrix domain.
-
-You can do this with a `.well-known` file as follows:
-
-1. Keep the SRV record in place - it is needed for backwards compatibility
-   with Synapse 0.34 and earlier.
-
-2. Give Synapse a certificate corresponding to the target domain
-   (`customer.example.net` in the above example). You can do this by acquiring a
-   certificate for the target domain and giving it to Synapse via `tls_certificate_path`
-   and `tls_private_key_path`.
-
-3. Restart Synapse to ensure the new certificate is loaded.
-
-4. Arrange for a `.well-known` file at
-   `https://<server_name>/.well-known/matrix/server` with contents:
-
-   ```json
-   {"m.server": "<target server name>"}
-   ```
-
-   where the target server name is resolved as usual (i.e. SRV lookup, falling
-   back to talking to port 8448).
-
-   In the above example, where synapse is listening on port 8000,
-   `https://example.com/.well-known/matrix/server` should have `m.server` set to one of:
-
-   1. `customer.example.net` ─ with a SRV record on
-      `_matrix._tcp.customer.example.com` pointing to port 8000, or:
-
-   2. `customer.example.net` ─ updating synapse to listen on the default port
-      8448, or:
-
-   3. `customer.example.net:8000` ─ ensuring that if there is a reverse proxy
-      on `customer.example.net:8000` it correctly handles HTTP requests with
-      Host header set to `customer.example.net:8000`.
-
-## FAQ
-
-### Synapse 0.99.0 has just been released, what do I need to do right now?
-
-Upgrade as soon as you can in preparation for Synapse 1.0.0, and update your
-TLS certificates as [above](#configuring-certificates-for-compatibility-with-synapse-100).
-
-### What will happen if I do not set up a valid federation certificate immediately?
-
-Nothing initially, but once 1.0.0 is in the wild it will not be possible to
-federate with 1.0.0 servers.
-
-### What will happen if I do nothing at all?
-
-If the admin takes no action at all, and remains on a Synapse < 0.99.0 then the
-homeserver will be unable to federate with those who have implemented
-.well-known. Then, as above, once the month upgrade window has expired the
-homeserver will not be able to federate with any Synapse >= 1.0.0.
-
-### When do I need a SRV record or .well-known URI?
-
-If your homeserver listens on the default federation port (8448), and your
-`server_name` points to the host that your homeserver runs on, you do not need an
-SRV record or `.well-known/matrix/server` URI.
-
-For instance, if you registered `example.com` and pointed its DNS A record at a
-fresh Upcloud VPS or similar, you could install Synapse 0.99 on that host,
-giving it a server_name of `example.com`, and it would automatically generate a
-valid TLS certificate for you via Let's Encrypt and no SRV record or
-`.well-known` URI would be needed.
-
-This is the common case, although you can add an SRV record or
-`.well-known/matrix/server` URI for completeness if you wish.
-
-**However**, if your server does not listen on port 8448, or if your `server_name`
-does not point to the host that your homeserver runs on, you will need to let
-other servers know how to find it.
-
-In this case, you should see ["If you do have an SRV record
-currently"](#if-you-do-have-an-srv-record-currently) above.
-
-### Can I still use an SRV record?
-
-Firstly, if you didn't need an SRV record before (because your server is
-listening on port 8448 of your server_name), you certainly don't need one now:
-the defaults are still the same.
-
-If you previously had an SRV record, you can keep using it provided you are
-able to give Synapse a TLS certificate corresponding to your server name. For
-example, suppose you had the following SRV record, which directs matrix traffic
-for example.com to matrix.example.com:443:
-
-```
-_matrix._tcp.example.com. IN SRV 10 5 443 matrix.example.com
-```
-
-In this case, Synapse must be given a certificate for example.com - or be
-configured to acquire one from Let's Encrypt.
-
-If you are unable to give Synapse a certificate for your server_name, you will
-also need to use a .well-known URI instead. However, see also "I have created a
-.well-known URI. Do I still need an SRV record?".
-
-### I have created a .well-known URI. Do I still need an SRV record?
-
-As of Synapse 0.99, Synapse will first check for the existence of a `.well-known`
-URI and follow any delegation it suggests. It will only then check for the
-existence of an SRV record.
-
-That means that the SRV record will often be redundant. However, you should
-remember that there may still be older versions of Synapse in the federation
-which do not understand `.well-known` URIs, so if you removed your SRV record you
-would no longer be able to federate with them.
-
-It is therefore best to leave the SRV record in place for now. Synapse 0.34 and
-earlier will follow the SRV record (and not care about the invalid
-certificate). Synapse 0.99 and later will follow the .well-known URI, with the
-correct certificate chain.
-
-### It used to work just fine, why are you breaking everything?
-
-We have always wanted Matrix servers to be as easy to set up as possible, and
-so back when we started federation in 2014 we didn't want admins to have to go
-through the cumbersome process of buying a valid TLS certificate to run a
-server. This was before Let's Encrypt came along and made getting a free and
-valid TLS certificate straightforward. So instead, we adopted a system based on
-[Perspectives](https://en.wikipedia.org/wiki/Convergence_(SSL)): an approach
-where you check a set of "notary servers" (in practice, homeservers) to vouch
-for the validity of a certificate rather than having it signed by a CA. As long
-as enough different notaries agree on the certificate's validity, then it is
-trusted.
-
-However, in practice this has never worked properly. Most people only use the
-default notary server (matrix.org), leading to inadvertent centralisation which
-we want to eliminate. Meanwhile, we never implemented the full consensus
-algorithm to query the servers participating in a room to determine consensus
-on whether a given certificate is valid. This is fiddly to get right
-(especially in face of sybil attacks), and we found ourselves questioning
-whether it was worth the effort to finish the work and commit to maintaining a
-secure certificate validation system as opposed to focusing on core Matrix
-development.
-
-Meanwhile, Let's Encrypt came along in 2016, and put the final nail in the
-coffin of the Perspectives project (which was already pretty dead). So, the
-Spec Core Team decided that a better approach would be to mandate valid TLS
-certificates for federation alongside the rest of the Web. More details can be
-found in
-[MSC1711](https://github.com/matrix-org/matrix-doc/blob/main/proposals/1711-x509-for-federation.md#background-the-failure-of-the-perspectives-approach).
-
-This results in a breaking change, which is disruptive, but absolutely critical
-for the security model. However, the existence of Let's Encrypt as a trivial
-way to replace the old self-signed certificates with valid CA-signed ones helps
-smooth things over massively, especially as Synapse can now automate Let's
-Encrypt certificate generation if needed.
-
-### Can I manage my own certificates rather than having Synapse renew certificates itself?
-
-Yes, you are welcome to manage your certificates yourself. Synapse will only
-attempt to obtain certificates from Let's Encrypt if you configure it to do
-so. The only requirement is that there is a valid TLS cert present for
-federation end points.
-
-### Do you still recommend against using a reverse proxy on the federation port?
-
-We no longer actively recommend against using a reverse proxy. Many admins will
-find it easier to direct federation traffic to a reverse proxy and manage their
-own TLS certificates, and this is a supported configuration.
-
-See [the reverse proxy documentation](reverse_proxy.md) for information on setting up a
-reverse proxy.
-
-### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
-
-Practically speaking, this is no longer necessary.
-
-If you are using a reverse proxy for all of your TLS traffic, then you can set
-`no_tls: True`. In that case, the only reason Synapse needs the certificate is
-to populate a legacy 'tls_fingerprints' field in the federation API. This is
-ignored by Synapse 0.99.0 and later, and the only time pre-0.99 Synapses will
-check it is when attempting to fetch the server keys - and generally this is
-delegated via `matrix.org`, which is on 0.99.0.
-
-However, there is a bug in Synapse 0.99.0
-[4554](<https://github.com/matrix-org/synapse/issues/4554>) which prevents
-Synapse from starting if you do not give it a TLS certificate. To work around
-this, you can give it any TLS certificate at all. This will be fixed soon.
-
-### Do I need the same certificate for the client and federation port?
-
-No. There is nothing stopping you from using different certificates,
-particularly if you are using a reverse proxy. However, Synapse will use the
-same certificate on any ports where TLS is configured.
-
-### How do I tell Synapse to reload my keys/certificates after I replace them?
-
-Synapse will reload the keys and certificates when it receives a SIGHUP - for
-example `kill -HUP $(cat homeserver.pid)`. Alternatively, simply restart
-Synapse, though this will result in downtime while it restarts.
@@ -13,7 +13,6 @@

 # Upgrading
 - [Upgrading between Synapse Versions](upgrade.md)
-- [Upgrading from pre-Synapse 1.0](MSC1711_certificates_FAQ.md)

 # Usage
 - [Federation](federate.md)
@@ -72,7 +71,7 @@
 - [Understanding Synapse Through Grafana Graphs](usage/administration/understanding_synapse_through_grafana_graphs.md)
 - [Useful SQL for Admins](usage/administration/useful_sql_for_admins.md)
 - [Database Maintenance Tools](usage/administration/database_maintenance_tools.md)
 - [State Groups](usage/administration/state_groups.md)
 - [Request log format](usage/administration/request_log.md)
 - [Admin FAQ](usage/administration/admin_faq.md)
 - [Scripts]()
@@ -80,6 +79,7 @@
 # Development
 - [Contributing Guide](development/contributing_guide.md)
 - [Code Style](code_style.md)
+- [Release Cycle](development/releases.md)
 - [Git Usage](development/git.md)
 - [Testing]()
 - [OpenTracing](opentracing.md)
@@ -2,6 +2,9 @@

 These APIs allow extracting media information from the homeserver.

+Details about the format of the `media_id` and storage of the media in the file system
+are documented under [media repository](../media_repository.md).
+
 To use it, you will need to authenticate by providing an `access_token`
 for a server admin: see [Admin API](../usage/administration/admin_api).
@@ -331,7 +331,7 @@ An empty body may be passed for backwards compatibility.

 The following actions are performed when deactivating an user:

-- Try to unpind 3PIDs from the identity server
+- Try to unbind 3PIDs from the identity server
 - Remove all 3PIDs from the homeserver
 - Delete all devices and E2EE keys
 - Delete all access tokens
@@ -539,6 +539,11 @@ The following fields are returned in the JSON response body:

 ### List media uploaded by a user
 Gets a list of all local media that a specific `user_id` has created.
+These are media that the user has uploaded themselves
+([local media](../media_repository.md#local-media)), as well as
+[URL preview images](../media_repository.md#url-previews) requested by the user if the
+[feature is enabled](../development/url_previews.md).

 By default, the response is ordered by descending creation date and ascending media ID.
 The newest media is on top. You can change the order with parameters
 `order_by` and `dir`.
@@ -635,7 +640,9 @@ The following fields are returned in the JSON response body:
 Media objects contain the following fields:
 - `created_ts` - integer - Timestamp when the content was uploaded in ms.
 - `last_access_ts` - integer - Timestamp when the content was last accessed in ms.
-- `media_id` - string - The id used to refer to the media.
+- `media_id` - string - The id used to refer to the media. Details about the format
+  are documented under
+  [media repository](../media_repository.md).
 - `media_length` - integer - Length of the media in bytes.
 - `media_type` - string - The MIME-type of the media.
 - `quarantined_by` - string - The user ID that initiated the quarantine request
docs/development/releases.md (new file): 37 lines
@@ -0,0 +1,37 @@
+# Synapse Release Cycle
+
+Releases of Synapse follow a two week release cycle with new releases usually
+occurring on Tuesdays:
+
+* Day 0: Synapse `N - 1` is released.
+* Day 7: Synapse `N` release candidate 1 is released.
+* Days 7 - 13: Synapse `N` release candidates 2+ are released, if bugs are found.
+* Day 14: Synapse `N` is released.
+
+Note that this schedule might be modified depending on the availability of the
+Synapse team, e.g. releases may be skipped to avoid holidays.
+
+Release announcements can be found in the
+[release category of the Matrix blog](https://matrix.org/blog/category/releases).
+
+## Bugfix releases
+
+If a bug is found after release that is deemed severe enough (by a combination
+of the impacted users and the impact on those users) then a bugfix release may
+be issued. This may be at any point in the release cycle.
+
+## Security releases
+
+Security fixes will sometimes be backported to the previous version and released
+immediately before the next release candidate. An example of this might be:
+
+* Day 0: Synapse N - 1 is released.
+* Day 7: Synapse (N - 1).1 is released as Synapse N - 1 + the security fix.
+* Day 7: Synapse N release candidate 1 is released (including the security fix).
+
+Depending on the impact and complexity of security fixes, multiple fixes might
+be held to be released together.
+
+In some cases, a pre-disclosure of a security release will be issued as a notice
+to Synapse operators that there is an upcoming security release. These can be
+found in the [security category of the Matrix blog](https://matrix.org/blog/category/security).
@@ -148,7 +148,7 @@ Here's an example featuring all currently supported keys:
       "address": "33123456789",
       "validated_at": 1642701357084,
     },
-    "org.matrix.msc3231.login.registration_token": "sometoken", # User has registered through the flow described in MSC3231
+    "m.login.registration_token": "sometoken", # User has registered through a registration token
 }
 ```
@@ -166,6 +166,25 @@ any of the subsequent implementations of this callback. If every callback returns
 the username provided by the user is used, if any (otherwise one is automatically
 generated).

+## `is_3pid_allowed`
+
+_First introduced in Synapse v1.53.0_
+
+```python
+async def is_3pid_allowed(self, medium: str, address: str, registration: bool) -> bool
+```
+
+Called when attempting to bind a third-party identifier (i.e. an email address or a phone
+number). The module is given the medium of the third-party identifier (which is `email` if
+the identifier is an email address, or `msisdn` if the identifier is a phone number) and
+its address, as well as a boolean indicating whether the attempt to bind is happening as
+part of registering a new user. The module must return a boolean indicating whether the
+identifier can be allowed to be bound to an account on the local homeserver.
+
+If multiple modules implement this callback, they will be considered in order. If a
+callback returns `True`, Synapse falls through to the next one. The value of the first
+callback that does not return `True` will be used. If this happens, Synapse will not call
+any of the subsequent implementations of this callback.
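To make the callback shape concrete, here is a minimal module sketch (an illustration, not from the Synapse codebase). It assumes the callback is registered via `register_password_auth_provider_callbacks` on the module API, passing it as a keyword argument as Synapse now requires; the class name and the example.com policy are hypothetical:

```python
from typing import Any, Dict

from synapse.module_api import ModuleApi


class ExampleThreePidPolicy:
    """Illustrative module: restrict which 3PIDs may be bound at registration."""

    def __init__(self, config: Dict[str, Any], api: ModuleApi):
        self._api = api
        # Callbacks must be registered using keyword arguments (see #11975).
        api.register_password_auth_provider_callbacks(
            is_3pid_allowed=self.is_3pid_allowed,
        )

    async def is_3pid_allowed(
        self, medium: str, address: str, registration: bool
    ) -> bool:
        # Hypothetical policy: new registrations may only bind example.com
        # email addresses; everything else is allowed through.
        if registration and medium == "email":
            return address.endswith("@example.com")
        return True
```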

 ## Example
@@ -751,11 +751,16 @@ caches:
   per_cache_factors:
     #get_users_who_share_room_with_user: 2.0

-  # Controls how long an entry can be in a cache without having been
-  # accessed before being evicted. Defaults to None, which means
-  # entries are never evicted based on time.
+  # Controls whether cache entries are evicted after a specified time
+  # period. Defaults to true. Uncomment to disable this feature.
   #
-  #expiry_time: 30m
+  #expire_caches: false
+
+  # If expire_caches is enabled, this flag controls how long an entry can
+  # be in a cache without having been accessed before being evicted.
+  # Defaults to 30m. Uncomment to set a different time to live for cache entries.
+  #
+  #cache_entry_ttl: 30m

   # Controls how long the results of a /sync request are cached for after
   # a successful response is returned. A higher duration can help clients with
@@ -857,6 +862,9 @@ log_config: "CONFDIR/SERVERNAME.log.config"
 # - one for ratelimiting how often a user or IP can attempt to validate a 3PID.
 # - two for ratelimiting how often invites can be sent in a room or to a
 #   specific user.
+# - one for ratelimiting 3PID invites (i.e. invites sent to a third-party ID
+#   such as an email address or a phone number) based on the account that's
+#   sending the invite.
 #
 # The defaults are as shown below.
 #
@@ -906,6 +914,10 @@ log_config: "CONFDIR/SERVERNAME.log.config"
 #  per_user:
 #    per_second: 0.003
 #    burst_count: 5
+#
+#rc_third_party_invite:
+#  per_second: 0.2
+#  burst_count: 10

 # Ratelimiting settings for incoming federation
 #
@@ -141,7 +141,7 @@ formatters:
 handlers:
   console:
     class: logging.StreamHandler
-    location: ext://sys.stdout
+    stream: ext://sys.stdout
   file:
     class: logging.FileHandler
     formatter: json
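For context on the fix above: Python's `logging.StreamHandler` takes a `stream` argument, and `location` is not a recognised key. A quick standard-library sanity check of the corrected shape, as a minimal sketch (not Synapse's actual config loader):

```python
import logging
import logging.config

# dictConfig equivalent of the corrected YAML: the "stream" key is resolved
# from the ext:// reference and passed to logging.StreamHandler(stream=...).
logging.config.dictConfig({
    "version": 1,
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "stream": "ext://sys.stdout",
        },
    },
    "root": {"handlers": ["console"], "level": "INFO"},
})

logging.getLogger(__name__).info("console handler configured correctly")
```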
@@ -85,6 +85,70 @@ process, for example:
     dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
 ```

+# Upgrading to v1.53.0
+
+## Dropping support for `webclient` listeners and non-HTTP(S) `web_client_location`
+
+Per the deprecation notice in Synapse v1.51.0, listeners of type `webclient`
+are no longer supported and configuring them is now a configuration error.
+
+Configuring a non-HTTP(S) `web_client_location` is now a configuration error.
+Since the `webclient` listener is no longer supported, this
+setting only applies to the root path `/` of Synapse's web server and no longer
+the `/_matrix/client/` path.
+
+## Stablisation of MSC3231
+
+The unstable validity-check endpoint for the
+[Registration Tokens](https://spec.matrix.org/v1.2/client-server-api/#get_matrixclientv1registermloginregistration_tokenvalidity)
+feature has been stabilised and moved from:
+
+`/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity`
+
+to:
+
+`/_matrix/client/v1/register/m.login.registration_token/validity`
+
+Please update any relevant reverse proxy or firewall configurations appropriately.
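A quick client-side check against the stable endpoint can confirm that a proxy rewrite is in place. A minimal sketch using `requests` (the homeserver URL and token value are placeholders):

```python
import requests

HOMESERVER = "https://matrix.example.com"  # placeholder homeserver
TOKEN = "sometoken"                        # placeholder registration token

# The stable, spec'd endpoint introduced with MSC3231's stabilisation.
resp = requests.get(
    f"{HOMESERVER}/_matrix/client/v1/register/m.login.registration_token/validity",
    params={"token": TOKEN},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"valid": true}
```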
+## Time-based cache expiry is now enabled by default
+
+Formerly, entries in the cache were not evicted regardless of whether they were accessed after storing.
+This behavior has now changed: by default, entries in the cache are evicted after 30m of not being accessed.
+To change the default behavior, go to the `caches` section of the config and change the `expire_caches` and
+`cache_entry_ttl` flags as necessary. Please note that these flags replace the `expiry_time` flag in the config.
+The `expiry_time` flag will still continue to work, but it has been deprecated and will be removed in the future.
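To make the new semantics concrete, here is a small illustrative sketch of a cache whose entries expire after a period without access, mirroring the described `cache_entry_ttl` behaviour; it is not Synapse's implementation:

```python
import time
from typing import Any, Dict, Optional, Tuple


class TTLCache:
    """Illustrative cache: entries are evicted after `ttl` seconds without access."""

    def __init__(self, ttl: float = 30 * 60) -> None:  # 30m mirrors cache_entry_ttl
        self._ttl = ttl
        self._entries: Dict[str, Tuple[Any, float]] = {}

    def get(self, key: str) -> Optional[Any]:
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, last_access = entry
        if time.monotonic() - last_access > self._ttl:
            # Not touched within the TTL: evict the entry.
            del self._entries[key]
            return None
        # Accessing an entry refreshes its expiry, as described above.
        self._entries[key] = (value, time.monotonic())
        return value

    def set(self, key: str, value: Any) -> None:
        self._entries[key] = (value, time.monotonic())
```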
+## Deprecation of `capability` `org.matrix.msc3283.*`
+
+The `capabilities` of MSC3283 from the REST API `/_matrix/client/r0/capabilities`
+have become stable.
+
+The old `capabilities`
+- `org.matrix.msc3283.set_displayname`,
+- `org.matrix.msc3283.set_avatar_url` and
+- `org.matrix.msc3283.3pid_changes`
+
+are deprecated and scheduled to be removed in Synapse v1.54.0.
+
+The new `capabilities`
+- `m.set_displayname`,
+- `m.set_avatar_url` and
+- `m.3pid_changes`
+
+are now active by default.
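A client or admin can verify which capability identifiers a homeserver advertises by querying the endpoint mentioned above; a minimal sketch (server URL and access token are placeholders):

```python
import requests

HOMESERVER = "https://matrix.example.com"  # placeholder
ACCESS_TOKEN = "<access token>"            # placeholder

resp = requests.get(
    f"{HOMESERVER}/_matrix/client/r0/capabilities",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
caps = resp.json().get("capabilities", {})

# After this release the stable names are active by default.
for name in ("m.set_displayname", "m.set_avatar_url", "m.3pid_changes"):
    print(name, caps.get(name, {}).get("enabled"))
```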
+## Removal of `user_may_create_room_with_invites`
+
+As announced with the release of [Synapse 1.47.0](#deprecation-of-the-user_may_create_room_with_invites-module-callback),
+the deprecated `user_may_create_room_with_invites` module callback has been removed.
+
+Modules relying on it can instead implement [`user_may_invite`](https://matrix-org.github.io/synapse/latest/modules/spam_checker_callbacks.html#user_may_invite)
+and use the [`get_room_state`](https://github.com/matrix-org/synapse/blob/872f23b95fa980a61b0866c1475e84491991fa20/synapse/module_api/__init__.py#L869-L876)
+module API to infer whether the invite is happening while creating a room (see [this function](https://github.com/matrix-org/synapse-domain-rule-checker/blob/e7d092dd9f2a7f844928771dbfd9fd24c2332e48/synapse_domain_rule_checker/__init__.py#L56-L89)
+as an example). Alternately, modules can also implement [`on_create_room`](https://matrix-org.github.io/synapse/latest/modules/third_party_rules_callbacks.html#on_create_room).
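For module authors migrating, a rough sketch of the replacement approach described above. The registration call follows the module API's spam checker registration; the room-creation heuristic and the domain policy are assumptions for illustration only (see the linked synapse-domain-rule-checker function for a real implementation):

```python
from typing import Any, Dict

from synapse.module_api import ModuleApi


class ExampleInviteChecker:
    def __init__(self, config: Dict[str, Any], api: ModuleApi):
        self._api = api
        api.register_spam_checker_callbacks(user_may_invite=self.user_may_invite)

    async def user_may_invite(
        self, inviter: str, invitee: str, room_id: str
    ) -> bool:
        # Assumption for illustration: treat a room with no m.room.join_rules
        # state yet as "still being created".
        state = await self._api.get_room_state(
            room_id, [("m.room.join_rules", "")]
        )
        room_being_created = ("m.room.join_rules", "") not in state
        if room_being_created:
            # Apply whatever policy previously lived in
            # user_may_create_room_with_invites (placeholder policy here).
            return invitee.endswith(":example.com")
        return True
```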
 # Upgrading to v1.52.0

 ## Twisted security release
@@ -1141,8 +1205,7 @@ more details on upgrading your database.

 Synapse v1.0 is the first release to enforce validation of TLS
 certificates for the federation API. It is therefore essential that your
-certificates are correctly configured. See the
-[FAQ](MSC1711_certificates_FAQ.md) for more information.
+certificates are correctly configured.

 Note, v1.0 installations will also no longer be able to federate with
 servers that have not correctly configured their certificates.
@@ -1207,9 +1270,6 @@ you will need to replace any self-signed certificates with those
 verified by a root CA. Information on how to do so can be found at the
 ACME docs.

-For more information on configuring TLS certificates see the
-[FAQ](MSC1711_certificates_FAQ.md).
-
 # Upgrading to v0.34.0

 1. This release is the first to fully support Python 3. Synapse will
@@ -241,7 +241,7 @@ expressions:
 # Registration/login requests
 ^/_matrix/client/(api/v1|r0|v3|unstable)/login$
 ^/_matrix/client/(r0|v3|unstable)/register$
-^/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity$
+^/_matrix/client/v1/register/m.login.registration_token/validity$

 # Event sending requests
 ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact
mypy.ini: 9 lines changed
@@ -142,6 +142,9 @@ disallow_untyped_defs = True
 [mypy-synapse.crypto.*]
 disallow_untyped_defs = True

+[mypy-synapse.event_auth]
+disallow_untyped_defs = True
+
 [mypy-synapse.events.*]
 disallow_untyped_defs = True

@@ -166,9 +169,15 @@ disallow_untyped_defs = True
 [mypy-synapse.module_api.*]
 disallow_untyped_defs = True

+[mypy-synapse.notifier]
+disallow_untyped_defs = True
+
 [mypy-synapse.push.*]
 disallow_untyped_defs = True

+[mypy-synapse.replication.*]
+disallow_untyped_defs = True
+
 [mypy-synapse.rest.*]
 disallow_untyped_defs = True
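For context, `disallow_untyped_defs` makes mypy reject unannotated function definitions in the listed modules. A minimal illustration (not from the Synapse codebase):

```python
# With disallow_untyped_defs = True, mypy reports:
#   error: Function is missing a type annotation
def untyped(x):  # flagged by mypy
    return x + 1


def typed(x: int) -> int:  # accepted
    return x + 1
```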
@@ -25,7 +25,6 @@ DISTS = (
     "debian:bookworm",
     "debian:sid",
     "ubuntu:focal",  # 20.04 LTS (our EOL forced by Py38 on 2024-10-14)
-    "ubuntu:hirsute",  # 21.04 (EOL 2022-01-05)
     "ubuntu:impish",  # 21.10 (EOL 2022-07)
 )
@@ -24,10 +24,10 @@ import traceback
 from typing import Dict, Iterable, Optional, Set

 import yaml
+from matrix_common.versionstring import get_distribution_version_string

 from twisted.internet import defer, reactor

-import synapse
 from synapse.config.database import DatabaseConnectionConfig
 from synapse.config.homeserver import HomeServerConfig
 from synapse.logging.context import (
@@ -36,6 +36,8 @@ from synapse.logging.context import (
     run_in_background,
 )
 from synapse.storage.database import DatabasePool, make_conn
+from synapse.storage.databases.main import PushRuleStore
+from synapse.storage.databases.main.account_data import AccountDataWorkerStore
 from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
 from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpdateStore
 from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
@@ -65,7 +67,6 @@ from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStore
 from synapse.storage.engines import create_engine
 from synapse.storage.prepare_database import prepare_database
 from synapse.util import Clock
-from synapse.util.versionstring import get_version_string

 logger = logging.getLogger("synapse_port_db")

@@ -180,6 +181,8 @@ class Store(
     UserDirectoryBackgroundUpdateStore,
     EndToEndKeyBackgroundStore,
     StatsStore,
+    AccountDataWorkerStore,
+    PushRuleStore,
     PusherWorkerStore,
     PresenceBackgroundUpdateStore,
     GroupServerWorkerStore,
@@ -218,7 +221,9 @@ class MockHomeserver:
         self.clock = Clock(reactor)
         self.config = config
         self.hostname = config.server.server_name
-        self.version_string = "Synapse/" + get_version_string(synapse)
+        self.version_string = "Synapse/" + get_distribution_version_string(
+            "matrix-synapse"
+        )

     def get_clock(self):
         return self.clock
@@ -18,15 +18,14 @@ import logging
 import sys

 import yaml
+from matrix_common.versionstring import get_distribution_version_string

 from twisted.internet import defer, reactor

-import synapse
 from synapse.config.homeserver import HomeServerConfig
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.server import HomeServer
 from synapse.storage import DataStore
-from synapse.util.versionstring import get_version_string

 logger = logging.getLogger("update_database")

@@ -39,7 +38,9 @@ class MockHomeserver(HomeServer):
             config.server.server_name, reactor=reactor, config=config, **kwargs
         )

-        self.version_string = "Synapse/" + get_version_string(synapse)
+        self.version_string = "Synapse/" + get_distribution_version_string(
+            "matrix-synapse"
+        )


 def run_background_updates(hs):
synapse/__init__.py
@@ -47,7 +47,7 @@ try:
 except ImportError:
     pass

-__version__ = "1.52.0"
+__version__ = "1.53.0"

 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when
synapse/api/constants.py
@@ -81,7 +81,7 @@ class LoginType:
     TERMS: Final = "m.login.terms"
     SSO: Final = "m.login.sso"
    DUMMY: Final = "m.login.dummy"
-    REGISTRATION_TOKEN: Final = "org.matrix.msc3231.login.registration_token"
+    REGISTRATION_TOKEN: Final = "m.login.registration_token"


 # This is used in the `type` parameter for /register when called by
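With `REGISTRATION_TOKEN` graduating from the `org.matrix.msc3231.*` unstable prefix to the stable identifier, clients complete that user-interactive auth stage under the stable name. A sketch of the auth dict a client might submit during registration (per MSC3231 as merged); the token and session values are placeholders:

```python
# Sketch of the user-interactive auth stage a client submits while
# registering, now that the stable identifier applies (per MSC3231).
# Token and session values are placeholders.
auth_stage = {
    "type": "m.login.registration_token",
    "token": "fBVFdqVE",   # token issued out-of-band by the server admin
    "session": "xxxxxxxx", # UIA session ID from the 401 response
}
```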
synapse/api/errors.py
@@ -406,6 +406,9 @@ class RoomKeysVersionError(SynapseError):
         super().__init__(403, "Wrong room_keys version", Codes.WRONG_ROOM_KEYS_VERSION)
         self.current_version = current_version

+    def error_dict(self) -> "JsonDict":
+        return cs_error(self.msg, self.errcode, current_version=self.current_version)
+

 class UnsupportedRoomVersionError(SynapseError):
     """The client's request to create a room used a room version that the server does
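With `error_dict` overridden, the 403 response for a wrong room-keys version now carries the server's current backup version beside the standard fields. Roughly the resulting error object, assuming `cs_error` merges extra keyword arguments into the standard shape:

```python
# Approximate JSON body of the 403 response after this change; cs_error
# merges extra keyword arguments into the standard error object, and
# M_WRONG_ROOM_KEYS_VERSION is the value of Codes.WRONG_ROOM_KEYS_VERSION.
error_body = {
    "errcode": "M_WRONG_ROOM_KEYS_VERSION",
    "error": "Wrong room_keys version",
    "current_version": "42",  # placeholder backup version
}
```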
synapse/api/urls.py
@@ -28,7 +28,6 @@ FEDERATION_V1_PREFIX = FEDERATION_PREFIX + "/v1"
 FEDERATION_V2_PREFIX = FEDERATION_PREFIX + "/v2"
 FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
 STATIC_PREFIX = "/_matrix/static"
-WEB_CLIENT_PREFIX = "/_matrix/client"
 SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
 MEDIA_R0_PREFIX = "/_matrix/media/r0"
 MEDIA_V3_PREFIX = "/_matrix/media/v3"
synapse/app/_base.py
@@ -37,6 +37,7 @@ from typing import (
 )

 from cryptography.utils import CryptographyDeprecationWarning
+from matrix_common.versionstring import get_distribution_version_string

 import twisted
 from twisted.internet import defer, error, reactor as _reactor
@@ -67,7 +68,6 @@ from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
 from synapse.util.daemonize import daemonize_process
 from synapse.util.gai_resolver import GAIResolver
 from synapse.util.rlimit import change_resource_limit
-from synapse.util.versionstring import get_version_string

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -487,7 +487,8 @@ def setup_sentry(hs: "HomeServer") -> None:
     import sentry_sdk

     sentry_sdk.init(
-        dsn=hs.config.metrics.sentry_dsn, release=get_version_string(synapse)
+        dsn=hs.config.metrics.sentry_dsn,
+        release=get_distribution_version_string("matrix-synapse"),
     )

     # We set some default tags that give some context to this instance
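Passing the package version as `release` keeps Sentry events grouped by exact Synapse release. A standalone sketch of the same call; the DSN is a placeholder and the version lookup mirrors the stdlib sketch earlier:

```python
# Standalone sketch of release tagging with the Sentry SDK; the DSN is a
# placeholder, and the version lookup mirrors the stdlib sketch earlier.
import sentry_sdk
from importlib.metadata import version

sentry_sdk.init(
    dsn="https://public@sentry.example.com/1",  # placeholder DSN
    release=version("matrix-synapse"),
)
```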
synapse/app/admin_cmd.py
@@ -19,6 +19,8 @@ import sys
 import tempfile
 from typing import List, Optional

+from matrix_common.versionstring import get_distribution_version_string
+
 from twisted.internet import defer, task

 import synapse
@@ -44,7 +46,6 @@ from synapse.server import HomeServer
 from synapse.storage.databases.main.room import RoomWorkerStore
 from synapse.types import StateMap
 from synapse.util.logcontext import LoggingContext
-from synapse.util.versionstring import get_version_string

 logger = logging.getLogger("synapse.app.admin_cmd")

@@ -223,7 +224,7 @@ def start(config_options: List[str]) -> None:
     ss = AdminCmdServer(
         config.server.server_name,
         config=config,
-        version_string="Synapse/" + get_version_string(synapse),
+        version_string="Synapse/" + get_distribution_version_string("matrix-synapse"),
     )

     setup_logging(ss, config, use_worker_options=True)
synapse/app/generic_worker.py
@@ -16,6 +16,8 @@ import logging
 import sys
 from typing import Dict, List, Optional, Tuple

+from matrix_common.versionstring import get_distribution_version_string
+
 from twisted.internet import address
 from twisted.web.resource import Resource

@@ -122,7 +124,6 @@ from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
 from synapse.storage.databases.main.user_directory import UserDirectoryStore
 from synapse.types import JsonDict
 from synapse.util.httpresourcetree import create_resource_tree
-from synapse.util.versionstring import get_version_string

 logger = logging.getLogger("synapse.app.generic_worker")

@@ -482,7 +483,7 @@ def start(config_options: List[str]) -> None:
     hs = GenericWorkerServer(
         config.server.server_name,
         config=config,
-        version_string="Synapse/" + get_version_string(synapse),
+        version_string="Synapse/" + get_distribution_version_string("matrix-synapse"),
     )

     setup_logging(hs, config, use_worker_options=True)
synapse/app/homeserver.py
@@ -18,22 +18,23 @@ import os
 import sys
 from typing import Dict, Iterable, Iterator, List

+from matrix_common.versionstring import get_distribution_version_string
+
 from twisted.internet.tcp import Port
 from twisted.web.resource import EncodingResourceWrapper, Resource
 from twisted.web.server import GzipEncoderFactory
-from twisted.web.static import File

 import synapse
 import synapse.config.logger
 from synapse import events
 from synapse.api.urls import (
+    CLIENT_API_PREFIX,
     FEDERATION_PREFIX,
     LEGACY_MEDIA_PREFIX,
     MEDIA_R0_PREFIX,
     MEDIA_V3_PREFIX,
     SERVER_KEY_V2_PREFIX,
     STATIC_PREFIX,
-    WEB_CLIENT_PREFIX,
 )
 from synapse.app import _base
 from synapse.app._base import (
@@ -53,7 +54,6 @@ from synapse.http.additional_resource import AdditionalResource
 from synapse.http.server import (
     OptionsResource,
     RootOptionsRedirectResource,
-    RootRedirect,
     StaticResource,
 )
 from synapse.http.site import SynapseSite
@@ -72,7 +72,6 @@ from synapse.server import HomeServer
 from synapse.storage import DataStore
 from synapse.util.httpresourcetree import create_resource_tree
 from synapse.util.module_loader import load_module
-from synapse.util.versionstring import get_version_string

 logger = logging.getLogger("synapse.app.homeserver")

@@ -134,15 +133,12 @@ class SynapseHomeServer(HomeServer):
         # Try to find something useful to serve at '/':
         #
         # 1. Redirect to the web client if it is an HTTP(S) URL.
-        # 2. Redirect to the web client served via Synapse.
-        # 3. Redirect to the static "Synapse is running" page.
-        # 4. Do not redirect and use a blank resource.
-        if self.config.server.web_client_location_is_redirect:
+        # 2. Redirect to the static "Synapse is running" page.
+        # 3. Do not redirect and use a blank resource.
+        if self.config.server.web_client_location:
             root_resource: Resource = RootOptionsRedirectResource(
                 self.config.server.web_client_location
             )
-        elif WEB_CLIENT_PREFIX in resources:
-            root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
         elif STATIC_PREFIX in resources:
             root_resource = RootOptionsRedirectResource(STATIC_PREFIX)
         else:
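The simplified chain above reads as ordered fallbacks for the root resource. A self-contained sketch of the same decision, with string results standing in for the Twisted resource objects:

```python
# Self-contained sketch of the simplified '/' fallback order; the string
# results stand in for Twisted resource objects.
from typing import Dict, Optional

STATIC_PREFIX = "/_matrix/static"  # as defined in synapse/api/urls.py


def pick_root_resource(
    web_client_location: Optional[str], resources: Dict[str, object]
) -> str:
    # 1. Redirect to the web client if one is configured (HTTP(S) URL).
    if web_client_location:
        return f"redirect -> {web_client_location}"
    # 2. Redirect to the static "Synapse is running" page, if mounted.
    if STATIC_PREFIX in resources:
        return f"redirect -> {STATIC_PREFIX}"
    # 3. Otherwise serve a blank resource.
    return "blank"


assert pick_root_resource(None, {}) == "blank"
```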
@@ -201,13 +197,7 @@ class SynapseHomeServer(HomeServer):

             resources.update(
                 {
-                    "/_matrix/client/api/v1": client_resource,
-                    "/_matrix/client/r0": client_resource,
-                    "/_matrix/client/v1": client_resource,
-                    "/_matrix/client/v3": client_resource,
-                    "/_matrix/client/unstable": client_resource,
-                    "/_matrix/client/v2_alpha": client_resource,
-                    "/_matrix/client/versions": client_resource,
+                    CLIENT_API_PREFIX: client_resource,
                     "/.well-known": well_known_resource(self),
                     "/_synapse/admin": AdminRestResource(self),
                     **build_synapse_client_resource_tree(self),
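This works because every versioned client path shares one prefix, so a single mount covers them all. A small sketch enumerating the paths the lone `CLIENT_API_PREFIX` entry now serves (prefix value as defined in `synapse/api/urls.py`):

```python
# The single mount point covers every versioned client path, since resources
# are matched by path prefix; these are the entries it replaces.
CLIENT_API_PREFIX = "/_matrix/client"  # value from synapse/api/urls.py

replaced = [
    f"{CLIENT_API_PREFIX}/{suffix}"
    for suffix in ("api/v1", "r0", "v1", "v3", "unstable", "v2_alpha", "versions")
]
print("\n".join(replaced))
```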
@@ -270,28 +260,6 @@ class SynapseHomeServer(HomeServer):
         if name in ["keys", "federation"]:
             resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)

-        if name == "webclient":
-            # webclient listeners are deprecated as of Synapse v1.51.0, remove it
-            # in > v1.53.0.
-            webclient_loc = self.config.server.web_client_location
-
-            if webclient_loc is None:
-                logger.warning(
-                    "Not enabling webclient resource, as web_client_location is unset."
-                )
-            elif self.config.server.web_client_location_is_redirect:
-                resources[WEB_CLIENT_PREFIX] = RootRedirect(webclient_loc)
-            else:
-                logger.warning(
-                    "Running webclient on the same domain is not recommended: "
-                    "https://github.com/matrix-org/synapse#security-note - "
-                    "after you move webclient to different host you can set "
-                    "web_client_location to its full URL to enable redirection."
-                )
-                # GZip is disabled here due to
-                # https://twistedmatrix.com/trac/ticket/7678
-                resources[WEB_CLIENT_PREFIX] = File(webclient_loc)
-
         if name == "metrics" and self.config.metrics.enable_metrics:
             resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)

@@ -383,7 +351,7 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
     hs = SynapseHomeServer(
         config.server.server_name,
         config=config,
-        version_string="Synapse/" + get_version_string(synapse),
+        version_string="Synapse/" + get_distribution_version_string("matrix-synapse"),
     )

     synapse.config.logger.setup_logging(hs, config, use_worker_options=False)
synapse/appservice/__init__.py
@@ -165,23 +165,16 @@ class ApplicationService:
                 return namespace.exclusive
         return False

-    async def _matches_user(
-        self, event: Optional[EventBase], store: Optional["DataStore"] = None
-    ) -> bool:
-        if not event:
-            return False
-
+    async def _matches_user(self, event: EventBase, store: "DataStore") -> bool:
         if self.is_interested_in_user(event.sender):
             return True

         # also check m.room.member state key
         if event.type == EventTypes.Member and self.is_interested_in_user(
             event.state_key
         ):
             return True

-        if not store:
-            return False
-
         does_match = await self.matches_user_in_member_list(event.room_id, store)
         return does_match

@@ -216,21 +209,15 @@ class ApplicationService:
             return self.is_interested_in_room(event.room_id)
         return False

-    async def _matches_aliases(
-        self, event: EventBase, store: Optional["DataStore"] = None
-    ) -> bool:
-        if not store or not event:
-            return False
-
+    async def _matches_aliases(self, event: EventBase, store: "DataStore") -> bool:
         alias_list = await store.get_aliases_for_room(event.room_id)
         for alias in alias_list:
             if self.is_interested_in_alias(alias):
                 return True

         return False

-    async def is_interested(
-        self, event: EventBase, store: Optional["DataStore"] = None
-    ) -> bool:
+    async def is_interested(self, event: EventBase, store: "DataStore") -> bool:
         """Check if this service is interested in this event.

         Args:
@@ -351,11 +338,13 @@ class AppServiceTransaction:
         id: int,
         events: List[EventBase],
         ephemeral: List[JsonDict],
+        to_device_messages: List[JsonDict],
     ):
         self.service = service
         self.id = id
         self.events = events
         self.ephemeral = ephemeral
+        self.to_device_messages = to_device_messages

     async def send(self, as_api: "ApplicationServiceApi") -> bool:
         """Sends this transaction using the provided AS API interface.
@@ -369,6 +358,7 @@ class AppServiceTransaction:
             service=self.service,
             events=self.events,
             ephemeral=self.ephemeral,
+            to_device_messages=self.to_device_messages,
             txn_id=self.id,
         )

synapse/appservice/api.py
@@ -218,8 +218,23 @@ class ApplicationServiceApi(SimpleHttpClient):
         service: "ApplicationService",
         events: List[EventBase],
         ephemeral: List[JsonDict],
+        to_device_messages: List[JsonDict],
         txn_id: Optional[int] = None,
     ) -> bool:
+        """
+        Push data to an application service.
+
+        Args:
+            service: The application service to send to.
+            events: The persistent events to send.
+            ephemeral: The ephemeral events to send.
+            to_device_messages: The to-device messages to send.
+            txn_id: A unique ID to assign to this transaction. Application services should
+                deduplicate transactions received with identical IDs.
+
+        Returns:
+            True if the task succeeded, False if it failed.
+        """
         if service.url is None:
             return True

@@ -237,13 +252,15 @@ class ApplicationServiceApi(SimpleHttpClient):
         uri = service.url + ("/transactions/%s" % urllib.parse.quote(str(txn_id)))

         # Never send ephemeral events to appservices that do not support it
+        body: Dict[str, List[JsonDict]] = {"events": serialized_events}
         if service.supports_ephemeral:
-            body = {
-                "events": serialized_events,
-                "de.sorunome.msc2409.ephemeral": ephemeral,
-            }
-        else:
-            body = {"events": serialized_events}
+            body.update(
+                {
+                    # TODO: Update to stable prefixes once MSC2409 completes FCP merge.
+                    "de.sorunome.msc2409.ephemeral": ephemeral,
+                    "de.sorunome.msc2409.to_device": to_device_messages,
+                }
+            )

         try:
             await self.put_json(
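For a service that advertises ephemeral support, the transaction body now carries two unstable-prefixed keys beside `events`. A sketch of the resulting `PUT {as_url}/transactions/{txnId}` body; the lists are left empty as placeholders:

```python
# Shape of the PUT {as_url}/transactions/{txnId} body for a service that
# supports ephemeral data; the de.sorunome.msc2409.* keys are the unstable
# identifiers from the hunk above, and the lists are placeholders.
body = {
    "events": [],                         # serialized persistent room events
    "de.sorunome.msc2409.ephemeral": [],  # typing / receipts / presence EDUs
    "de.sorunome.msc2409.to_device": [],  # to-device messages
}
```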
synapse/appservice/scheduler.py
@@ -48,7 +48,16 @@ This is all tied together by the AppServiceScheduler which DIs the required
 components.
 """
 import logging
-from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional, Set
+from typing import (
+    TYPE_CHECKING,
+    Awaitable,
+    Callable,
+    Collection,
+    Dict,
+    List,
+    Optional,
+    Set,
+)

 from synapse.appservice import ApplicationService, ApplicationServiceState
 from synapse.appservice.api import ApplicationServiceApi
@@ -71,6 +80,9 @@ MAX_PERSISTENT_EVENTS_PER_TRANSACTION = 100
 # Maximum number of ephemeral events to provide in an AS transaction.
 MAX_EPHEMERAL_EVENTS_PER_TRANSACTION = 100

+# Maximum number of to-device messages to provide in an AS transaction.
+MAX_TO_DEVICE_MESSAGES_PER_TRANSACTION = 100
+

 class ApplicationServiceScheduler:
     """Public facing API for this module. Does the required DI to tie the
@@ -97,15 +109,40 @@ class ApplicationServiceScheduler:
         for service in services:
             self.txn_ctrl.start_recoverer(service)

-    def submit_event_for_as(
-        self, service: ApplicationService, event: EventBase
+    def enqueue_for_appservice(
+        self,
+        appservice: ApplicationService,
+        events: Optional[Collection[EventBase]] = None,
+        ephemeral: Optional[Collection[JsonDict]] = None,
+        to_device_messages: Optional[Collection[JsonDict]] = None,
     ) -> None:
-        self.queuer.enqueue_event(service, event)
+        """
+        Enqueue some data to be sent off to an application service.

-    def submit_ephemeral_events_for_as(
-        self, service: ApplicationService, events: List[JsonDict]
-    ) -> None:
-        self.queuer.enqueue_ephemeral(service, events)
+        Args:
+            appservice: The application service to create and send a transaction to.
+            events: The persistent room events to send.
+            ephemeral: The ephemeral events to send.
+            to_device_messages: The to-device messages to send. These differ from normal
+                to-device messages sent to clients, as they have 'to_device_id' and
+                'to_user_id' fields.
+        """
+        # We purposefully allow this method to run with empty events/ephemeral
+        # collections, so that callers do not need to check iterable size themselves.
+        if not events and not ephemeral and not to_device_messages:
+            return
+
+        if events:
+            self.queuer.queued_events.setdefault(appservice.id, []).extend(events)
+        if ephemeral:
+            self.queuer.queued_ephemeral.setdefault(appservice.id, []).extend(ephemeral)
+        if to_device_messages:
+            self.queuer.queued_to_device_messages.setdefault(appservice.id, []).extend(
+                to_device_messages
+            )
+
+        # Kick off a new application service transaction
+        self.queuer.start_background_request(appservice)


 class _ServiceQueuer:
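The queueing pattern behind `enqueue_for_appservice` is one pending list per data type, keyed by appservice ID, with empty input silently ignored. A self-contained sketch of that pattern (names simplified from the class above):

```python
# Self-contained sketch of the per-service queueing pattern used above:
# one pending list per data type, keyed by appservice ID; empty input is
# ignored so callers need not check sizes themselves.
from typing import Dict, List, Optional

queued_events: Dict[str, List[dict]] = {}
queued_to_device: Dict[str, List[dict]] = {}


def enqueue(
    service_id: str,
    events: Optional[List[dict]] = None,
    to_device_messages: Optional[List[dict]] = None,
) -> None:
    if not events and not to_device_messages:
        return  # nothing to send
    if events:
        queued_events.setdefault(service_id, []).extend(events)
    if to_device_messages:
        queued_to_device.setdefault(service_id, []).extend(to_device_messages)
    # a real implementation would now kick off a background sender


enqueue("as1", to_device_messages=[{"to_user_id": "@alice:example.org"}])
assert len(queued_to_device["as1"]) == 1
```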
@@ -121,13 +158,15 @@ class _ServiceQueuer:
         self.queued_events: Dict[str, List[EventBase]] = {}
         # dict of {service_id: [events]}
         self.queued_ephemeral: Dict[str, List[JsonDict]] = {}
+        # dict of {service_id: [to_device_message_json]}
+        self.queued_to_device_messages: Dict[str, List[JsonDict]] = {}

         # the appservices which currently have a transaction in flight
         self.requests_in_flight: Set[str] = set()
         self.txn_ctrl = txn_ctrl
         self.clock = clock

-    def _start_background_request(self, service: ApplicationService) -> None:
+    def start_background_request(self, service: ApplicationService) -> None:
         # start a sender for this appservice if we don't already have one
         if service.id in self.requests_in_flight:
             return
@@ -136,16 +175,6 @@ class _ServiceQueuer:
             "as-sender-%s" % (service.id,), self._send_request, service
         )

-    def enqueue_event(self, service: ApplicationService, event: EventBase) -> None:
-        self.queued_events.setdefault(service.id, []).append(event)
-        self._start_background_request(service)
-
-    def enqueue_ephemeral(
-        self, service: ApplicationService, events: List[JsonDict]
-    ) -> None:
-        self.queued_ephemeral.setdefault(service.id, []).extend(events)
-        self._start_background_request(service)
-
     async def _send_request(self, service: ApplicationService) -> None:
         # sanity-check: we shouldn't get here if this service already has a sender
         # running.
@@ -162,11 +191,21 @@ class _ServiceQueuer:
             ephemeral = all_events_ephemeral[:MAX_EPHEMERAL_EVENTS_PER_TRANSACTION]
             del all_events_ephemeral[:MAX_EPHEMERAL_EVENTS_PER_TRANSACTION]

-            if not events and not ephemeral:
+            all_to_device_messages = self.queued_to_device_messages.get(
+                service.id, []
+            )
+            to_device_messages_to_send = all_to_device_messages[
+                :MAX_TO_DEVICE_MESSAGES_PER_TRANSACTION
+            ]
+            del all_to_device_messages[:MAX_TO_DEVICE_MESSAGES_PER_TRANSACTION]
+
+            if not events and not ephemeral and not to_device_messages_to_send:
                 return

             try:
-                await self.txn_ctrl.send(service, events, ephemeral)
+                await self.txn_ctrl.send(
+                    service, events, ephemeral, to_device_messages_to_send
+                )
             except Exception:
                 logger.exception("AS request failed")
             finally:
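The per-transaction caps rely on a slice-then-delete idiom: take up to N items and remove exactly those from the pending list, leaving the remainder queued for the next transaction. A self-contained demonstration:

```python
# The slice-then-delete idiom behind the per-transaction caps: take up to
# N pending items and remove exactly those, keeping the rest queued.
MAX_PER_TRANSACTION = 100

pending = [{"n": i} for i in range(250)]

batch = pending[:MAX_PER_TRANSACTION]
del pending[:MAX_PER_TRANSACTION]

assert len(batch) == 100
assert len(pending) == 150  # left for the next transaction
```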
@@ -198,10 +237,24 @@ class _TransactionController:
         service: ApplicationService,
         events: List[EventBase],
         ephemeral: Optional[List[JsonDict]] = None,
+        to_device_messages: Optional[List[JsonDict]] = None,
     ) -> None:
+        """
+        Create a transaction with the given data and send to the provided
+        application service.
+
+        Args:
+            service: The application service to send the transaction to.
+            events: The persistent events to include in the transaction.
+            ephemeral: The ephemeral events to include in the transaction.
+            to_device_messages: The to-device messages to include in the transaction.
+        """
         try:
             txn = await self.store.create_appservice_txn(
-                service=service, events=events, ephemeral=ephemeral or []
+                service=service,
+                events=events,
+                ephemeral=ephemeral or [],
+                to_device_messages=to_device_messages or [],
             )
             service_is_up = await self._is_service_up(service)
             if service_is_up:
synapse/config/cache.py
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import logging
 import os
 import re
 import threading
@@ -23,6 +24,8 @@ from synapse.python_dependencies import DependencyException, check_requirements

 from ._base import Config, ConfigError

+logger = logging.getLogger(__name__)
+
 # The prefix for all cache factor-related environment variables
 _CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"

@@ -148,11 +151,16 @@ class CacheConfig(Config):
        per_cache_factors:
          #get_users_who_share_room_with_user: 2.0

-        # Controls how long an entry can be in a cache without having been
-        # accessed before being evicted. Defaults to None, which means
-        # entries are never evicted based on time.
+        # Controls whether cache entries are evicted after a specified time
+        # period. Defaults to true. Uncomment to disable this feature.
        #
-        #expiry_time: 30m
+        #expire_caches: false
+
+        # If expire_caches is enabled, this flag controls how long an entry can
+        # be in a cache without having been accessed before being evicted.
+        # Defaults to 30m. Uncomment to set a different time to live for cache entries.
+        #
+        #cache_entry_ttl: 30m

        # Controls how long the results of a /sync request are cached for after
        # a successful response is returned. A higher duration can help clients with
@@ -217,12 +225,30 @@ class CacheConfig(Config):
                     e.message  # noqa: B306, DependencyException.message is a property
                 )

-        expiry_time = cache_config.get("expiry_time")
-        if expiry_time:
-            self.expiry_time_msec: Optional[int] = self.parse_duration(expiry_time)
+        expire_caches = cache_config.get("expire_caches", True)
+        cache_entry_ttl = cache_config.get("cache_entry_ttl", "30m")
+
+        if expire_caches:
+            self.expiry_time_msec: Optional[int] = self.parse_duration(cache_entry_ttl)
         else:
             self.expiry_time_msec = None

+        # Backwards compatibility support for the now-removed "expiry_time" config flag.
+        expiry_time = cache_config.get("expiry_time")
+
+        if expiry_time and expire_caches:
+            logger.warning(
+                "You have set two incompatible options, expiry_time and expire_caches. Please only use the "
+                "expire_caches and cache_entry_ttl options and delete the expiry_time option as it is "
+                "deprecated."
+            )
+        if expiry_time:
+            logger.warning(
+                "Expiry_time is a deprecated option, please use the expire_caches and cache_entry_ttl options "
+                "instead."
+            )
+            self.expiry_time_msec = self.parse_duration(expiry_time)
+
         self.sync_response_cache_duration = self.parse_duration(
             cache_config.get("sync_response_cache_duration", 0)
         )
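The net effect of the parsing logic above: `expire_caches` gates time-based eviction, `cache_entry_ttl` sets the TTL, and the removed `expiry_time` flag is still honoured with a warning. A simplified, self-contained sketch of the resolution (duration parsing reduced to `"Nm"` strings for brevity):

```python
# Simplified, self-contained version of the resolution logic above;
# duration parsing is reduced to "Nm" strings for brevity.
from typing import Optional


def resolve_expiry_msec(cache_config: dict) -> Optional[int]:
    def parse_duration(value: str) -> int:
        return int(value.rstrip("m")) * 60 * 1000

    expire_caches = cache_config.get("expire_caches", True)
    expiry_time = cache_config.get("expiry_time")  # deprecated flag

    expiry_msec = (
        parse_duration(cache_config.get("cache_entry_ttl", "30m"))
        if expire_caches
        else None
    )
    if expiry_time:
        # deprecated option wins, mirroring the warning path above
        expiry_msec = parse_duration(expiry_time)
    return expiry_msec


assert resolve_expiry_msec({}) == 30 * 60 * 1000
assert resolve_expiry_msec({"expire_caches": False}) is None
assert resolve_expiry_msec({"expiry_time": "10m"}) == 10 * 60 * 1000
```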
synapse/config/experimental.py
@@ -26,6 +26,8 @@ class ExperimentalConfig(Config):

         # MSC3440 (thread relation)
         self.msc3440_enabled: bool = experimental.get("msc3440_enabled", False)
+        # MSC3666: including bundled relations in /search.
+        self.msc3666_enabled: bool = experimental.get("msc3666_enabled", False)

         # MSC3026 (busy presence state)
         self.msc3026_enabled: bool = experimental.get("msc3026_enabled", False)
@@ -52,3 +54,13 @@ class ExperimentalConfig(Config):
         self.msc3202_device_masquerading_enabled: bool = experimental.get(
             "msc3202_device_masquerading", False
         )
+
+        # MSC2409 (this setting only relates to optionally sending to-device messages).
+        # Presence, typing and read receipt EDUs are already sent to application services that
+        # have opted in to receive them. If enabled, this adds to-device messages to that list.
+        self.msc2409_to_device_messages_enabled: bool = experimental.get(
+            "msc2409_to_device_messages_enabled", False
+        )
+
+        # MSC3706 (server-side support for partial state in /send_join responses)
+        self.msc3706_enabled: bool = experimental.get("msc3706_enabled", False)
synapse/config/logger.py
@@ -22,6 +22,7 @@ from string import Template
 from typing import TYPE_CHECKING, Any, Dict, Optional

 import yaml
+from matrix_common.versionstring import get_distribution_version_string
 from zope.interface import implementer

 from twisted.logger import (
@@ -32,11 +33,9 @@ from twisted.logger import (
     globalLogBeginner,
 )

-import synapse
 from synapse.logging._structured import setup_structured_logging
 from synapse.logging.context import LoggingContextFilter
 from synapse.logging.filter import MetadataFilter
-from synapse.util.versionstring import get_version_string

 from ._base import Config, ConfigError

@@ -347,6 +346,10 @@ def setup_logging(

     # Log immediately so we can grep backwards.
     logging.warning("***** STARTING SERVER *****")
-    logging.warning("Server %s version %s", sys.argv[0], get_version_string(synapse))
+    logging.warning(
+        "Server %s version %s",
+        sys.argv[0],
+        get_distribution_version_string("matrix-synapse"),
+    )
     logging.info("Server hostname: %s", config.server.server_name)
     logging.info("Instance name: %s", hs.get_instance_name())
synapse/config/ratelimiting.py
@@ -134,6 +134,14 @@ class RatelimitConfig(Config):
             defaults={"per_second": 0.003, "burst_count": 5},
         )

+        self.rc_third_party_invite = RateLimitConfig(
+            config.get("rc_third_party_invite", {}),
+            defaults={
+                "per_second": self.rc_message.per_second,
+                "burst_count": self.rc_message.burst_count,
+            },
+        )
+
     def generate_config_section(self, **kwargs):
         return """\
         ## Ratelimiting ##
@@ -168,6 +176,9 @@ class RatelimitConfig(Config):
        # - one for ratelimiting how often a user or IP can attempt to validate a 3PID.
        # - two for ratelimiting how often invites can be sent in a room or to a
        #   specific user.
+        # - one for ratelimiting 3PID invites (i.e. invites sent to a third-party ID
+        #   such as an email address or a phone number) based on the account that's
+        #   sending the invite.
        #
        # The defaults are as shown below.
        #
@@ -217,6 +228,10 @@ class RatelimitConfig(Config):
        #   per_user:
        #     per_second: 0.003
        #     burst_count: 5
+        #
+        #rc_third_party_invite:
+        #  per_second: 0.2
+        #  burst_count: 10

        # Ratelimiting settings for incoming federation
        #
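Synapse's ratelimiter is more elaborate, but the two knobs in each `rc_*` block behave like a token bucket: `burst_count` is the bucket capacity and `per_second` the refill rate. A minimal sketch using the sample defaults above:

```python
# Minimal token-bucket reading of per_second/burst_count; Synapse's real
# ratelimiter is more involved, but the two knobs mean the same thing:
# burst_count is the bucket capacity, per_second the refill rate.
import time


class TokenBucket:
    def __init__(self, per_second: float, burst_count: float) -> None:
        self.rate = per_second
        self.capacity = burst_count
        self.tokens = burst_count
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


limiter = TokenBucket(per_second=0.2, burst_count=10)  # sample defaults above
assert all(limiter.allow() for _ in range(10))  # the burst is allowed
assert not limiter.allow()                      # then requests are throttled
```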
synapse/config/server.py
@@ -179,7 +179,6 @@ KNOWN_RESOURCES = {
     "openid",
     "replication",
     "static",
-    "webclient",
 }


@@ -519,16 +518,12 @@ class ServerConfig(Config):
         self.listeners = l2

         self.web_client_location = config.get("web_client_location", None)
-        self.web_client_location_is_redirect = self.web_client_location and (
+        # Non-HTTP(S) web client location is not supported.
+        if self.web_client_location and not (
             self.web_client_location.startswith("http://")
             or self.web_client_location.startswith("https://")
-        )
-        # A non-HTTP(S) web client location is deprecated.
-        if self.web_client_location and not self.web_client_location_is_redirect:
-            logger.warning(NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING)
-
-        # Warn if webclient is configured for a worker.
-        _warn_if_webclient_configured(self.listeners)
+        ):
+            raise ConfigError("web_client_location must point to a HTTP(S) URL.")

         self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
         self.gc_seconds = self.read_gc_intervals(config.get("gc_min_interval", None))
@@ -656,19 +651,6 @@ class ServerConfig(Config):
             False,
         )

-        # List of users trialing the new experimental default push rules. This setting is
-        # not included in the sample configuration file on purpose as it's a temporary
-        # hack, so that some users can trial the new defaults without impacting every
-        # user on the homeserver.
-        users_new_default_push_rules: list = (
-            config.get("users_new_default_push_rules") or []
-        )
-        if not isinstance(users_new_default_push_rules, list):
-            raise ConfigError("'users_new_default_push_rules' must be a list")
-
-        # Turn the list into a set to improve lookup speed.
-        self.users_new_default_push_rules: set = set(users_new_default_push_rules)
-
         # Whitelist of domain names that given next_link parameters must have
         next_link_domain_whitelist: Optional[List[str]] = config.get(
             "next_link_domain_whitelist"
@ -1364,11 +1346,16 @@ def parse_listener_def(listener: Any) -> ListenerConfig:
|
|||||||
|
|
||||||
http_config = None
|
http_config = None
|
||||||
if listener_type == "http":
|
if listener_type == "http":
|
||||||
|
try:
|
||||||
|
resources = [
|
||||||
|
HttpResourceConfig(**res) for res in listener.get("resources", [])
|
||||||
|
]
|
||||||
|
except ValueError as e:
|
||||||
|
raise ConfigError("Unknown listener resource") from e
|
||||||
|
|
||||||
http_config = HttpListenerConfig(
|
http_config = HttpListenerConfig(
|
||||||
x_forwarded=listener.get("x_forwarded", False),
|
x_forwarded=listener.get("x_forwarded", False),
|
||||||
resources=[
|
resources=resources,
|
||||||
HttpResourceConfig(**res) for res in listener.get("resources", [])
|
|
||||||
],
|
|
||||||
additional_resources=listener.get("additional_resources", {}),
|
additional_resources=listener.get("additional_resources", {}),
|
||||||
tag=listener.get("tag"),
|
tag=listener.get("tag"),
|
||||||
)
|
)
|
||||||
@ -1376,30 +1363,6 @@ def parse_listener_def(listener: Any) -> ListenerConfig:
|
|||||||
return ListenerConfig(port, bind_addresses, listener_type, tls, http_config)
|
return ListenerConfig(port, bind_addresses, listener_type, tls, http_config)
|
||||||
|
|
||||||
|
|
||||||
NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING = """
|
|
||||||
Synapse no longer supports serving a web client. To remove this warning,
|
|
||||||
configure 'web_client_location' with an HTTP(S) URL.
|
|
||||||
"""
|
|
||||||
|
|
||||||
|
|
||||||
NO_MORE_WEB_CLIENT_WARNING = """
|
|
||||||
Synapse no longer includes a web client. To redirect the root resource to a web client, configure
|
|
||||||
'web_client_location'. To remove this warning, remove 'webclient' from the 'listeners'
|
|
||||||
configuration.
|
|
||||||
"""
|
|
||||||
|
|
||||||
|
|
||||||
def _warn_if_webclient_configured(listeners: Iterable[ListenerConfig]) -> None:
|
|
||||||
for listener in listeners:
|
|
||||||
if not listener.http_options:
|
|
||||||
continue
|
|
||||||
for res in listener.http_options.resources:
|
|
||||||
for name in res.names:
|
|
||||||
if name == "webclient":
|
|
||||||
logger.warning(NO_MORE_WEB_CLIENT_WARNING)
|
|
||||||
return
|
|
||||||
|
|
||||||
|
|
||||||
_MANHOLE_SETTINGS_SCHEMA = {
|
_MANHOLE_SETTINGS_SCHEMA = {
|
||||||
"type": "object",
|
"type": "object",
|
||||||
"properties": {
|
"properties": {
|
||||||
synapse/event_auth.py
@@ -14,6 +14,7 @@
 # limitations under the License.

 import logging
+import typing
 from typing import Any, Dict, Iterable, List, Optional, Set, Tuple, Union

 from canonicaljson import encode_canonical_json
@@ -34,15 +35,18 @@ from synapse.api.room_versions import (
     EventFormatVersions,
     RoomVersion,
 )
-from synapse.events import EventBase
-from synapse.events.builder import EventBuilder
 from synapse.types import StateMap, UserID, get_domain_from_id

+if typing.TYPE_CHECKING:
+    # conditional imports to avoid import cycle
+    from synapse.events import EventBase
+    from synapse.events.builder import EventBuilder
+
 logger = logging.getLogger(__name__)


 def validate_event_for_room_version(
-    room_version_obj: RoomVersion, event: EventBase
+    room_version_obj: RoomVersion, event: "EventBase"
 ) -> None:
     """Ensure that the event complies with the limits, and has the right signatures

@@ -113,7 +117,9 @@ def validate_event_for_room_version(


 def check_auth_rules_for_event(
-    room_version_obj: RoomVersion, event: EventBase, auth_events: Iterable[EventBase]
+    room_version_obj: RoomVersion,
+    event: "EventBase",
+    auth_events: Iterable["EventBase"],
 ) -> None:
     """Check that an event complies with the auth rules

@@ -256,7 +262,7 @@ def check_auth_rules_for_event(
     logger.debug("Allowing! %s", event)


-def _check_size_limits(event: EventBase) -> None:
+def _check_size_limits(event: "EventBase") -> None:
     if len(event.user_id) > 255:
         raise EventSizeError("'user_id' too large")
     if len(event.room_id) > 255:
@@ -271,7 +277,7 @@ def _check_size_limits(event: EventBase) -> None:
         raise EventSizeError("event too large")


-def _can_federate(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
+def _can_federate(event: "EventBase", auth_events: StateMap["EventBase"]) -> bool:
     creation_event = auth_events.get((EventTypes.Create, ""))
     # There should always be a creation event, but if not don't federate.
     if not creation_event:
@@ -281,7 +287,7 @@ def _can_federate(event: EventBase, auth_events: StateMap[EventBase]) -> bool:


 def _is_membership_change_allowed(
-    room_version: RoomVersion, event: EventBase, auth_events: StateMap[EventBase]
+    room_version: RoomVersion, event: "EventBase", auth_events: StateMap["EventBase"]
 ) -> None:
     """
     Confirms that the event which changes membership is an allowed change.
@@ -471,7 +477,7 @@ def _is_membership_change_allowed(


 def _check_event_sender_in_room(
-    event: EventBase, auth_events: StateMap[EventBase]
+    event: "EventBase", auth_events: StateMap["EventBase"]
 ) -> None:
     key = (EventTypes.Member, event.user_id)
     member_event = auth_events.get(key)
@@ -479,7 +485,9 @@ def _check_event_sender_in_room(
     _check_joined_room(member_event, event.user_id, event.room_id)


-def _check_joined_room(member: Optional[EventBase], user_id: str, room_id: str) -> None:
+def _check_joined_room(
+    member: Optional["EventBase"], user_id: str, room_id: str
+) -> None:
     if not member or member.membership != Membership.JOIN:
         raise AuthError(
             403, "User %s not in room %s (%s)" % (user_id, room_id, repr(member))
@@ -487,7 +495,7 @@ def _check_joined_room(member: Optional[EventBase], user_id: str, room_id: str)


 def get_send_level(
-    etype: str, state_key: Optional[str], power_levels_event: Optional[EventBase]
+    etype: str, state_key: Optional[str], power_levels_event: Optional["EventBase"]
 ) -> int:
     """Get the power level required to send an event of a given type

@@ -523,7 +531,7 @@ def get_send_level(
     return int(send_level)


-def _can_send_event(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
+def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> bool:
     power_levels_event = get_power_level_event(auth_events)

     send_level = get_send_level(event.type, event.get("state_key"), power_levels_event)
@@ -547,8 +555,8 @@ def _can_send_event(event: EventBase, auth_events: StateMap[EventBase]) -> bool:

 def check_redaction(
     room_version_obj: RoomVersion,
-    event: EventBase,
-    auth_events: StateMap[EventBase],
+    event: "EventBase",
+    auth_events: StateMap["EventBase"],
 ) -> bool:
     """Check whether the event sender is allowed to redact the target event.

@@ -585,8 +593,8 @@ def check_redaction(

 def check_historical(
     room_version_obj: RoomVersion,
-    event: EventBase,
-    auth_events: StateMap[EventBase],
+    event: "EventBase",
+    auth_events: StateMap["EventBase"],
 ) -> None:
     """Check whether the event sender is allowed to send historical related
     events like "insertion", "batch", and "marker".
@@ -616,8 +624,8 @@ def check_historical(

 def _check_power_levels(
     room_version_obj: RoomVersion,
-    event: EventBase,
-    auth_events: StateMap[EventBase],
+    event: "EventBase",
+    auth_events: StateMap["EventBase"],
 ) -> None:
     user_list = event.content.get("users", {})
     # Validate users
@@ -710,11 +718,11 @@ def _check_power_levels(
             )


-def get_power_level_event(auth_events: StateMap[EventBase]) -> Optional[EventBase]:
+def get_power_level_event(auth_events: StateMap["EventBase"]) -> Optional["EventBase"]:
     return auth_events.get((EventTypes.PowerLevels, ""))


-def get_user_power_level(user_id: str, auth_events: StateMap[EventBase]) -> int:
+def get_user_power_level(user_id: str, auth_events: StateMap["EventBase"]) -> int:
     """Get a user's power level

     Args:
@@ -750,7 +758,7 @@ def get_user_power_level(user_id: str, auth_events: StateMap[EventBase]) -> int:
         return 0


-def get_named_level(auth_events: StateMap[EventBase], name: str, default: int) -> int:
+def get_named_level(auth_events: StateMap["EventBase"], name: str, default: int) -> int:
     power_level_event = get_power_level_event(auth_events)

     if not power_level_event:
@@ -763,7 +771,9 @@ def get_named_level(auth_events: StateMap[EventBase], name: str, default: int) -
     return default


-def _verify_third_party_invite(event: EventBase, auth_events: StateMap[EventBase]):
+def _verify_third_party_invite(
+    event: "EventBase", auth_events: StateMap["EventBase"]
+) -> bool:
     """
     Validates that the invite event is authorized by a previous third-party invite.

@@ -827,7 +837,7 @@ def _verify_third_party_invite(event: EventBase, auth_events: StateMap[EventBase
     return False


-def get_public_keys(invite_event: EventBase) -> List[Dict[str, Any]]:
+def get_public_keys(invite_event: "EventBase") -> List[Dict[str, Any]]:
     public_keys = []
     if "public_key" in invite_event.content:
         o = {"public_key": invite_event.content["public_key"]}
@@ -839,7 +849,7 @@ def get_public_keys(invite_event: EventBase) -> List[Dict[str, Any]]:


 def auth_types_for_event(
-    room_version: RoomVersion, event: Union[EventBase, EventBuilder]
+    room_version: RoomVersion, event: Union["EventBase", "EventBuilder"]
 ) -> Set[Tuple[str, str]]:
     """Given an event, return a list of (EventType, StateKey) that may be
     needed to auth the event. The returned list may be a superset of what
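The pattern used throughout this file: import the types only under `TYPE_CHECKING` and quote them in annotations, so the import cycle never happens at runtime. A self-contained sketch with hypothetical module names:

```python
# The annotation pattern used throughout this file: import heavy or cyclic
# types only for the type checker and quote them in signatures, so nothing
# extra is imported at runtime. heavy_module/HeavyType are hypothetical.
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from heavy_module import HeavyType  # would cycle if imported at runtime


def process(item: "HeavyType") -> None:  # quoted: evaluated lazily, if ever
    print(item)
```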
@@ -48,9 +48,6 @@ USER_MAY_JOIN_ROOM_CALLBACK = Callable[[str, str, bool], Awaitable[bool]]
 USER_MAY_INVITE_CALLBACK = Callable[[str, str, str], Awaitable[bool]]
 USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[[str, str, str, str], Awaitable[bool]]
 USER_MAY_CREATE_ROOM_CALLBACK = Callable[[str], Awaitable[bool]]
-USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK = Callable[
-    [str, List[str], List[Dict[str, str]]], Awaitable[bool]
-]
 USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[[str, RoomAlias], Awaitable[bool]]
 USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[[str, str], Awaitable[bool]]
 CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[Dict[str, str]], Awaitable[bool]]
@@ -174,9 +171,6 @@ class SpamChecker:
             USER_MAY_SEND_3PID_INVITE_CALLBACK
         ] = []
         self._user_may_create_room_callbacks: List[USER_MAY_CREATE_ROOM_CALLBACK] = []
-        self._user_may_create_room_with_invites_callbacks: List[
-            USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
-        ] = []
         self._user_may_create_room_alias_callbacks: List[
             USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
         ] = []
@@ -198,9 +192,6 @@ class SpamChecker:
         user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
         user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,
         user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
-        user_may_create_room_with_invites: Optional[
-            USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
-        ] = None,
         user_may_create_room_alias: Optional[
             USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
         ] = None,
@@ -229,11 +220,6 @@ class SpamChecker:
         if user_may_create_room is not None:
             self._user_may_create_room_callbacks.append(user_may_create_room)
 
-        if user_may_create_room_with_invites is not None:
-            self._user_may_create_room_with_invites_callbacks.append(
-                user_may_create_room_with_invites,
-            )
-
         if user_may_create_room_alias is not None:
             self._user_may_create_room_alias_callbacks.append(
                 user_may_create_room_alias,
@@ -359,34 +345,6 @@ class SpamChecker:
 
         return True
 
-    async def user_may_create_room_with_invites(
-        self,
-        userid: str,
-        invites: List[str],
-        threepid_invites: List[Dict[str, str]],
-    ) -> bool:
-        """Checks if a given user may create a room with invites
-
-        If this method returns false, the creation request will be rejected.
-
-        Args:
-            userid: The ID of the user attempting to create a room
-            invites: The IDs of the Matrix users to be invited if the room creation is
-                allowed.
-            threepid_invites: The threepids to be invited if the room creation is allowed,
-                as a dict including a "medium" key indicating the threepid's medium (e.g.
-                "email") and an "address" key indicating the threepid's address (e.g.
-                "alice@example.com")
-
-        Returns:
-            True if the user may create the room, otherwise False
-        """
-        for callback in self._user_may_create_room_with_invites_callbacks:
-            if await callback(userid, invites, threepid_invites) is False:
-                return False
-
-        return True
-
     async def user_may_create_room_alias(
         self, userid: str, room_alias: RoomAlias
     ) -> bool:
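The removal above leaves `user_may_create_room` as the surviving veto point for room creation. As a minimal sketch of the callback pattern all of these SpamChecker methods share — plain asyncio rather than Synapse's actual module plumbing, with `MiniSpamChecker` and `deny_guests` being invented names — each registered callback is awaited in turn, and the first one to return False vetoes the action:

import asyncio
from typing import Awaitable, Callable, List

UserMayCreateRoom = Callable[[str], Awaitable[bool]]

class MiniSpamChecker:
    def __init__(self) -> None:
        self._user_may_create_room_callbacks: List[UserMayCreateRoom] = []

    def register(self, cb: UserMayCreateRoom) -> None:
        self._user_may_create_room_callbacks.append(cb)

    async def user_may_create_room(self, user_id: str) -> bool:
        # first callback to return False wins; otherwise the action is allowed
        for callback in self._user_may_create_room_callbacks:
            if await callback(user_id) is False:
                return False
        return True

async def deny_guests(user_id: str) -> bool:
    # hypothetical policy: block users whose localpart marks them as guests
    return not user_id.startswith("@guest_")

async def main() -> None:
    checker = MiniSpamChecker()
    checker.register(deny_guests)
    print(await checker.user_may_create_room("@alice:example.com"))    # True
    print(await checker.user_may_create_room("@guest_1:example.com"))  # False

asyncio.run(main())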
@@ -20,6 +20,7 @@ from typing import (
     Any,
     Awaitable,
     Callable,
+    Collection,
     Dict,
     Iterable,
     List,
@@ -64,7 +65,7 @@ from synapse.replication.http.federation import (
     ReplicationGetQueryRestServlet,
 )
 from synapse.storage.databases.main.lock import Lock
-from synapse.types import JsonDict, get_domain_from_id
+from synapse.types import JsonDict, StateMap, get_domain_from_id
 from synapse.util import json_decoder, unwrapFirstError
 from synapse.util.async_helpers import Linearizer, concurrently_execute, gather_results
 from synapse.util.caches.response_cache import ResponseCache
@@ -571,7 +572,7 @@ class FederationServer(FederationBase):
     ) -> JsonDict:
         state_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
         auth_chain_ids = await self.store.get_auth_chain_ids(room_id, state_ids)
-        return {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
+        return {"pdu_ids": state_ids, "auth_chain_ids": list(auth_chain_ids)}
 
     async def _on_context_state_request_compute(
         self, room_id: str, event_id: Optional[str]
@@ -645,27 +646,61 @@ class FederationServer(FederationBase):
         return {"event": ret_pdu.get_pdu_json(time_now)}
 
     async def on_send_join_request(
-        self, origin: str, content: JsonDict, room_id: str
+        self,
+        origin: str,
+        content: JsonDict,
+        room_id: str,
+        caller_supports_partial_state: bool = False,
     ) -> Dict[str, Any]:
         event, context = await self._on_send_membership_event(
             origin, content, Membership.JOIN, room_id
         )
 
         prev_state_ids = await context.get_prev_state_ids()
-        state_ids = list(prev_state_ids.values())
-        auth_chain = await self.store.get_auth_chain(room_id, state_ids)
-        state = await self.store.get_events(state_ids)
+
+        state_event_ids: Collection[str]
+        servers_in_room: Optional[Collection[str]]
+        if caller_supports_partial_state:
+            state_event_ids = _get_event_ids_for_partial_state_join(
+                event, prev_state_ids
+            )
+            servers_in_room = await self.state.get_hosts_in_room_at_events(
+                room_id, event_ids=event.prev_event_ids()
+            )
+        else:
+            state_event_ids = prev_state_ids.values()
+            servers_in_room = None
+
+        auth_chain_event_ids = await self.store.get_auth_chain_ids(
+            room_id, state_event_ids
+        )
+
+        # if the caller has opted in, we can omit any auth_chain events which are
+        # already in state_event_ids
+        if caller_supports_partial_state:
+            auth_chain_event_ids.difference_update(state_event_ids)
+
+        auth_chain_events = await self.store.get_events_as_list(auth_chain_event_ids)
+        state_events = await self.store.get_events_as_list(state_event_ids)
 
+        # we try to do all the async stuff before this point, so that time_now is as
+        # accurate as possible.
         time_now = self._clock.time_msec()
-        event_json = event.get_pdu_json()
-        return {
+        event_json = event.get_pdu_json(time_now)
+        resp = {
             # TODO Remove the unstable prefix when servers have updated.
             "org.matrix.msc3083.v2.event": event_json,
             "event": event_json,
-            "state": [p.get_pdu_json(time_now) for p in state.values()],
-            "auth_chain": [p.get_pdu_json(time_now) for p in auth_chain],
+            "state": [p.get_pdu_json(time_now) for p in state_events],
+            "auth_chain": [p.get_pdu_json(time_now) for p in auth_chain_events],
+            "org.matrix.msc3706.partial_state": caller_supports_partial_state,
         }
 
+        if servers_in_room is not None:
+            resp["org.matrix.msc3706.servers_in_room"] = list(servers_in_room)
+
+        return resp
+
     async def on_make_leave_request(
         self, origin: str, room_id: str, user_id: str
     ) -> Dict[str, Any]:
@@ -1339,3 +1374,39 @@ class FederationHandlerRegistry:
             # error.
             logger.warning("No handler registered for query type %s", query_type)
             raise NotFoundError("No handler for Query type '%s'" % (query_type,))
+
+
+def _get_event_ids_for_partial_state_join(
+    join_event: EventBase,
+    prev_state_ids: StateMap[str],
+) -> Collection[str]:
+    """Calculate state to be retuned in a partial_state send_join
+
+    Args:
+        join_event: the join event being send_joined
+        prev_state_ids: the event ids of the state before the join
+
+    Returns:
+        the event ids to be returned
+    """
+
+    # return all non-member events
+    state_event_ids = {
+        event_id
+        for (event_type, state_key), event_id in prev_state_ids.items()
+        if event_type != EventTypes.Member
+    }
+
+    # we also need the current state of the current user (it's going to
+    # be an auth event for the new join, so we may as well return it)
+    current_membership_event_id = prev_state_ids.get(
+        (EventTypes.Member, join_event.state_key)
+    )
+    if current_membership_event_id is not None:
+        state_event_ids.add(current_membership_event_id)
+
+    # TODO: return a few more members:
+    #   - those with invites
+    #   - those that are kicked? / banned
+
+    return state_event_ids
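The state-reduction rule implemented by `_get_event_ids_for_partial_state_join` can be shown standalone, under the assumption that a state map is just a `(event_type, state_key) -> event_id` dict: keep every non-member state event, plus the joining user's own membership. The literal "m.room.member" stands in for `EventTypes.Member`:

from typing import Dict, Set, Tuple

StateMap = Dict[Tuple[str, str], str]  # (event_type, state_key) -> event_id

def partial_state_event_ids(joining_user: str, prev_state_ids: StateMap) -> Set[str]:
    # keep all non-member state events
    state_event_ids = {
        event_id
        for (event_type, state_key), event_id in prev_state_ids.items()
        if event_type != "m.room.member"
    }
    # plus the joining user's current membership, if any
    current_membership_event_id = prev_state_ids.get(("m.room.member", joining_user))
    if current_membership_event_id is not None:
        state_event_ids.add(current_membership_event_id)
    return state_event_ids

state = {
    ("m.room.create", ""): "$create",
    ("m.room.power_levels", ""): "$power",
    ("m.room.member", "@alice:a"): "$alice",
    ("m.room.member", "@bob:b"): "$bob",
}
assert partial_state_event_ids("@alice:a", state) == {"$create", "$power", "$alice"}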
@@ -15,6 +15,7 @@
 import functools
 import logging
 import re
+import time
 from typing import TYPE_CHECKING, Any, Awaitable, Callable, Optional, Tuple, cast
 
 from synapse.api.errors import Codes, FederationDeniedError, SynapseError
@@ -24,8 +25,10 @@ from synapse.http.servlet import parse_json_object_from_request
 from synapse.http.site import SynapseRequest
 from synapse.logging.context import run_in_background
 from synapse.logging.opentracing import (
+    active_span,
     set_tag,
     span_context_from_request,
+    start_active_span,
     start_active_span_follows_from,
     whitelisted_homeserver,
 )
@@ -265,9 +268,10 @@ class BaseFederationServlet:
             content = parse_json_object_from_request(request)
 
             try:
-                origin: Optional[str] = await authenticator.authenticate_request(
-                    request, content
-                )
+                with start_active_span("authenticate_request"):
+                    origin: Optional[str] = await authenticator.authenticate_request(
+                        request, content
+                    )
             except NoAuthenticationError:
                 origin = None
                 if self.REQUIRE_AUTH:
@@ -282,32 +286,57 @@ class BaseFederationServlet:
             # update the active opentracing span with the authenticated entity
             set_tag("authenticated_entity", origin)
 
-            # if the origin is authenticated and whitelisted, link to its span context
+            # if the origin is authenticated and whitelisted, use its span context
+            # as the parent.
             context = None
             if origin and whitelisted_homeserver(origin):
                 context = span_context_from_request(request)
 
-            scope = start_active_span_follows_from(
-                "incoming-federation-request", contexts=(context,) if context else ()
-            )
+            if context:
+                servlet_span = active_span()
+                # a scope which uses the origin's context as a parent
+                processing_start_time = time.time()
+                scope = start_active_span_follows_from(
+                    "incoming-federation-request",
+                    child_of=context,
+                    contexts=(servlet_span,),
+                    start_time=processing_start_time,
+                )
 
-            with scope:
-                if origin and self.RATELIMIT:
-                    with ratelimiter.ratelimit(origin) as d:
-                        await d
-                        if request._disconnected:
-                            logger.warning(
-                                "client disconnected before we started processing "
-                                "request"
-                            )
-                            return None
-                        response = await func(
-                            origin, content, request.args, *args, **kwargs
-                        )
-                else:
-                    response = await func(
-                        origin, content, request.args, *args, **kwargs
-                    )
+            else:
+                # just use our context as a parent
+                scope = start_active_span(
+                    "incoming-federation-request",
+                )
+
+            try:
+                with scope:
+                    if origin and self.RATELIMIT:
+                        with ratelimiter.ratelimit(origin) as d:
+                            await d
+                            if request._disconnected:
+                                logger.warning(
+                                    "client disconnected before we started processing "
+                                    "request"
+                                )
+                                return None
+                            response = await func(
+                                origin, content, request.args, *args, **kwargs
+                            )
+                    else:
+                        response = await func(
+                            origin, content, request.args, *args, **kwargs
+                        )
+            finally:
+                # if we used the origin's context as the parent, add a new span using
+                # the servlet span as a parent, so that we have a link
+                if context:
+                    scope2 = start_active_span_follows_from(
+                        "process-federation_request",
+                        contexts=(scope.span,),
+                        start_time=processing_start_time,
+                    )
+                    scope2.close()
 
             return response
 
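A simplified, self-contained model of the span-parenting decision above, using dummy `Span` objects rather than the real opentracing API (all names here are invented for illustration): when the remote homeserver's span context is trusted, the request span is parented on the remote trace, and a second "process-federation_request" span links the work back into the local servlet trace; otherwise the local servlet span is simply the parent.

from typing import List, Optional

class Span:
    def __init__(self, name: str, child_of: Optional["Span"], follows_from: List["Span"]):
        self.name = name
        self.child_of = child_of
        self.follows_from = follows_from

def spans_for_request(remote_context: Optional[Span], servlet_span: Span) -> List[Span]:
    if remote_context is not None:
        # parent the main span on the remote trace, noting the servlet span
        main = Span("incoming-federation-request", remote_context, [servlet_span])
        # ... the request is handled under `main` ...
        # then link the processing back into the local trace
        link = Span("process-federation_request", None, [main])
        return [main, link]
    # no trusted remote context: just hang off our own servlet span
    return [Span("incoming-federation-request", servlet_span, [])]

remote = Span("remote-trace", None, [])
servlet = Span("servlet", None, [])
print([s.name for s in spans_for_request(remote, servlet)])

This mirrors the design choice in the diff: the expensive request work lands in whichever trace the remote server started, while the cheap follow-up span keeps the local trace navigable.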
@@ -24,9 +24,9 @@ from typing import (
     Union,
 )
 
+from matrix_common.versionstring import get_distribution_version_string
 from typing_extensions import Literal
 
-import synapse
 from synapse.api.errors import Codes, SynapseError
 from synapse.api.room_versions import RoomVersions
 from synapse.api.urls import FEDERATION_UNSTABLE_PREFIX, FEDERATION_V2_PREFIX
@@ -42,7 +42,6 @@ from synapse.http.servlet import (
 )
 from synapse.types import JsonDict
 from synapse.util.ratelimitutils import FederationRateLimiter
-from synapse.util.versionstring import get_version_string
 
 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -109,11 +108,11 @@ class FederationSendServlet(BaseFederationServerServlet):
         )
 
         if issue_8631_logger.isEnabledFor(logging.DEBUG):
-            DEVICE_UPDATE_EDUS = {"m.device_list_update", "m.signing_key_update"}
+            DEVICE_UPDATE_EDUS = ["m.device_list_update", "m.signing_key_update"]
             device_list_updates = [
                 edu.content
                 for edu in transaction_data.get("edus", [])
-                if edu.edu_type in DEVICE_UPDATE_EDUS
+                if edu.get("edu_type") in DEVICE_UPDATE_EDUS
             ]
             if device_list_updates:
                 issue_8631_logger.debug(
@@ -412,6 +411,16 @@ class FederationV2SendJoinServlet(BaseFederationServerServlet):
 
     PREFIX = FEDERATION_V2_PREFIX
 
+    def __init__(
+        self,
+        hs: "HomeServer",
+        authenticator: Authenticator,
+        ratelimiter: FederationRateLimiter,
+        server_name: str,
+    ):
+        super().__init__(hs, authenticator, ratelimiter, server_name)
+        self._msc3706_enabled = hs.config.experimental.msc3706_enabled
+
     async def on_PUT(
         self,
         origin: str,
@@ -422,7 +431,15 @@ class FederationV2SendJoinServlet(BaseFederationServerServlet):
     ) -> Tuple[int, JsonDict]:
         # TODO(paul): assert that event_id parsed from path actually
         #   match those given in content
-        result = await self.handler.on_send_join_request(origin, content, room_id)
+
+        partial_state = False
+        if self._msc3706_enabled:
+            partial_state = parse_boolean_from_args(
+                query, "org.matrix.msc3706.partial_state", default=False
+            )
+        result = await self.handler.on_send_join_request(
+            origin, content, room_id, caller_supports_partial_state=partial_state
+        )
         return 200, result
 
 
@@ -598,7 +615,12 @@ class FederationVersionServlet(BaseFederationServlet):
     ) -> Tuple[int, JsonDict]:
         return (
             200,
-            {"server": {"name": "Synapse", "version": get_version_string(synapse)}},
+            {
+                "server": {
+                    "name": "Synapse",
+                    "version": get_distribution_version_string("matrix-synapse"),
+                }
+            },
         )
 
 
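From the joining side, opting in to a partial-state response amounts to adding the query parameter that the servlet parses above. A sketch of the URL construction only — the v2 send_join path shape is assumed from the Matrix federation API, and the actual signed request machinery is omitted:

from urllib.parse import quote, urlencode

def make_send_join_path(room_id: str, event_id: str, prefer_partial_state: bool) -> str:
    # assumed endpoint shape: /_matrix/federation/v2/send_join/{roomId}/{eventId}
    path = "/_matrix/federation/v2/send_join/%s/%s" % (
        quote(room_id, safe=""),
        quote(event_id, safe=""),
    )
    if prefer_partial_state:
        path += "?" + urlencode({"org.matrix.msc3706.partial_state": "true"})
    return path

print(make_send_join_path("!room:example.com", "$event", True))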
@@ -55,6 +55,9 @@ class ApplicationServicesHandler:
         self.clock = hs.get_clock()
         self.notify_appservices = hs.config.appservice.notify_appservices
         self.event_sources = hs.get_event_sources()
+        self._msc2409_to_device_messages_enabled = (
+            hs.config.experimental.msc2409_to_device_messages_enabled
+        )
 
         self.current_max = 0
         self.is_processing = False
@@ -132,7 +135,9 @@ class ApplicationServicesHandler:
 
                     # Fork off pushes to these services
                     for service in services:
-                        self.scheduler.submit_event_for_as(service, event)
+                        self.scheduler.enqueue_for_appservice(
+                            service, events=[event]
+                        )
 
                     now = self.clock.time_msec()
                     ts = await self.store.get_received_ts(event.event_id)
@@ -199,8 +204,9 @@ class ApplicationServicesHandler:
         Args:
             stream_key: The stream the event came from.
 
-                `stream_key` can be "typing_key", "receipt_key" or "presence_key". Any other
-                value for `stream_key` will cause this function to return early.
+                `stream_key` can be "typing_key", "receipt_key", "presence_key" or
+                "to_device_key". Any other value for `stream_key` will cause this function
+                to return early.
 
         Ephemeral events will only be pushed to appservices that have opted into
         receiving them by setting `push_ephemeral` to true in their registration
@@ -216,8 +222,15 @@ class ApplicationServicesHandler:
         if not self.notify_appservices:
             return
 
-        # Ignore any unsupported streams
-        if stream_key not in ("typing_key", "receipt_key", "presence_key"):
+        # Notify appservices of updates in ephemeral event streams.
+        # Only the following streams are currently supported.
+        # FIXME: We should use constants for these values.
+        if stream_key not in (
+            "typing_key",
+            "receipt_key",
+            "presence_key",
+            "to_device_key",
+        ):
             return
 
         # Assert that new_token is an integer (and not a RoomStreamToken).
@@ -233,6 +246,13 @@ class ApplicationServicesHandler:
         # Additional context: https://github.com/matrix-org/synapse/pull/11137
         assert isinstance(new_token, int)
 
+        # Ignore to-device messages if the feature flag is not enabled
+        if (
+            stream_key == "to_device_key"
+            and not self._msc2409_to_device_messages_enabled
+        ):
+            return
+
         # Check whether there are any appservices which have registered to receive
         # ephemeral events.
         #
@@ -266,7 +286,7 @@ class ApplicationServicesHandler:
             with Measure(self.clock, "notify_interested_services_ephemeral"):
                 for service in services:
                     if stream_key == "typing_key":
-                        # Note that we don't persist the token (via set_type_stream_id_for_appservice)
+                        # Note that we don't persist the token (via set_appservice_stream_type_pos)
                         # for typing_key due to performance reasons and due to their highly
                         # ephemeral nature.
                         #
@@ -274,7 +294,7 @@ class ApplicationServicesHandler:
                         # and, if they apply to this application service, send it off.
                         events = await self._handle_typing(service, new_token)
                         if events:
-                            self.scheduler.submit_ephemeral_events_for_as(service, events)
+                            self.scheduler.enqueue_for_appservice(service, ephemeral=events)
                         continue
 
                     # Since we read/update the stream position for this AS/stream
@@ -285,28 +305,37 @@ class ApplicationServicesHandler:
                     ):
                         if stream_key == "receipt_key":
                             events = await self._handle_receipts(service, new_token)
-                            if events:
-                                self.scheduler.submit_ephemeral_events_for_as(
-                                    service, events
-                                )
+                            self.scheduler.enqueue_for_appservice(service, ephemeral=events)
 
                             # Persist the latest handled stream token for this appservice
-                            await self.store.set_type_stream_id_for_appservice(
+                            await self.store.set_appservice_stream_type_pos(
                                 service, "read_receipt", new_token
                             )
 
                         elif stream_key == "presence_key":
                             events = await self._handle_presence(service, users, new_token)
-                            if events:
-                                self.scheduler.submit_ephemeral_events_for_as(
-                                    service, events
-                                )
+                            self.scheduler.enqueue_for_appservice(service, ephemeral=events)
 
                             # Persist the latest handled stream token for this appservice
-                            await self.store.set_type_stream_id_for_appservice(
+                            await self.store.set_appservice_stream_type_pos(
                                 service, "presence", new_token
                             )
 
+                        elif stream_key == "to_device_key":
+                            # Retrieve a list of to-device message events, as well as the
+                            # maximum stream token of the messages we were able to retrieve.
+                            to_device_messages = await self._get_to_device_messages(
+                                service, new_token, users
+                            )
+                            self.scheduler.enqueue_for_appservice(
+                                service, to_device_messages=to_device_messages
+                            )
+
+                            # Persist the latest handled stream token for this appservice
+                            await self.store.set_appservice_stream_type_pos(
+                                service, "to_device", new_token
+                            )
+
     async def _handle_typing(
         self, service: ApplicationService, new_token: int
     ) -> List[JsonDict]:
@@ -440,6 +469,79 @@ class ApplicationServicesHandler:
 
         return events
 
+    async def _get_to_device_messages(
+        self,
+        service: ApplicationService,
+        new_token: int,
+        users: Collection[Union[str, UserID]],
+    ) -> List[JsonDict]:
+        """
+        Given an application service, determine which events it should receive
+        from those between the last-recorded to-device message stream token for this
+        appservice and the given stream token.
+
+        Args:
+            service: The application service to check for which events it should receive.
+            new_token: The latest to-device event stream token.
+            users: The users to be notified for the new to-device messages
+                (ie, the recipients of the messages).
+
+        Returns:
+            A list of JSON dictionaries containing data derived from the to-device events
+                that should be sent to the given application service.
+        """
+        # Get the stream token that this application service has processed up until
+        from_key = await self.store.get_type_stream_id_for_appservice(
+            service, "to_device"
+        )
+
+        # Filter out users that this appservice is not interested in
+        users_appservice_is_interested_in: List[str] = []
+        for user in users:
+            # FIXME: We should do this farther up the call stack. We currently repeat
+            #  this operation in _handle_presence.
+            if isinstance(user, UserID):
+                user = user.to_string()
+
+            if service.is_interested_in_user(user):
+                users_appservice_is_interested_in.append(user)
+
+        if not users_appservice_is_interested_in:
+            # Return early if the AS was not interested in any of these users
+            return []
+
+        # Retrieve the to-device messages for each user
+        recipient_device_to_messages = await self.store.get_messages_for_user_devices(
+            users_appservice_is_interested_in,
+            from_key,
+            new_token,
+        )
+
+        # According to MSC2409, we'll need to add 'to_user_id' and 'to_device_id' fields
+        # to the event JSON so that the application service will know which user/device
+        # combination this messages was intended for.
+        #
+        # So we mangle this dict into a flat list of to-device messages with the relevant
+        # user ID and device ID embedded inside each message dict.
+        message_payload: List[JsonDict] = []
+        for (
+            user_id,
+            device_id,
+        ), messages in recipient_device_to_messages.items():
+            for message_json in messages:
+                # Remove 'message_id' from the to-device message, as it's an internal ID
+                message_json.pop("message_id", None)
+
+                message_payload.append(
+                    {
+                        "to_user_id": user_id,
+                        "to_device_id": device_id,
+                        **message_json,
+                    }
+                )
+
+        return message_payload
+
     async def query_user_exists(self, user_id: str) -> bool:
         """Check if any application service knows this user_id exists.
 
@@ -547,7 +649,7 @@ class ApplicationServicesHandler:
         """Retrieve a list of application services interested in this event.
 
         Args:
-            event: The event to check. Can be None if alias_list is not.
+            event: The event to check.
         Returns:
             A list of services interested in this event based on the service regex.
         """
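The "mangling" step in `_get_to_device_messages` can be shown in isolation. A runnable sketch, using plain dicts in place of the store's return value: a `{(user_id, device_id): [message, ...]}` mapping is flattened into a list of message dicts carrying `to_user_id`/`to_device_id`, with the internal `message_id` stripped.

from typing import Any, Dict, List, Tuple

def flatten_to_device(
    recipient_device_to_messages: Dict[Tuple[str, str], List[Dict[str, Any]]]
) -> List[Dict[str, Any]]:
    message_payload: List[Dict[str, Any]] = []
    for (user_id, device_id), messages in recipient_device_to_messages.items():
        for message_json in messages:
            # message_id is an internal ID; the appservice never sees it
            message_json.pop("message_id", None)
            message_payload.append(
                {"to_user_id": user_id, "to_device_id": device_id, **message_json}
            )
    return message_payload

msgs = {("@u:hs", "DEV1"): [{"type": "m.test", "message_id": "abc", "content": {}}]}
print(flatten_to_device(msgs))
# [{'to_user_id': '@u:hs', 'to_device_id': 'DEV1', 'type': 'm.test', 'content': {}}]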
@@ -2064,6 +2064,7 @@ GET_USERNAME_FOR_REGISTRATION_CALLBACK = Callable[
     [JsonDict, JsonDict],
     Awaitable[Optional[str]],
 ]
+IS_3PID_ALLOWED_CALLBACK = Callable[[str, str, bool], Awaitable[bool]]
 
 
 class PasswordAuthProvider:
@@ -2079,6 +2080,7 @@ class PasswordAuthProvider:
         self.get_username_for_registration_callbacks: List[
             GET_USERNAME_FOR_REGISTRATION_CALLBACK
         ] = []
+        self.is_3pid_allowed_callbacks: List[IS_3PID_ALLOWED_CALLBACK] = []
 
         # Mapping from login type to login parameters
         self._supported_login_types: Dict[str, Iterable[str]] = {}
@@ -2090,6 +2092,7 @@ class PasswordAuthProvider:
         self,
         check_3pid_auth: Optional[CHECK_3PID_AUTH_CALLBACK] = None,
         on_logged_out: Optional[ON_LOGGED_OUT_CALLBACK] = None,
+        is_3pid_allowed: Optional[IS_3PID_ALLOWED_CALLBACK] = None,
        auth_checkers: Optional[
             Dict[Tuple[str, Tuple[str, ...]], CHECK_AUTH_CALLBACK]
         ] = None,
@@ -2145,6 +2148,9 @@ class PasswordAuthProvider:
                 get_username_for_registration,
             )
 
+        if is_3pid_allowed is not None:
+            self.is_3pid_allowed_callbacks.append(is_3pid_allowed)
+
     def get_supported_login_types(self) -> Mapping[str, Iterable[str]]:
         """Get the login types supported by this password provider
 
@@ -2343,3 +2349,41 @@ class PasswordAuthProvider:
                 raise SynapseError(code=500, msg="Internal Server Error")
 
         return None
+
+    async def is_3pid_allowed(
+        self,
+        medium: str,
+        address: str,
+        registration: bool,
+    ) -> bool:
+        """Check if the user can be allowed to bind a 3PID on this homeserver.
+
+        Args:
+            medium: The medium of the 3PID.
+            address: The address of the 3PID.
+            registration: Whether the 3PID is being bound when registering a new user.
+
+        Returns:
+            Whether the 3PID is allowed to be bound on this homeserver
+        """
+        for callback in self.is_3pid_allowed_callbacks:
+            try:
+                res = await callback(medium, address, registration)
+
+                if res is False:
+                    return res
+                elif not isinstance(res, bool):
+                    # mypy complains that this line is unreachable because it assumes the
+                    # data returned by the module fits the expected type. We just want
+                    # to make sure this is the case.
+                    logger.warning(  # type: ignore[unreachable]
+                        "Ignoring non-string value returned by"
+                        " is_3pid_allowed callback %s: %s",
+                        callback,
+                        res,
+                    )
+            except Exception as e:
+                logger.error("Module raised an exception in is_3pid_allowed: %s", e)
+                raise SynapseError(code=500, msg="Internal Server Error")
+
+        return True
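A hypothetical module callback matching the new IS_3PID_ALLOWED_CALLBACK signature above, (medium, address, registration) -> Awaitable[bool]. The example.com policy is invented for illustration, and a real module would hand such a coroutine to Synapse's password-auth-provider registration hooks rather than call it directly:

import asyncio

async def is_3pid_allowed(medium: str, address: str, registration: bool) -> bool:
    # invented policy: only allow email addresses on example.com to be bound
    # while registering a new account; allow everything else
    if medium == "email" and registration:
        return address.endswith("@example.com")
    return True

print(asyncio.run(is_3pid_allowed("email", "alice@example.com", True)))  # True
print(asyncio.run(is_3pid_allowed("email", "bob@elsewhere.net", True)))  # False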
@@ -495,13 +495,11 @@ class DeviceHandler(DeviceWorkerHandler):
             "Notifying about update %r/%r, ID: %r", user_id, device_id, position
         )
 
-        room_ids = await self.store.get_rooms_for_user(user_id)
-
         # specify the user ID too since the user should always get their own device list
         # updates, even if they aren't in any rooms.
-        self.notifier.on_new_event(
-            "device_list_key", position, users=[user_id], rooms=room_ids
-        )
+        users_to_notify = users_who_share_room.union({user_id})
+
+        self.notifier.on_new_event("device_list_key", position, users=users_to_notify)
 
         if hosts:
             logger.info(
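The new notification audience can be read off the replacement lines directly: everyone who shares a room with the user, plus the user themself, so a user with no rooms in common still sees their own device-list updates. A tiny sketch with literal sets (the user IDs are invented):

users_who_share_room = {"@bob:example.com", "@carol:example.com"}
user_id = "@alice:example.com"

users_to_notify = users_who_share_room.union({user_id})
print(sorted(users_to_notify))
# ['@alice:example.com', '@bob:example.com', '@carol:example.com']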
@@ -166,9 +166,14 @@ class FederationHandler:
         oldest_events_with_depth = (
             await self.store.get_oldest_event_ids_with_depth_in_room(room_id)
         )
-        insertion_events_to_be_backfilled = (
-            await self.store.get_insertion_event_backwards_extremities_in_room(room_id)
-        )
+
+        insertion_events_to_be_backfilled: Dict[str, int] = {}
+        if self.hs.config.experimental.msc2716_enabled:
+            insertion_events_to_be_backfilled = (
+                await self.store.get_insertion_event_backward_extremities_in_room(
+                    room_id
+                )
+            )
         logger.debug(
             "_maybe_backfill_inner: extremities oldest_events_with_depth=%s insertion_events_to_be_backfilled=%s",
             oldest_events_with_depth,
@@ -271,11 +276,12 @@ class FederationHandler:
         ]
 
         logger.debug(
-            "room_id: %s, backfill: current_depth: %s, limit: %s, max_depth: %s, extrems: %s filtered_sorted_extremeties_tuple: %s",
+            "room_id: %s, backfill: current_depth: %s, limit: %s, max_depth: %s, extrems (%d): %s filtered_sorted_extremeties_tuple: %s",
             room_id,
             current_depth,
             limit,
             max_depth,
+            len(sorted_extremeties_tuple),
             sorted_extremeties_tuple,
             filtered_sorted_extremeties_tuple,
         )
@@ -1047,6 +1053,19 @@ class FederationHandler:
         limit = min(limit, 100)
 
         events = await self.store.get_backfill_events(room_id, pdu_list, limit)
+        logger.debug(
+            "on_backfill_request: backfill events=%s",
+            [
+                "event_id=%s,depth=%d,body=%s,prevs=%s\n"
+                % (
+                    event.event_id,
+                    event.depth,
+                    event.content.get("body", event.type),
+                    event.prev_event_ids(),
+                )
+                for event in events
+            ],
+        )
 
         events = await filter_events_for_server(self.storage, origin, events)
 
@@ -508,7 +508,11 @@ class FederationEventHandler:
                     f"room {ev.room_id}, when we were backfilling in {room_id}"
                 )
 
-        await self._process_pulled_events(dest, events, backfilled=True)
+        await self._process_pulled_events(
+            dest,
+            events,
+            backfilled=True,
+        )
 
     async def _get_missing_events_for_pdu(
         self, origin: str, pdu: EventBase, prevs: Set[str], min_depth: int
@@ -626,11 +630,24 @@ class FederationEventHandler:
             backfilled: True if this is part of a historical batch of events (inhibits
                 notification to clients, and validation of device keys.)
         """
+        logger.debug(
+            "processing pulled backfilled=%s events=%s",
+            backfilled,
+            [
+                "event_id=%s,depth=%d,body=%s,prevs=%s\n"
+                % (
+                    event.event_id,
+                    event.depth,
+                    event.content.get("body", event.type),
+                    event.prev_event_ids(),
+                )
+                for event in events
+            ],
+        )
 
         # We want to sort these by depth so we process them and
         # tell clients about them in order.
         sorted_events = sorted(events, key=lambda x: x.depth)
 
         for ev in sorted_events:
             with nested_logging_context(ev.event_id):
                 await self._process_pulled_event(origin, ev, backfilled=backfilled)
@@ -992,6 +1009,8 @@ class FederationEventHandler:
 
         await self._run_push_actions_and_persist_event(event, context, backfilled)
 
+        await self._handle_marker_event(origin, event)
+
         if backfilled or context.rejected:
             return
 
@@ -1071,8 +1090,6 @@ class FederationEventHandler:
                 event.sender,
             )
 
-        await self._handle_marker_event(origin, event)
-
     async def _resync_device(self, sender: str) -> None:
         """We have detected that the device list for the given user may be out
         of sync, so we try and resync them.
@@ -1323,7 +1340,14 @@ class FederationEventHandler:
             return event, context
 
         events_to_persist = (x for x in (prep(event) for event in fetched_events) if x)
-        await self.persist_events_and_notify(room_id, tuple(events_to_persist))
+        await self.persist_events_and_notify(
+            room_id,
+            tuple(events_to_persist),
+            # Mark these events backfilled as they're historic events that will
+            # eventually be backfilled. For example, missing events we fetch
+            # during backfill should be marked as backfilled as well.
+            backfilled=True,
+        )
 
     async def _check_event_auth(
         self,
@@ -490,12 +490,12 @@ class EventCreationHandler:
         requester: Requester,
         event_dict: dict,
         txn_id: Optional[str] = None,
+        allow_no_prev_events: bool = False,
         prev_event_ids: Optional[List[str]] = None,
         auth_event_ids: Optional[List[str]] = None,
         require_consent: bool = True,
         outlier: bool = False,
         historical: bool = False,
-        allow_no_prev_events: bool = False,
         depth: Optional[int] = None,
     ) -> Tuple[EventBase, EventContext]:
         """
@@ -510,6 +510,10 @@ class EventCreationHandler:
             requester
             event_dict: An entire event
             txn_id
+            allow_no_prev_events: Whether to allow this event to be created an empty
+                list of prev_events. Normally this is prohibited just because most
+                events should have a prev_event and we should only use this in special
+                cases like MSC2716.
             prev_event_ids:
                 the forward extremities to use as the prev_events for the
                 new event.
@@ -604,10 +608,10 @@ class EventCreationHandler:
             event, context = await self.create_new_client_event(
                 builder=builder,
                 requester=requester,
+                allow_no_prev_events=allow_no_prev_events,
                 prev_event_ids=prev_event_ids,
                 auth_event_ids=auth_event_ids,
                 depth=depth,
-                allow_no_prev_events=allow_no_prev_events,
             )
 
         # In an ideal world we wouldn't need the second part of this condition. However,
@@ -764,6 +768,7 @@ class EventCreationHandler:
         self,
         requester: Requester,
         event_dict: dict,
+        allow_no_prev_events: bool = False,
         prev_event_ids: Optional[List[str]] = None,
         auth_event_ids: Optional[List[str]] = None,
         ratelimit: bool = True,
@@ -781,6 +786,10 @@ class EventCreationHandler:
         Args:
             requester: The requester sending the event.
             event_dict: An entire event.
+            allow_no_prev_events: Whether to allow this event to be created an empty
+                list of prev_events. Normally this is prohibited just because most
+                events should have a prev_event and we should only use this in special
+                cases like MSC2716.
             prev_event_ids:
                 The event IDs to use as the prev events.
                 Should normally be left as None to automatically request them
@@ -880,16 +889,20 @@ class EventCreationHandler:
         self,
         builder: EventBuilder,
         requester: Optional[Requester] = None,
+        allow_no_prev_events: bool = False,
         prev_event_ids: Optional[List[str]] = None,
         auth_event_ids: Optional[List[str]] = None,
         depth: Optional[int] = None,
-        allow_no_prev_events: bool = False,
     ) -> Tuple[EventBase, EventContext]:
         """Create a new event for a local client
 
         Args:
             builder:
             requester:
+            allow_no_prev_events: Whether to allow this event to be created an empty
+                list of prev_events. Normally this is prohibited just because most
+                events should have a prev_event and we should only use this in special
+                cases like MSC2716.
             prev_event_ids:
                 the forward extremities to use as the prev_events for the
                 new event.
@@ -908,7 +921,6 @@ class EventCreationHandler:
         Returns:
             Tuple of created event, context
         """
-
         # Strip down the auth_event_ids to only what we need to auth the event.
         # For example, we don't need extra m.room.member that don't match event.sender
         full_state_ids_at_event = None
@@ -544,9 +544,9 @@ class OidcProvider:
         """
         metadata = await self.load_metadata()
         token_endpoint = metadata.get("token_endpoint")
-        raw_headers = {
+        raw_headers: Dict[str, str] = {
             "Content-Type": "application/x-www-form-urlencoded",
-            "User-Agent": self._http_client.user_agent,
+            "User-Agent": self._http_client.user_agent.decode("ascii"),
             "Accept": "application/json",
         }
 
@@ -694,11 +694,6 @@ class RoomCreationHandler:
 
         if not is_requester_admin and not (
             await self.spam_checker.user_may_create_room(user_id)
-            and await self.spam_checker.user_may_create_room_with_invites(
-                user_id,
-                invite_list,
-                invite_3pid_list,
-            )
         ):
             raise SynapseError(
                 403, "You are not permitted to create rooms", Codes.FORBIDDEN
@ -13,10 +13,6 @@ if TYPE_CHECKING:
|
|||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
def generate_fake_event_id() -> str:
|
|
||||||
return "$fake_" + random_string(43)
|
|
||||||
|
|
||||||
|
|
||||||
class RoomBatchHandler:
|
class RoomBatchHandler:
|
||||||
def __init__(self, hs: "HomeServer"):
|
def __init__(self, hs: "HomeServer"):
|
||||||
self.hs = hs
|
self.hs = hs
|
||||||
@ -182,11 +178,12 @@ class RoomBatchHandler:
|
|||||||
state_event_ids_at_start = []
|
state_event_ids_at_start = []
|
||||||
auth_event_ids = initial_auth_event_ids.copy()
|
auth_event_ids = initial_auth_event_ids.copy()
|
||||||
|
|
||||||
# Make the state events float off on their own so we don't have a
|
# Make the state events float off on their own by specifying no
|
||||||
# bunch of `@mxid joined the room` noise between each batch
|
# prev_events for the first one in the chain so we don't have a bunch of
|
||||||
prev_event_id_for_state_chain = generate_fake_event_id()
|
# `@mxid joined the room` noise between each batch.
|
||||||
|
prev_event_ids_for_state_chain: List[str] = []
|
||||||
|
|
||||||
for state_event in state_events_at_start:
|
for index, state_event in enumerate(state_events_at_start):
|
||||||
assert_params_in_dict(
|
assert_params_in_dict(
|
||||||
state_event, ["type", "origin_server_ts", "content", "sender"]
|
state_event, ["type", "origin_server_ts", "content", "sender"]
|
||||||
)
|
)
|
||||||
@ -222,7 +219,10 @@ class RoomBatchHandler:
|
|||||||
content=event_dict["content"],
|
content=event_dict["content"],
|
||||||
outlier=True,
|
outlier=True,
|
||||||
historical=True,
|
historical=True,
|
||||||
prev_event_ids=[prev_event_id_for_state_chain],
|
# Only the first event in the chain should be floating.
|
||||||
|
# The rest should hang off each other in a chain.
|
||||||
|
allow_no_prev_events=index == 0,
|
||||||
|
prev_event_ids=prev_event_ids_for_state_chain,
|
||||||
# Make sure to use a copy of this list because we modify it
|
# Make sure to use a copy of this list because we modify it
|
||||||
# later in the loop here. Otherwise it will be the same
|
# later in the loop here. Otherwise it will be the same
|
||||||
# reference and also update in the event when we append later.
|
# reference and also update in the event when we append later.
|
||||||
@ -242,7 +242,10 @@ class RoomBatchHandler:
|
|||||||
event_dict,
|
event_dict,
|
||||||
outlier=True,
|
outlier=True,
|
||||||
historical=True,
|
historical=True,
|
||||||
prev_event_ids=[prev_event_id_for_state_chain],
|
# Only the first event in the chain should be floating.
|
||||||
|
# The rest should hang off each other in a chain.
|
||||||
|
allow_no_prev_events=index == 0,
|
||||||
|
prev_event_ids=prev_event_ids_for_state_chain,
|
||||||
# Make sure to use a copy of this list because we modify it
|
# Make sure to use a copy of this list because we modify it
|
||||||
# later in the loop here. Otherwise it will be the same
|
# later in the loop here. Otherwise it will be the same
|
||||||
# reference and also update in the event when we append later.
|
# reference and also update in the event when we append later.
|
||||||
@ -253,7 +256,7 @@ class RoomBatchHandler:
|
|||||||
state_event_ids_at_start.append(event_id)
|
state_event_ids_at_start.append(event_id)
|
||||||
auth_event_ids.append(event_id)
|
auth_event_ids.append(event_id)
|
||||||
# Connect all the state in a floating chain
|
# Connect all the state in a floating chain
|
||||||
prev_event_id_for_state_chain = event_id
|
prev_event_ids_for_state_chain = [event_id]
|
||||||
|
|
||||||
return state_event_ids_at_start
|
return state_event_ids_at_start
|
||||||
|
|
||||||
@ -261,7 +264,6 @@ class RoomBatchHandler:
|
|||||||
self,
|
self,
|
||||||
events_to_create: List[JsonDict],
|
events_to_create: List[JsonDict],
|
||||||
room_id: str,
|
room_id: str,
|
||||||
initial_prev_event_ids: List[str],
|
|
||||||
inherited_depth: int,
|
inherited_depth: int,
|
||||||
auth_event_ids: List[str],
|
auth_event_ids: List[str],
|
||||||
app_service_requester: Requester,
|
app_service_requester: Requester,
|
||||||
@ -277,9 +279,6 @@ class RoomBatchHandler:
|
|||||||
events_to_create: List of historical events to create in JSON
|
events_to_create: List of historical events to create in JSON
|
||||||
dictionary format.
|
dictionary format.
|
||||||
room_id: Room where you want the events persisted in.
|
room_id: Room where you want the events persisted in.
|
||||||
initial_prev_event_ids: These will be the prev_events for the first
|
|
||||||
event created. Each event created afterwards will point to the
|
|
||||||
previous event created.
|
|
||||||
inherited_depth: The depth to create the events at (you will
|
inherited_depth: The depth to create the events at (you will
|
||||||
probably by calling inherit_depth_from_prev_ids(...)).
|
probably by calling inherit_depth_from_prev_ids(...)).
|
||||||
auth_event_ids: Define which events allow you to create the given
|
auth_event_ids: Define which events allow you to create the given
|
||||||
@ -291,11 +290,14 @@ class RoomBatchHandler:
|
|||||||
"""
|
"""
|
||||||
assert app_service_requester.app_service
|
assert app_service_requester.app_service
|
||||||
|
|
||||||
prev_event_ids = initial_prev_event_ids.copy()
|
# Make the historical event chain float off on its own by specifying no
|
||||||
|
# prev_events for the first event in the chain which causes the HS to
|
||||||
|
# ask for the state at the start of the batch later.
|
||||||
|
prev_event_ids: List[str] = []
|
||||||
|
|
||||||
event_ids = []
|
event_ids = []
|
||||||
events_to_persist = []
|
events_to_persist = []
|
||||||
for ev in events_to_create:
|
for index, ev in enumerate(events_to_create):
|
||||||
assert_params_in_dict(ev, ["type", "origin_server_ts", "content", "sender"])
|
assert_params_in_dict(ev, ["type", "origin_server_ts", "content", "sender"])
|
||||||
|
|
||||||
assert self.hs.is_mine_id(ev["sender"]), "User must be our own: %s" % (
|
assert self.hs.is_mine_id(ev["sender"]), "User must be our own: %s" % (
|
||||||
@ -319,6 +321,9 @@ class RoomBatchHandler:
|
|||||||
ev["sender"], app_service_requester.app_service
|
ev["sender"], app_service_requester.app_service
|
||||||
),
|
),
|
||||||
event_dict,
|
event_dict,
|
||||||
|
# Only the first event in the chain should be floating.
|
||||||
|
# The rest should hang off each other in a chain.
|
||||||
|
allow_no_prev_events=index == 0,
|
||||||
prev_event_ids=event_dict.get("prev_events"),
|
prev_event_ids=event_dict.get("prev_events"),
|
||||||
auth_event_ids=auth_event_ids,
|
auth_event_ids=auth_event_ids,
|
||||||
historical=True,
|
historical=True,
|
||||||
@@ -370,7 +375,6 @@ class RoomBatchHandler:
         events_to_create: List[JsonDict],
         room_id: str,
         batch_id_to_connect_to: str,
-        initial_prev_event_ids: List[str],
         inherited_depth: int,
         auth_event_ids: List[str],
         app_service_requester: Requester,
@@ -385,9 +389,6 @@ class RoomBatchHandler:
             room_id: Room where you want the events created in.
             batch_id_to_connect_to: The batch_id from the insertion event you
                 want this batch to connect to.
-            initial_prev_event_ids: These will be the prev_events for the first
-                event created. Each event created afterwards will point to the
-                previous event created.
             inherited_depth: The depth to create the events at (you will
                 probably by calling inherit_depth_from_prev_ids(...)).
             auth_event_ids: Define which events allow you to create the given
@@ -436,7 +437,6 @@ class RoomBatchHandler:
         event_ids = await self.persist_historical_events(
             events_to_create=events_to_create,
             room_id=room_id,
-            initial_prev_event_ids=initial_prev_event_ids,
             inherited_depth=inherited_depth,
             auth_event_ids=auth_event_ids,
             app_service_requester=app_service_requester,

@@ -116,6 +116,13 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             burst_count=hs.config.ratelimiting.rc_invites_per_user.burst_count,
         )

+        self._third_party_invite_limiter = Ratelimiter(
+            store=self.store,
+            clock=self.clock,
+            rate_hz=hs.config.ratelimiting.rc_third_party_invite.per_second,
+            burst_count=hs.config.ratelimiting.rc_third_party_invite.burst_count,
+        )
+
         self.request_ratelimiter = hs.get_request_ratelimiter()

     @abc.abstractmethod
@@ -261,7 +268,8 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         target: UserID,
         room_id: str,
         membership: str,
-        prev_event_ids: List[str],
+        allow_no_prev_events: bool = False,
+        prev_event_ids: Optional[List[str]] = None,
         auth_event_ids: Optional[List[str]] = None,
         txn_id: Optional[str] = None,
         ratelimit: bool = True,
@@ -279,8 +287,12 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             target:
             room_id:
             membership:
-            prev_event_ids: The event IDs to use as the prev events

+            allow_no_prev_events: Whether to allow this event to be created an empty
+                list of prev_events. Normally this is prohibited just because most
+                events should have a prev_event and we should only use this in special
+                cases like MSC2716.
+            prev_event_ids: The event IDs to use as the prev events
             auth_event_ids:
                 The event ids to use as the auth_events for the new event.
                 Should normally be left as None, which will cause them to be calculated
@@ -337,6 +349,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
                 "membership": membership,
             },
             txn_id=txn_id,
+            allow_no_prev_events=allow_no_prev_events,
             prev_event_ids=prev_event_ids,
             auth_event_ids=auth_event_ids,
             require_consent=require_consent,
@@ -439,6 +452,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         require_consent: bool = True,
         outlier: bool = False,
         historical: bool = False,
+        allow_no_prev_events: bool = False,
         prev_event_ids: Optional[List[str]] = None,
         auth_event_ids: Optional[List[str]] = None,
     ) -> Tuple[str, int]:
@@ -463,6 +477,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             historical: Indicates whether the message is being inserted
                 back in time around some existing events. This is used to skip
                 a few checks and mark the event as backfilled.
+            allow_no_prev_events: Whether to allow this event to be created an empty
+                list of prev_events. Normally this is prohibited just because most
+                events should have a prev_event and we should only use this in special
+                cases like MSC2716.
             prev_event_ids: The event IDs to use as the prev events
             auth_event_ids:
                 The event ids to use as the auth_events for the new event.
@@ -497,6 +515,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
                 require_consent=require_consent,
                 outlier=outlier,
                 historical=historical,
+                allow_no_prev_events=allow_no_prev_events,
                 prev_event_ids=prev_event_ids,
                 auth_event_ids=auth_event_ids,
             )
@@ -518,6 +537,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         require_consent: bool = True,
         outlier: bool = False,
         historical: bool = False,
+        allow_no_prev_events: bool = False,
         prev_event_ids: Optional[List[str]] = None,
         auth_event_ids: Optional[List[str]] = None,
     ) -> Tuple[str, int]:
@@ -544,6 +564,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             historical: Indicates whether the message is being inserted
                 back in time around some existing events. This is used to skip
                 a few checks and mark the event as backfilled.
+            allow_no_prev_events: Whether to allow this event to be created an empty
+                list of prev_events. Normally this is prohibited just because most
+                events should have a prev_event and we should only use this in special
+                cases like MSC2716.
             prev_event_ids: The event IDs to use as the prev events
             auth_event_ids:
                 The event ids to use as the auth_events for the new event.
@@ -673,6 +697,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             membership=effective_membership_state,
             txn_id=txn_id,
             ratelimit=ratelimit,
+            allow_no_prev_events=allow_no_prev_events,
             prev_event_ids=prev_event_ids,
             auth_event_ids=auth_event_ids,
             content=content,
@@ -1295,7 +1320,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):

         # We need to rate limit *before* we send out any 3PID invites, so we
         # can't just rely on the standard ratelimiting of events.
-        await self.request_ratelimiter.ratelimit(requester)
+        await self._third_party_invite_limiter.ratelimit(requester)

         can_invite = await self.third_party_event_rules.check_threepid_can_be_invited(
             medium, address, room_id
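
The 3PID-invite path now has a dedicated limiter driven by the new `rc_third_party_invite` ratelimiting settings, rather than sharing the generic request limiter. A self-contained sketch of the token-bucket behaviour such a limiter implements (a simplified stand-in, not Synapse's real `Ratelimiter` class):

    import time

    class SimpleRatelimiter:
        # Allows `burst_count` actions at once, refilling at `rate_hz` per second.
        def __init__(self, rate_hz: float, burst_count: int):
            self.rate_hz = rate_hz
            self.burst_count = burst_count
            self.tokens = float(burst_count)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(
                self.burst_count, self.tokens + (now - self.last) * self.rate_hz
            )
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False
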

@@ -43,6 +43,8 @@ class SearchHandler:
         self.state_store = self.storage.state
         self.auth = hs.get_auth()

+        self._msc3666_enabled = hs.config.experimental.msc3666_enabled
+
     async def get_old_rooms_from_upgraded_room(self, room_id: str) -> Iterable[str]:
         """Retrieves room IDs of old rooms in the history of an upgraded room.

@@ -238,8 +240,6 @@ class SearchHandler:

             results = search_result["results"]

-            results_map = {r["event"].event_id: r for r in results}
-
             rank_map.update({r["event"].event_id: r["rank"] for r in results})

             filtered_events = await search_filter.filter([r["event"] for r in results])
@@ -420,12 +420,29 @@ class SearchHandler:

         time_now = self.clock.time_msec()

+        aggregations = None
+        if self._msc3666_enabled:
+            aggregations = await self.store.get_bundled_aggregations(
+                # Generate an iterable of EventBase for all the events that will be
+                # returned, including contextual events.
+                itertools.chain(
+                    # The events_before and events_after for each context.
+                    itertools.chain.from_iterable(
+                        itertools.chain(context["events_before"], context["events_after"])  # type: ignore[arg-type]
+                        for context in contexts.values()
+                    ),
+                    # The returned events.
+                    allowed_events,
+                ),
+                user.to_string(),
+            )
+
         for context in contexts.values():
             context["events_before"] = self._event_serializer.serialize_events(
-                context["events_before"], time_now  # type: ignore[arg-type]
+                context["events_before"], time_now, bundle_aggregations=aggregations  # type: ignore[arg-type]
             )
             context["events_after"] = self._event_serializer.serialize_events(
-                context["events_after"], time_now  # type: ignore[arg-type]
+                context["events_after"], time_now, bundle_aggregations=aggregations  # type: ignore[arg-type]
             )

         state_results = {}
@@ -442,7 +459,9 @@ class SearchHandler:
             results.append(
                 {
                     "rank": rank_map[e.event_id],
-                    "result": self._event_serializer.serialize_event(e, time_now),
+                    "result": self._event_serializer.serialize_event(
+                        e, time_now, bundle_aggregations=aggregations
+                    ),
                    "context": contexts.get(e.event_id, {}),
                 }
             )
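
The nested `itertools.chain` above flattens every event that will appear in the response -- the before/after context of each hit plus the hits themselves -- into a single iterable, so the bundled aggregations can be fetched in one store call. A runnable toy showing the same shape (the event IDs are made up):

    import itertools

    contexts = {
        "$hit1": {"events_before": ["$a", "$b"], "events_after": ["$c"]},
        "$hit2": {"events_before": [], "events_after": ["$d"]},
    }
    allowed_events = ["$hit1", "$hit2"]

    everything = itertools.chain(
        itertools.chain.from_iterable(
            itertools.chain(ctx["events_before"], ctx["events_after"])
            for ctx in contexts.values()
        ),
        allowed_events,
    )
    print(list(everything))  # ['$a', '$b', '$c', '$d', '$hit1', '$hit2']
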

@@ -106,7 +106,7 @@ async def _sendmail(
    factory = build_sender_factory(hostname=smtphost if enable_tls else None)

    reactor.connectTCP(
-        smtphost,  # type: ignore[arg-type]
+        smtphost,
        smtpport,
        factory,
        timeout=30,

@@ -1348,8 +1348,8 @@ class SyncHandler:
        if sync_result_builder.since_token is not None:
            since_stream_id = int(sync_result_builder.since_token.to_device_key)

-        if since_stream_id != int(now_token.to_device_key):
-            messages, stream_id = await self.store.get_new_messages_for_device(
+        if device_id is not None and since_stream_id != int(now_token.to_device_key):
+            messages, stream_id = await self.store.get_messages_for_device(
                user_id, device_id, since_stream_id, now_token.to_device_key
            )

@@ -446,7 +446,7 @@ class TypingWriterHandler(FollowerTypingHandler):

 class TypingNotificationEventSource(EventSource[int, JsonDict]):
     def __init__(self, hs: "HomeServer"):
-        self.hs = hs
+        self._main_store = hs.get_datastore()
         self.clock = hs.get_clock()
         # We can't call get_typing_handler here because there's a cycle:
         #
@@ -487,7 +487,7 @@ class TypingNotificationEventSource(EventSource[int, JsonDict]):
                continue

                if not await service.matches_user_in_member_list(
-                    room_id, handler.store
+                    room_id, self._main_store
                ):
                    continue

@@ -38,4 +38,4 @@ class UIAuthSessionDataConstants:
    # used during registration to store the registration token used (if required) so that:
    # - we can prevent a token being used twice by one session
    # - we can 'use up' the token after registration has successfully completed
-    REGISTRATION_TOKEN = "org.matrix.msc3231.login.registration_token"
+    REGISTRATION_TOKEN = "m.login.registration_token"
@@ -20,6 +20,7 @@ from typing import (
     TYPE_CHECKING,
     Any,
     BinaryIO,
+    Callable,
     Dict,
     Iterable,
     List,
@@ -321,21 +322,20 @@ class SimpleHttpClient:
         self._ip_whitelist = ip_whitelist
         self._ip_blacklist = ip_blacklist
         self._extra_treq_args = treq_args or {}

-        self.user_agent = hs.version_string
         self.clock = hs.get_clock()

+        user_agent = hs.version_string
         if hs.config.server.user_agent_suffix:
-            self.user_agent = "%s %s" % (
-                self.user_agent,
+            user_agent = "%s %s" % (
+                user_agent,
                 hs.config.server.user_agent_suffix,
             )
+        self.user_agent = user_agent.encode("ascii")

         # We use this for our body producers to ensure that they use the correct
         # reactor.
         self._cooperator = Cooperator(scheduler=_make_scheduler(hs.get_reactor()))

-        self.user_agent = self.user_agent.encode("ascii")
-
         if self._ip_blacklist:
             # If we have an IP blacklist, we need to use a DNS resolver which
             # filters out blacklisted IP addresses, to prevent DNS rebinding.
@@ -693,12 +693,18 @@ class SimpleHttpClient:
         output_stream: BinaryIO,
         max_size: Optional[int] = None,
         headers: Optional[RawHeaders] = None,
+        is_allowed_content_type: Optional[Callable[[str], bool]] = None,
     ) -> Tuple[int, Dict[bytes, List[bytes]], str, int]:
         """GETs a file from a given URL
         Args:
             url: The URL to GET
             output_stream: File to write the response body to.
             headers: A map from header name to a list of values for that header
+            is_allowed_content_type: A predicate to determine whether the
+                content type of the file we're downloading is allowed. If set and
+                it evaluates to False when called with the content type, the
+                request will be terminated before completing the download by
+                raising SynapseError.
         Returns:
             A tuple of the file length, dict of the response
             headers, absolute URI of the response and HTTP response code.
@@ -726,6 +732,17 @@ class SimpleHttpClient:
                 HTTPStatus.BAD_GATEWAY, "Got error %d" % (response.code,), Codes.UNKNOWN
             )

+        if is_allowed_content_type and b"Content-Type" in resp_headers:
+            content_type = resp_headers[b"Content-Type"][0].decode("ascii")
+            if not is_allowed_content_type(content_type):
+                raise SynapseError(
+                    HTTPStatus.BAD_GATEWAY,
+                    (
+                        "Requested file's content type not allowed for this operation: %s"
+                        % content_type
+                    ),
+                )
+
         # TODO: if our Content-Type is HTML or something, just read the first
         # N bytes into RAM rather than saving it all to disk only to read it
         # straight back in again
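
With the new `is_allowed_content_type` hook, a caller can veto a download based on the Content-Type header before the body finishes streaming to disk. A hypothetical caller -- only `get_file` and its parameter names come from this diff, the rest is illustrative:

    def is_image(content_type: str) -> bool:
        return content_type.lower().startswith("image/")

    async def download_image(client, url, output_stream):
        length, headers, uri, code = await client.get_file(
            url,
            output_stream=output_stream,
            max_size=10 * 1024 * 1024,  # illustrative size cap
            is_allowed_content_type=is_image,
        )
        return length
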

@@ -334,12 +334,11 @@ class MatrixFederationHttpClient:
         user_agent = hs.version_string
         if hs.config.server.user_agent_suffix:
             user_agent = "%s %s" % (user_agent, hs.config.server.user_agent_suffix)
-        user_agent = user_agent.encode("ascii")

         federation_agent = MatrixFederationAgent(
             self.reactor,
             tls_client_options_factory,
-            user_agent,
+            user_agent.encode("ascii"),
             hs.config.server.federation_ip_range_whitelist,
             hs.config.server.federation_ip_range_blacklist,
         )

@@ -443,10 +443,14 @@ def start_active_span(
    start_time=None,
    ignore_active_span=False,
    finish_on_close=True,
+    *,
+    tracer=None,
 ):
-    """Starts an active opentracing span. Note, the scope doesn't become active
-    until it has been entered, however, the span starts from the time this
-    message is called.
+    """Starts an active opentracing span.
+
+    Records the start time for the span, and sets it as the "active span" in the
+    scope manager.
+
    Args:
        See opentracing.tracer
    Returns:
@@ -456,7 +460,11 @@ def start_active_span(
    if opentracing is None:
        return noop_context_manager()  # type: ignore[unreachable]

-    return opentracing.tracer.start_active_span(
+    if tracer is None:
+        # use the global tracer by default
+        tracer = opentracing.tracer
+
+    return tracer.start_active_span(
        operation_name,
        child_of=child_of,
        references=references,
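
`start_active_span` can now be pointed at a tracer other than the global one, which makes it possible to capture spans in isolation (for instance in tests). A sketch using the `MockTracer` shipped with the `opentracing` package -- the `tracer=` keyword matches this diff, while the workload is made up:

    from opentracing.mocktracer import MockTracer

    tracer = MockTracer()
    with start_active_span("my_operation", tracer=tracer):
        do_work()  # hypothetical workload
    print([span.operation_name for span in tracer.finished_spans()])
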
@@ -468,21 +476,42 @@ def start_active_span(


 def start_active_span_follows_from(
-    operation_name: str, contexts: Collection, inherit_force_tracing=False
+    operation_name: str,
+    contexts: Collection,
+    child_of=None,
+    start_time: Optional[float] = None,
+    *,
+    inherit_force_tracing=False,
+    tracer=None,
 ):
    """Starts an active opentracing span, with additional references to previous spans

    Args:
        operation_name: name of the operation represented by the new span
        contexts: the previous spans to inherit from
+
+        child_of: optionally override the parent span. If unset, the currently active
+            span will be the parent. (If there is no currently active span, the first
+            span in `contexts` will be the parent.)
+
+        start_time: optional override for the start time of the created span. Seconds
+            since the epoch.
+
        inherit_force_tracing: if set, and any of the previous contexts have had tracing
            forced, the new span will also have tracing forced.
+        tracer: override the opentracing tracer. By default the global tracer is used.
    """
    if opentracing is None:
        return noop_context_manager()  # type: ignore[unreachable]

    references = [opentracing.follows_from(context) for context in contexts]
-    scope = start_active_span(operation_name, references=references)
+    scope = start_active_span(
+        operation_name,
+        child_of=child_of,
+        references=references,
+        start_time=start_time,
+        tracer=tracer,
+    )

    if inherit_force_tracing and any(
        is_context_forced_tracing(ctx) for ctx in contexts

@@ -28,8 +28,9 @@ class LogContextScopeManager(ScopeManager):
    The LogContextScopeManager tracks the active scope in opentracing
    by using the log contexts which are native to synapse. This is so
    that the basic opentracing api can be used across twisted defereds.
-    (I would love to break logcontexts and this into an OS package. but
-    let's wait for twisted's contexts to be released.)
+
+    It would be nice just to use opentracing's ContextVarsScopeManager,
+    but currently that doesn't work due to https://twistedmatrix.com/trac/ticket/10301.
    """

    def __init__(self, config):
@@ -65,29 +66,45 @@ class LogContextScopeManager(ScopeManager):
            Scope.close() on the returned instance.
        """

-        enter_logcontext = False
        ctx = current_context()

        if not ctx:
-            # We don't want this scope to affect.
            logger.error("Tried to activate scope outside of loggingcontext")
            return Scope(None, span)  # type: ignore[arg-type]
-        elif ctx.scope is not None:
-            # We want the logging scope to look exactly the same so we give it
-            # a blank suffix
+
+        if ctx.scope is not None:
+            # start a new logging context as a child of the existing one.
+            # Doing so -- rather than updating the existing logcontext -- means that
+            # creating several concurrent spans under the same logcontext works
+            # correctly.
            ctx = nested_logging_context("")
            enter_logcontext = True
+        else:
+            # if there is no span currently associated with the current logcontext, we
+            # just store the scope in it.
+            #
+            # This feels a bit dubious, but it does hack around a problem where a
+            # span outlasts its parent logcontext (which would otherwise lead to
+            # "Re-starting finished log context" errors).
+            enter_logcontext = False

        scope = _LogContextScope(self, span, ctx, enter_logcontext, finish_on_close)
        ctx.scope = scope
+        if enter_logcontext:
+            ctx.__enter__()
+
        return scope


 class _LogContextScope(Scope):
    """
-    A custom opentracing scope. The only significant difference is that it will
-    close the log context it's related to if the logcontext was created specifically
-    for this scope.
+    A custom opentracing scope, associated with a LogContext
+
+    * filters out _DefGen_Return exceptions which arise from calling
+      `defer.returnValue` in Twisted code
+
+    * When the scope is closed, the logcontext's active scope is reset to None.
+      and - if enter_logcontext was set - the logcontext is finished too.
    """

    def __init__(self, manager, span, logcontext, enter_logcontext, finish_on_close):
@@ -101,8 +118,7 @@ class _LogContextScope(Scope):
            logcontext (LogContext):
                the logcontext to which this scope is attached.
            enter_logcontext (Boolean):
-                if True the logcontext will be entered and exited when the scope
-                is entered and exited respectively
+                if True the logcontext will be exited when the scope is finished
            finish_on_close (Boolean):
                if True finish the span when the scope is closed
        """
@@ -111,26 +127,28 @@ class _LogContextScope(Scope):
        self._finish_on_close = finish_on_close
        self._enter_logcontext = enter_logcontext

-    def __enter__(self):
-        if self._enter_logcontext:
-            self.logcontext.__enter__()
+    def __exit__(self, exc_type, value, traceback):
+        if exc_type == twisted.internet.defer._DefGen_Return:
+            # filter out defer.returnValue() calls
+            exc_type = value = traceback = None
+        super().__exit__(exc_type, value, traceback)

-        return self
+    def __str__(self):
+        return f"Scope<{self.span}>"
-    def __exit__(self, type, value, traceback):
-        if type == twisted.internet.defer._DefGen_Return:
-            super().__exit__(None, None, None)
-        else:
-            super().__exit__(type, value, traceback)
-            if self._enter_logcontext:
-                self.logcontext.__exit__(type, value, traceback)
-            else:  # the logcontext existed before the creation of the scope
-                self.logcontext.scope = None

    def close(self):
-        if self.manager.active is not self:
-            logger.error("Tried to close a non-active scope!")
-            return
+        active_scope = self.manager.active
+        if active_scope is not self:
+            logger.error(
+                "Closing scope %s which is not the currently-active one %s",
+                self,
+                active_scope,
+            )

        if self._finish_on_close:
            self.span.finish()

+        self.logcontext.scope = None
+
+        if self._enter_logcontext:
+            self.logcontext.__exit__(None, None, None)
@@ -30,9 +30,11 @@ from typing import (
     Type,
     TypeVar,
     Union,
+    cast,
 )

 import attr
+from matrix_common.versionstring import get_distribution_version_string
 from prometheus_client import CollectorRegistry, Counter, Gauge, Histogram, Metric
 from prometheus_client.core import (
     REGISTRY,
@@ -42,14 +44,14 @@ from prometheus_client.core import (

 from twisted.python.threadpool import ThreadPool

-import synapse.metrics._reactor_metrics
+# This module is imported for its side effects; flake8 needn't warn that it's unused.
+import synapse.metrics._reactor_metrics  # noqa: F401
 from synapse.metrics._exposition import (
     MetricsResource,
     generate_latest,
     start_http_server,
 )
 from synapse.metrics._gc import MIN_TIME_BETWEEN_GCS, install_gc_manager
-from synapse.util.versionstring import get_version_string

 logger = logging.getLogger(__name__)

@@ -60,7 +62,7 @@ all_gauges: "Dict[str, Union[LaterGauge, InFlightGauge]]" = {}
 HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")


-class RegistryProxy:
+class _RegistryProxy:
     @staticmethod
     def collect() -> Iterable[Metric]:
         for metric in REGISTRY.collect():
@@ -68,6 +70,13 @@ class RegistryProxy:
             yield metric


+# A little bit nasty, but collect() above is static so a Protocol doesn't work.
+# _RegistryProxy matches the signature of a CollectorRegistry instance enough
+# for it to be usable in the contexts in which we use it.
+# TODO Do something nicer about this.
+RegistryProxy = cast(CollectorRegistry, _RegistryProxy)
+
+
 @attr.s(slots=True, hash=True, auto_attribs=True)
 class LaterGauge:

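
`typing.cast` is purely a type-checker construct: at runtime it returns its second argument unchanged, which is why `_RegistryProxy` can keep masquerading as a `CollectorRegistry` exactly as before. A small standalone illustration of the trick:

    from typing import cast

    class Registry:  # the nominal type we pretend to be
        def collect(self):
            yield from ()

    class _Duck:  # matches the shape, but is not a Registry subclass
        @staticmethod
        def collect():
            yield from ()

    # No-op at runtime; type checkers now treat `proxy` as a Registry.
    proxy = cast(Registry, _Duck)
    assert proxy is _Duck
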
@@ -409,7 +418,7 @@ build_info = Gauge(
 )
 build_info.labels(
     " ".join([platform.python_implementation(), platform.python_version()]),
-    get_version_string(synapse),
+    get_distribution_version_string("matrix-synapse"),
     " ".join([platform.system(), platform.release()]),
 ).set(1)

@@ -48,7 +48,6 @@ from synapse.events.spamcheck import (
     CHECK_USERNAME_FOR_SPAM_CALLBACK,
     USER_MAY_CREATE_ROOM_ALIAS_CALLBACK,
     USER_MAY_CREATE_ROOM_CALLBACK,
-    USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK,
     USER_MAY_INVITE_CALLBACK,
     USER_MAY_JOIN_ROOM_CALLBACK,
     USER_MAY_PUBLISH_ROOM_CALLBACK,
@@ -72,6 +71,7 @@ from synapse.handlers.auth import (
     CHECK_3PID_AUTH_CALLBACK,
     CHECK_AUTH_CALLBACK,
     GET_USERNAME_FOR_REGISTRATION_CALLBACK,
+    IS_3PID_ALLOWED_CALLBACK,
     ON_LOGGED_OUT_CALLBACK,
     AuthHandler,
 )
@@ -211,14 +211,12 @@ class ModuleApi:

     def register_spam_checker_callbacks(
         self,
+        *,
         check_event_for_spam: Optional[CHECK_EVENT_FOR_SPAM_CALLBACK] = None,
         user_may_join_room: Optional[USER_MAY_JOIN_ROOM_CALLBACK] = None,
         user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
         user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,
         user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
-        user_may_create_room_with_invites: Optional[
-            USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
-        ] = None,
         user_may_create_room_alias: Optional[
             USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
         ] = None,
@@ -239,7 +237,6 @@ class ModuleApi:
             user_may_invite=user_may_invite,
             user_may_send_3pid_invite=user_may_send_3pid_invite,
             user_may_create_room=user_may_create_room,
-            user_may_create_room_with_invites=user_may_create_room_with_invites,
             user_may_create_room_alias=user_may_create_room_alias,
             user_may_publish_room=user_may_publish_room,
             check_username_for_spam=check_username_for_spam,
@@ -249,6 +246,7 @@ class ModuleApi:

     def register_account_validity_callbacks(
         self,
+        *,
         is_user_expired: Optional[IS_USER_EXPIRED_CALLBACK] = None,
         on_user_registration: Optional[ON_USER_REGISTRATION_CALLBACK] = None,
         on_legacy_send_mail: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None,
@@ -269,6 +267,7 @@ class ModuleApi:

     def register_third_party_rules_callbacks(
         self,
+        *,
         check_event_allowed: Optional[CHECK_EVENT_ALLOWED_CALLBACK] = None,
         on_create_room: Optional[ON_CREATE_ROOM_CALLBACK] = None,
         check_threepid_can_be_invited: Optional[
@@ -293,6 +292,7 @@ class ModuleApi:

     def register_presence_router_callbacks(
         self,
+        *,
         get_users_for_states: Optional[GET_USERS_FOR_STATES_CALLBACK] = None,
         get_interested_users: Optional[GET_INTERESTED_USERS_CALLBACK] = None,
     ) -> None:
@@ -307,11 +307,13 @@ class ModuleApi:

     def register_password_auth_provider_callbacks(
         self,
+        *,
         check_3pid_auth: Optional[CHECK_3PID_AUTH_CALLBACK] = None,
         on_logged_out: Optional[ON_LOGGED_OUT_CALLBACK] = None,
         auth_checkers: Optional[
             Dict[Tuple[str, Tuple[str, ...]], CHECK_AUTH_CALLBACK]
         ] = None,
+        is_3pid_allowed: Optional[IS_3PID_ALLOWED_CALLBACK] = None,
         get_username_for_registration: Optional[
             GET_USERNAME_FOR_REGISTRATION_CALLBACK
         ] = None,
@@ -323,12 +325,14 @@ class ModuleApi:
         return self._password_auth_provider.register_password_auth_provider_callbacks(
             check_3pid_auth=check_3pid_auth,
             on_logged_out=on_logged_out,
+            is_3pid_allowed=is_3pid_allowed,
             auth_checkers=auth_checkers,
             get_username_for_registration=get_username_for_registration,
         )

     def register_background_update_controller_callbacks(
         self,
+        *,
         on_update: ON_UPDATE_CALLBACK,
         default_batch_size: Optional[DEFAULT_BATCH_SIZE_CALLBACK] = None,
         min_batch_size: Optional[MIN_BATCH_SIZE_CALLBACK] = None,
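
The bare `*` added to each `register_*_callbacks` signature makes every callback parameter keyword-only, so a module can no longer pass callbacks positionally and silently break when a parameter is added or reordered. The mechanic in miniature (a standalone example, not Synapse's actual class):

    def register_callbacks(*, on_event=None, on_close=None):
        ...

    register_callbacks(on_event=print)  # OK
    register_callbacks(print)           # TypeError: takes 0 positional arguments
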
@@ -401,6 +405,32 @@ class ModuleApi:
         """
         return self._hs.config.email.email_app_name

+    @property
+    def server_name(self) -> str:
+        """The server name for the local homeserver.
+
+        Added in Synapse v1.53.0.
+        """
+        return self._server_name
+
+    @property
+    def worker_name(self) -> Optional[str]:
+        """The name of the worker this specific instance is running as per the
+        "worker_name" configuration setting, or None if it's the main process.
+
+        Added in Synapse v1.53.0.
+        """
+        return self._hs.config.worker.worker_name
+
+    @property
+    def worker_app(self) -> Optional[str]:
+        """The name of the worker app this specific instance is running as per the
+        "worker_app" configuration setting, or None if it's the main process.
+
+        Added in Synapse v1.53.0.
+        """
+        return self._hs.config.worker.worker_app
+
     async def get_userinfo_by_id(self, user_id: str) -> Optional[UserInfo]:
         """Get user info by user_id

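
A module can use these new properties to, for example, run a task only on the main process. A sketch of a module's constructor -- the `ModuleApi` attribute names follow this diff, everything else is illustrative:

    class MyModule:
        def __init__(self, config, api):
            # `api` is a synapse.module_api.ModuleApi instance.
            self._domain = api.server_name
            if api.worker_app is None and api.worker_name is None:
                # Main process only; workers skip this.
                self._start_background_task()  # hypothetical helper
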

@@ -14,6 +14,7 @@

 import logging
 from typing import (
+    TYPE_CHECKING,
     Awaitable,
     Callable,
     Collection,
@@ -32,7 +33,6 @@ from prometheus_client import Counter

 from twisted.internet import defer

-import synapse.server
 from synapse.api.constants import EventTypes, HistoryVisibility, Membership
 from synapse.api.errors import AuthError
 from synapse.events import EventBase
@@ -53,6 +53,9 @@ from synapse.util.async_helpers import ObservableDeferred, timeout_deferred
 from synapse.util.metrics import Measure
 from synapse.visibility import filter_events_for_client

+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
 logger = logging.getLogger(__name__)

 notified_events_counter = Counter("synapse_notifier_notified_events", "")
@@ -82,7 +85,7 @@ class _NotificationListener:

     __slots__ = ["deferred"]

-    def __init__(self, deferred):
+    def __init__(self, deferred: "defer.Deferred"):
         self.deferred = deferred


@@ -124,7 +127,7 @@ class _NotifierUserStream:
         stream_key: str,
         stream_id: Union[int, RoomStreamToken],
         time_now_ms: int,
-    ):
+    ) -> None:
         """Notify any listeners for this user of a new event from an
         event source.
         Args:
@@ -152,7 +155,7 @@ class _NotifierUserStream:
             self.notify_deferred = ObservableDeferred(defer.Deferred())
             noify_deferred.callback(self.current_token)

-    def remove(self, notifier: "Notifier"):
+    def remove(self, notifier: "Notifier") -> None:
         """Remove this listener from all the indexes in the Notifier
         it knows about.
         """
@@ -188,7 +191,7 @@ class EventStreamResult:
     start_token: StreamToken
     end_token: StreamToken

-    def __bool__(self):
+    def __bool__(self) -> bool:
         return bool(self.events)


@@ -212,7 +215,7 @@ class Notifier:

     UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000

-    def __init__(self, hs: "synapse.server.HomeServer"):
+    def __init__(self, hs: "HomeServer"):
         self.user_to_user_stream: Dict[str, _NotifierUserStream] = {}
         self.room_to_user_streams: Dict[str, Set[_NotifierUserStream]] = {}

@@ -248,7 +251,7 @@ class Notifier:
        # This is not a very cheap test to perform, but it's only executed
        # when rendering the metrics page, which is likely once per minute at
        # most when scraping it.
-        def count_listeners():
+        def count_listeners() -> int:
            all_user_streams: Set[_NotifierUserStream] = set()

            for streams in list(self.room_to_user_streams.values()):
@@ -270,7 +273,7 @@ class Notifier:
             "synapse_notifier_users", "", [], lambda: len(self.user_to_user_stream)
         )

-    def add_replication_callback(self, cb: Callable[[], None]):
+    def add_replication_callback(self, cb: Callable[[], None]) -> None:
         """Add a callback that will be called when some new data is available.
         Callback is not given any arguments. It should *not* return a Deferred - if
         it needs to do any asynchronous work, a background thread should be started and
@@ -284,7 +287,7 @@ class Notifier:
         event_pos: PersistedEventPosition,
         max_room_stream_token: RoomStreamToken,
         extra_users: Optional[Collection[UserID]] = None,
-    ):
+    ) -> None:
         """Unwraps event and calls `on_new_room_event_args`."""
         await self.on_new_room_event_args(
             event_pos=event_pos,
@@ -307,7 +310,7 @@ class Notifier:
         event_pos: PersistedEventPosition,
         max_room_stream_token: RoomStreamToken,
         extra_users: Optional[Collection[UserID]] = None,
-    ):
+    ) -> None:
         """Used by handlers to inform the notifier something has happened
         in the room, room event wise.

@@ -338,7 +341,9 @@ class Notifier:

         self.notify_replication()

-    def _notify_pending_new_room_events(self, max_room_stream_token: RoomStreamToken):
+    def _notify_pending_new_room_events(
+        self, max_room_stream_token: RoomStreamToken
+    ) -> None:
         """Notify for the room events that were queued waiting for a previous
         event to be persisted.
         Args:
@@ -374,7 +379,7 @@ class Notifier:
             )
             self._on_updated_room_token(max_room_stream_token)

-    def _on_updated_room_token(self, max_room_stream_token: RoomStreamToken):
+    def _on_updated_room_token(self, max_room_stream_token: RoomStreamToken) -> None:
         """Poke services that might care that the room position has been
         updated.
         """
@@ -386,13 +391,13 @@ class Notifier:
         if self.federation_sender:
             self.federation_sender.notify_new_events(max_room_stream_token)

-    def _notify_app_services(self, max_room_stream_token: RoomStreamToken):
+    def _notify_app_services(self, max_room_stream_token: RoomStreamToken) -> None:
         try:
             self.appservice_handler.notify_interested_services(max_room_stream_token)
         except Exception:
             logger.exception("Error notifying application services of event")

-    def _notify_pusher_pool(self, max_room_stream_token: RoomStreamToken):
+    def _notify_pusher_pool(self, max_room_stream_token: RoomStreamToken) -> None:
         try:
             self._pusher_pool.on_new_notifications(max_room_stream_token)
         except Exception:
@@ -461,7 +466,9 @@ class Notifier:
                     users,
                 )
             except Exception:
-                logger.exception("Error notifying application services of event")
+                logger.exception(
+                    "Error notifying application services of ephemeral events"
+                )

     def on_new_replication_data(self) -> None:
         """Used to inform replication listeners that something has happened
@@ -473,8 +480,8 @@ class Notifier:
         user_id: str,
         timeout: int,
         callback: Callable[[StreamToken, StreamToken], Awaitable[T]],
-        room_ids=None,
-        from_token=StreamToken.START,
+        room_ids: Optional[Collection[str]] = None,
+        from_token: StreamToken = StreamToken.START,
     ) -> T:
         """Wait until the callback returns a non empty response or the
         timeout fires.
@@ -698,14 +705,14 @@ class Notifier:
         for expired_stream in expired_streams:
             expired_stream.remove(self)

-    def _register_with_keys(self, user_stream: _NotifierUserStream):
+    def _register_with_keys(self, user_stream: _NotifierUserStream) -> None:
         self.user_to_user_stream[user_stream.user_id] = user_stream

         for room in user_stream.rooms:
             s = self.room_to_user_streams.setdefault(room, set())
             s.add(user_stream)

-    def _user_joined_room(self, user_id: str, room_id: str):
+    def _user_joined_room(self, user_id: str, room_id: str) -> None:
         new_user_stream = self.user_to_user_stream.get(user_id)
         if new_user_stream is not None:
             room_streams = self.room_to_user_streams.setdefault(room_id, set())
@@ -717,7 +724,7 @@ class Notifier:
         for cb in self.replication_callbacks:
             cb()

-    def notify_remote_server_up(self, server: str):
+    def notify_remote_server_up(self, server: str) -> None:
         """Notify any replication that a remote server has come back up"""
         # We call federation_sender directly rather than registering as a
         # callback as a) we already have a reference to it and b) it introduces

@@ -20,15 +20,11 @@ from typing import Any, Dict, List
 from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP


-def list_with_base_rules(
-    rawrules: List[Dict[str, Any]], use_new_defaults: bool = False
-) -> List[Dict[str, Any]]:
+def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
     """Combine the list of rules set by the user with the default push rules

     Args:
         rawrules: The rules the user has modified or set.
-        use_new_defaults: Whether to use the new experimental default rules when
-            appending or prepending default rules.

     Returns:
         A new list with the rules set by the user combined with the defaults.
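
With the experimental `use_new_defaults` path removed, `list_with_base_rules` simply merges the caller's rules into the single set of base rules. A toy call under the new signature (the rule dict is made up for illustration):

    user_rules = [
        {
            "rule_id": "net.example.rule.custom",
            "priority_class": 0,
            "conditions": [],
            "actions": ["notify"],
        }
    ]
    combined = list_with_base_rules(user_rules)
    # `combined` interleaves the BASE_* default rules around the user's
    # rule, ordered by priority class.
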
@@ -48,9 +44,7 @@ def list_with_base_rules(

     ruleslist.extend(
         make_base_prepend_rules(
-            PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
-            modified_base_rules,
-            use_new_defaults,
+            PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
         )
     )

@@ -61,7 +55,6 @@ def list_with_base_rules(
                 make_base_append_rules(
                     PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
                     modified_base_rules,
-                    use_new_defaults,
                 )
             )
             current_prio_class -= 1
@@ -70,7 +63,6 @@ def list_with_base_rules(
                     make_base_prepend_rules(
                         PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
                         modified_base_rules,
-                        use_new_defaults,
                     )
                 )

@@ -79,18 +71,14 @@ def list_with_base_rules(
     while current_prio_class > 0:
         ruleslist.extend(
             make_base_append_rules(
-                PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
-                modified_base_rules,
-                use_new_defaults,
+                PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
             )
         )
         current_prio_class -= 1
         if current_prio_class > 0:
             ruleslist.extend(
                 make_base_prepend_rules(
-                    PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
-                    modified_base_rules,
-                    use_new_defaults,
+                    PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
                 )
             )

@@ -98,24 +86,14 @@ def list_with_base_rules(


 def make_base_append_rules(
-    kind: str,
-    modified_base_rules: Dict[str, Dict[str, Any]],
-    use_new_defaults: bool = False,
+    kind: str, modified_base_rules: Dict[str, Dict[str, Any]]
 ) -> List[Dict[str, Any]]:
     rules = []

     if kind == "override":
-        rules = (
-            NEW_APPEND_OVERRIDE_RULES
-            if use_new_defaults
-            else BASE_APPEND_OVERRIDE_RULES
-        )
+        rules = BASE_APPEND_OVERRIDE_RULES
     elif kind == "underride":
-        rules = (
-            NEW_APPEND_UNDERRIDE_RULES
-            if use_new_defaults
-            else BASE_APPEND_UNDERRIDE_RULES
-        )
+        rules = BASE_APPEND_UNDERRIDE_RULES
     elif kind == "content":
         rules = BASE_APPEND_CONTENT_RULES

@@ -134,7 +112,6 @@ def make_base_append_rules(
 def make_base_prepend_rules(
     kind: str,
     modified_base_rules: Dict[str, Dict[str, Any]],
-    use_new_defaults: bool = False,
 ) -> List[Dict[str, Any]]:
     rules = []

@@ -301,135 +278,6 @@ BASE_APPEND_OVERRIDE_RULES = [
 ]


-NEW_APPEND_OVERRIDE_RULES = [
-    {
-        "rule_id": "global/override/.m.rule.encrypted",
-        "conditions": [
-            {
-                "kind": "event_match",
-                "key": "type",
-                "pattern": "m.room.encrypted",
-                "_id": "_encrypted",
-            }
-        ],
-        "actions": ["notify"],
-    },
-    {
-        "rule_id": "global/override/.m.rule.suppress_notices",
-        "conditions": [
-            {
-                "kind": "event_match",
-                "key": "type",
-                "pattern": "m.room.message",
-                "_id": "_suppress_notices_type",
-            },
-            {
-                "kind": "event_match",
-                "key": "content.msgtype",
-                "pattern": "m.notice",
-                "_id": "_suppress_notices",
-            },
-        ],
-        "actions": [],
-    },
-    {
-        "rule_id": "global/underride/.m.rule.suppress_edits",
-        "conditions": [
-            {
-                "kind": "event_match",
-                "key": "m.relates_to.m.rel_type",
-                "pattern": "m.replace",
-                "_id": "_suppress_edits",
-            }
-        ],
-        "actions": [],
-    },
-    {
-        "rule_id": "global/override/.m.rule.invite_for_me",
-        "conditions": [
-            {
-                "kind": "event_match",
-                "key": "type",
-                "pattern": "m.room.member",
-                "_id": "_member",
-            },
-            {
-                "kind": "event_match",
-                "key": "content.membership",
-                "pattern": "invite",
-                "_id": "_invite_member",
-            },
-            {"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
-        ],
-        "actions": ["notify", {"set_tweak": "sound", "value": "default"}],
-    },
-    {
-        "rule_id": "global/override/.m.rule.contains_display_name",
-        "conditions": [{"kind": "contains_display_name"}],
-        "actions": [
-            "notify",
-            {"set_tweak": "sound", "value": "default"},
-            {"set_tweak": "highlight"},
-        ],
-    },
-    {
-        "rule_id": "global/override/.m.rule.tombstone",
-        "conditions": [
-            {
-                "kind": "event_match",
-                "key": "type",
-                "pattern": "m.room.tombstone",
-                "_id": "_tombstone",
-            },
-            {
-                "kind": "event_match",
-                "key": "state_key",
-                "pattern": "",
-                "_id": "_tombstone_statekey",
-            },
-        ],
-        "actions": [
-            "notify",
-            {"set_tweak": "sound", "value": "default"},
-            {"set_tweak": "highlight"},
-        ],
-    },
-    {
-        "rule_id": "global/override/.m.rule.roomnotif",
-        "conditions": [
-            {
-                "kind": "event_match",
-                "key": "content.body",
-                "pattern": "@room",
-                "_id": "_roomnotif_content",
-            },
|
|
||||||
{
|
|
||||||
"kind": "sender_notification_permission",
|
|
||||||
"key": "room",
|
|
||||||
"_id": "_roomnotif_pl",
|
|
||||||
},
|
|
||||||
],
|
|
||||||
"actions": [
|
|
||||||
"notify",
|
|
||||||
{"set_tweak": "highlight"},
|
|
||||||
{"set_tweak": "sound", "value": "default"},
|
|
||||||
],
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"rule_id": "global/override/.m.rule.call",
|
|
||||||
"conditions": [
|
|
||||||
{
|
|
||||||
"kind": "event_match",
|
|
||||||
"key": "type",
|
|
||||||
"pattern": "m.call.invite",
|
|
||||||
"_id": "_call",
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"actions": ["notify", {"set_tweak": "sound", "value": "ring"}],
|
|
||||||
},
|
|
||||||
]
|
|
||||||
|
|
||||||
|
|
||||||
BASE_APPEND_UNDERRIDE_RULES = [
|
BASE_APPEND_UNDERRIDE_RULES = [
|
||||||
{
|
{
|
||||||
"rule_id": "global/underride/.m.rule.call",
|
"rule_id": "global/underride/.m.rule.call",
|
||||||
@ -538,36 +386,6 @@ BASE_APPEND_UNDERRIDE_RULES = [
|
|||||||
]
|
]
|
||||||
|
|
||||||
|
|
||||||
NEW_APPEND_UNDERRIDE_RULES = [
|
|
||||||
{
|
|
||||||
"rule_id": "global/underride/.m.rule.room_one_to_one",
|
|
||||||
"conditions": [
|
|
||||||
{"kind": "room_member_count", "is": "2", "_id": "member_count"},
|
|
||||||
{
|
|
||||||
"kind": "event_match",
|
|
||||||
"key": "content.body",
|
|
||||||
"pattern": "*",
|
|
||||||
"_id": "body",
|
|
||||||
},
|
|
||||||
],
|
|
||||||
"actions": ["notify", {"set_tweak": "sound", "value": "default"}],
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"rule_id": "global/underride/.m.rule.message",
|
|
||||||
"conditions": [
|
|
||||||
{
|
|
||||||
"kind": "event_match",
|
|
||||||
"key": "content.body",
|
|
||||||
"pattern": "*",
|
|
||||||
"_id": "body",
|
|
||||||
},
|
|
||||||
],
|
|
||||||
"actions": ["notify"],
|
|
||||||
"enabled": False,
|
|
||||||
},
|
|
||||||
]
|
|
||||||
|
|
||||||
|
|
||||||
BASE_RULE_IDS = set()
|
BASE_RULE_IDS = set()
|
||||||
|
|
||||||
for r in BASE_APPEND_CONTENT_RULES:
|
for r in BASE_APPEND_CONTENT_RULES:
|
||||||
@ -589,26 +407,3 @@ for r in BASE_APPEND_UNDERRIDE_RULES:
|
|||||||
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
|
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
|
||||||
r["default"] = True
|
r["default"] = True
|
||||||
BASE_RULE_IDS.add(r["rule_id"])
|
BASE_RULE_IDS.add(r["rule_id"])
|
||||||
|
|
||||||
|
|
||||||
NEW_RULE_IDS = set()
|
|
||||||
|
|
||||||
for r in BASE_APPEND_CONTENT_RULES:
|
|
||||||
r["priority_class"] = PRIORITY_CLASS_MAP["content"]
|
|
||||||
r["default"] = True
|
|
||||||
NEW_RULE_IDS.add(r["rule_id"])
|
|
||||||
|
|
||||||
for r in BASE_PREPEND_OVERRIDE_RULES:
|
|
||||||
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
|
|
||||||
r["default"] = True
|
|
||||||
NEW_RULE_IDS.add(r["rule_id"])
|
|
||||||
|
|
||||||
for r in NEW_APPEND_OVERRIDE_RULES:
|
|
||||||
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
|
|
||||||
r["default"] = True
|
|
||||||
NEW_RULE_IDS.add(r["rule_id"])
|
|
||||||
|
|
||||||
for r in NEW_APPEND_UNDERRIDE_RULES:
|
|
||||||
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
|
|
||||||
r["default"] = True
|
|
||||||
NEW_RULE_IDS.add(r["rule_id"])
|
|
||||||
|
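The hunks above delete the experimental "new defaults" path wholesale: the `use_new_defaults` flag, the `NEW_APPEND_*` rule lists, and the `NEW_RULE_IDS` registry, leaving the stable `BASE_*` lists as the only source of default push rules. For orientation, here is a simplified sketch of how the surviving code interleaves defaults with user rules by priority class (illustrative only; `combine_with_defaults` is not a real Synapse function, and `PRIORITY_CLASS_MAP` is a stand-in for the one in `synapse.push.rulekinds`):

```python
from typing import Any, Dict, List

Rule = Dict[str, Any]

# Stand-in for synapse.push.rulekinds.PRIORITY_CLASS_MAP.
PRIORITY_CLASS_MAP = {"underride": 1, "sender": 2, "room": 3, "content": 4, "override": 5}


def combine_with_defaults(user_rules: List[Rule], defaults: Dict[str, List[Rule]]) -> List[Rule]:
    """Emit default rules class by class, highest priority first, with the
    user's rules for each class alongside them (simplified sketch)."""
    out: List[Rule] = []
    for kind in sorted(PRIORITY_CLASS_MAP, key=PRIORITY_CLASS_MAP.get, reverse=True):
        out.extend(defaults.get(kind, []))  # the BASE_* defaults for this class
        out.extend(r for r in user_rules if r.get("kind") == kind)
    return out
```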
synapse/python_dependencies.py

@@ -51,7 +51,7 @@ REQUIREMENTS = [
     # we use the TYPE_CHECKER.redefine method added in jsonschema 3.0.0
     "jsonschema>=3.0.0",
     # frozendict 2.1.2 is broken on Debian 10: https://github.com/Marco-Sulla/python-frozendict/issues/41
-    "frozendict>=1,<2.1.2",
+    "frozendict>=1,!=2.1.2",
     "unpaddedbase64>=1.1.0",
     "canonicaljson>=1.4.0",
     # we use the type definitions added in signedjson 1.1.

@@ -76,8 +76,7 @@ REQUIREMENTS = [
     "msgpack>=0.5.2",
     "phonenumbers>=8.2.0",
     # we use GaugeHistogramMetric, which was added in prom-client 0.4.0.
-    "prometheus_client>=0.4.0",
+    # 0.13.0 has an incorrect type annotation, see #11832.
+    "prometheus_client>=0.4.0,<0.13.0",
     # we use `order`, which arrived in attrs 19.2.0.
     # Note: 21.1.0 broke `/sync`, see #9936
     "attrs>=19.2.0,!=21.1.0",

@@ -89,7 +88,7 @@ REQUIREMENTS = [
     # with the latest security patches.
     "cryptography>=3.4.7",
     "ijson>=3.1",
-    "matrix-common==1.0.0",
+    "matrix-common~=1.1.0",
 ]

 CONDITIONAL_REQUIREMENTS = {
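Two of the pin changes above have different semantics worth noting. `frozendict>=1,<2.1.2` capped every future release, whereas `frozendict>=1,!=2.1.2` excludes only the release known to be broken on Debian 10, so later fixed versions install normally; `matrix-common~=1.1.0` is a "compatible release" pin, equivalent to `>=1.1.0,<1.2`. This can be checked with the `packaging` library that pip itself uses for specifier matching:

```python
from packaging.specifiers import SpecifierSet

old, new = SpecifierSet(">=1,<2.1.2"), SpecifierSet(">=1,!=2.1.2")
print("2.1.2" in old, "2.1.2" in new)  # False False -- broken release stays excluded
print("2.1.3" in old, "2.1.3" in new)  # False True  -- later fixes allowed again

compat = SpecifierSet("~=1.1.0")       # equivalent to >=1.1.0, ==1.1.*
print("1.1.2" in compat, "1.2.0" in compat)  # True False
```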
synapse/replication/http/__init__.py

@@ -40,7 +40,7 @@ class ReplicationRestResource(JsonResource):
         super().__init__(hs, canonical_json=False, extract_context=True)
         self.register_servlets(hs)

-    def register_servlets(self, hs: "HomeServer"):
+    def register_servlets(self, hs: "HomeServer") -> None:
         send_event.register_servlets(hs, self)
         federation.register_servlets(hs, self)
         presence.register_servlets(hs, self)
synapse/replication/http/_base.py

@@ -15,16 +15,20 @@
 import abc
 import logging
 import re
-import urllib
+import urllib.parse
 from inspect import signature
 from typing import TYPE_CHECKING, Any, Awaitable, Callable, Dict, List, Tuple

 from prometheus_client import Counter, Gauge

+from twisted.web.server import Request
+
 from synapse.api.errors import HttpResponseException, SynapseError
 from synapse.http import RequestTimedOutError
+from synapse.http.server import HttpServer
 from synapse.logging import opentracing
 from synapse.logging.opentracing import trace
+from synapse.types import JsonDict
 from synapse.util.caches.response_cache import ResponseCache
 from synapse.util.stringutils import random_string

@@ -113,10 +117,12 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
         if hs.config.worker.worker_replication_secret:
             self._replication_secret = hs.config.worker.worker_replication_secret

-    def _check_auth(self, request) -> None:
+    def _check_auth(self, request: Request) -> None:
         # Get the authorization header.
         auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")

+        if not auth_headers:
+            raise RuntimeError("Missing Authorization header.")
         if len(auth_headers) > 1:
             raise RuntimeError("Too many Authorization headers.")
         parts = auth_headers[0].split(b" ")

@@ -129,7 +135,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
         raise RuntimeError("Invalid Authorization header.")

     @abc.abstractmethod
-    async def _serialize_payload(**kwargs):
+    async def _serialize_payload(**kwargs) -> JsonDict:
         """Static method that is called when creating a request.

         Concrete implementations should have explicit parameters (rather than

@@ -144,19 +150,20 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
         return {}

     @abc.abstractmethod
-    async def _handle_request(self, request, **kwargs):
+    async def _handle_request(
+        self, request: Request, **kwargs: Any
+    ) -> Tuple[int, JsonDict]:
         """Handle incoming request.

         This is called with the request object and PATH_ARGS.

         Returns:
-            tuple[int, dict]: HTTP status code and a JSON serialisable dict
-                to be used as response body of request.
+            HTTP status code and a JSON serialisable dict to be used as response
+            body of request.
         """
-        pass

     @classmethod
-    def make_client(cls, hs: "HomeServer"):
+    def make_client(cls, hs: "HomeServer") -> Callable:
         """Create a client that makes requests.

         Returns a callable that accepts the same parameters as

@@ -182,7 +189,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
         )

         @trace(opname="outgoing_replication_request")
-        async def send_request(*, instance_name="master", **kwargs):
+        async def send_request(*, instance_name: str = "master", **kwargs: Any) -> Any:
             with outgoing_gauge.track_inprogress():
                 if instance_name == local_instance_name:
                     raise Exception("Trying to send HTTP request to self")

@@ -268,7 +275,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):

         return send_request

-    def register(self, http_server):
+    def register(self, http_server: HttpServer) -> None:
         """Called by the server to register this as a handler to the
         appropriate path.
         """

@@ -289,7 +296,9 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
             self.__class__.__name__,
         )

-    async def _check_auth_and_handle(self, request, **kwargs):
+    async def _check_auth_and_handle(
+        self, request: Request, **kwargs: Any
+    ) -> Tuple[int, JsonDict]:
         """Called on new incoming requests when caching is enabled. Checks
         if there is a cached response for the request and returns that,
         otherwise calls `_handle_request` and caches its response.
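The `_base.py` changes establish the typing pattern that every servlet file below follows: the abstract `_serialize_payload` and `_handle_request` take `**kwargs`, and each concrete servlet overrides them with explicit, typed parameters, suppressing mypy's Liskov-substitution complaint with `# type: ignore[override]`. A minimal self-contained sketch of the pattern (illustrative names, not Synapse's actual classes):

```python
import abc
import asyncio
from typing import Any, Dict

JsonDict = Dict[str, Any]  # stand-in for synapse.types.JsonDict


class Endpoint(metaclass=abc.ABCMeta):
    @staticmethod
    @abc.abstractmethod
    async def _serialize_payload(**kwargs: Any) -> JsonDict:
        """Concrete servlets narrow **kwargs to explicit parameters."""


class AddTagEndpoint(Endpoint):
    @staticmethod
    async def _serialize_payload(  # type: ignore[override]
        user_id: str, room_id: str, tag: str
    ) -> JsonDict:
        # Explicit parameters give callers and mypy real signatures, at the
        # cost of technically violating the override contract -- hence the
        # scoped ignore rather than a bare `# type: ignore`.
        return {"user_id": user_id, "room_id": room_id, "tag": tag}


print(asyncio.run(AddTagEndpoint._serialize_payload("@a:hs", "!r:hs", "fav")))
```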
synapse/replication/http/account_data.py

@@ -13,10 +13,14 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Tuple

+from twisted.web.server import Request
+
+from synapse.http.server import HttpServer
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.replication.http._base import ReplicationEndpoint
+from synapse.types import JsonDict

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -48,14 +52,18 @@ class ReplicationUserAccountDataRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()

     @staticmethod
-    async def _serialize_payload(user_id, account_data_type, content):
+    async def _serialize_payload(  # type: ignore[override]
+        user_id: str, account_data_type: str, content: JsonDict
+    ) -> JsonDict:
         payload = {
             "content": content,
         }

         return payload

-    async def _handle_request(self, request, user_id, account_data_type):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str, account_data_type: str
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         max_stream_id = await self.handler.add_account_data_for_user(

@@ -89,14 +97,18 @@ class ReplicationRoomAccountDataRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()

     @staticmethod
-    async def _serialize_payload(user_id, room_id, account_data_type, content):
+    async def _serialize_payload(  # type: ignore[override]
+        user_id: str, room_id: str, account_data_type: str, content: JsonDict
+    ) -> JsonDict:
         payload = {
             "content": content,
         }

         return payload

-    async def _handle_request(self, request, user_id, room_id, account_data_type):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str, room_id: str, account_data_type: str
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         max_stream_id = await self.handler.add_account_data_to_room(

@@ -130,14 +142,18 @@ class ReplicationAddTagRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()

     @staticmethod
-    async def _serialize_payload(user_id, room_id, tag, content):
+    async def _serialize_payload(  # type: ignore[override]
+        user_id: str, room_id: str, tag: str, content: JsonDict
+    ) -> JsonDict:
         payload = {
             "content": content,
         }

         return payload

-    async def _handle_request(self, request, user_id, room_id, tag):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str, room_id: str, tag: str
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         max_stream_id = await self.handler.add_tag_to_room(

@@ -173,11 +189,13 @@ class ReplicationRemoveTagRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()

     @staticmethod
-    async def _serialize_payload(user_id, room_id, tag):
+    async def _serialize_payload(user_id: str, room_id: str, tag: str) -> JsonDict:  # type: ignore[override]
         return {}

-    async def _handle_request(self, request, user_id, room_id, tag):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str, room_id: str, tag: str
+    ) -> Tuple[int, JsonDict]:
         max_stream_id = await self.handler.remove_tag_from_room(
             user_id,
             room_id,

@@ -187,7 +205,7 @@ class ReplicationRemoveTagRestServlet(ReplicationEndpoint):
         return 200, {"max_stream_id": max_stream_id}


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReplicationUserAccountDataRestServlet(hs).register(http_server)
     ReplicationRoomAccountDataRestServlet(hs).register(http_server)
     ReplicationAddTagRestServlet(hs).register(http_server)
synapse/replication/http/devices.py

@@ -13,9 +13,13 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Tuple

+from twisted.web.server import Request
+
+from synapse.http.server import HttpServer
 from synapse.replication.http._base import ReplicationEndpoint
+from synapse.types import JsonDict

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -63,14 +67,16 @@ class ReplicationUserDevicesResyncRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()

     @staticmethod
-    async def _serialize_payload(user_id):
+    async def _serialize_payload(user_id: str) -> JsonDict:  # type: ignore[override]
         return {}

-    async def _handle_request(self, request, user_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str
+    ) -> Tuple[int, JsonDict]:
         user_devices = await self.device_list_updater.user_device_resync(user_id)

         return 200, user_devices


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReplicationUserDevicesResyncRestServlet(hs).register(http_server)
synapse/replication/http/federation.py

@@ -13,17 +13,22 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, List, Tuple

+from twisted.web.server import Request
+
-from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
-from synapse.events import make_event_from_dict
+from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
+from synapse.events import EventBase, make_event_from_dict
 from synapse.events.snapshot import EventContext
+from synapse.http.server import HttpServer
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.replication.http._base import ReplicationEndpoint
+from synapse.types import JsonDict
 from synapse.util.metrics import Measure

 if TYPE_CHECKING:
     from synapse.server import HomeServer
+    from synapse.storage.databases.main import DataStore

 logger = logging.getLogger(__name__)

@@ -69,14 +74,18 @@ class ReplicationFederationSendEventsRestServlet(ReplicationEndpoint):
         self.federation_event_handler = hs.get_federation_event_handler()

     @staticmethod
-    async def _serialize_payload(store, room_id, event_and_contexts, backfilled):
+    async def _serialize_payload(  # type: ignore[override]
+        store: "DataStore",
+        room_id: str,
+        event_and_contexts: List[Tuple[EventBase, EventContext]],
+        backfilled: bool,
+    ) -> JsonDict:
         """
         Args:
             store
-            room_id (str)
-            event_and_contexts (list[tuple[FrozenEvent, EventContext]])
-            backfilled (bool): Whether or not the events are the result of
-                backfilling
+            room_id
+            event_and_contexts
+            backfilled: Whether or not the events are the result of backfilling
         """
         event_payloads = []
         for event, context in event_and_contexts:

@@ -102,7 +111,7 @@ class ReplicationFederationSendEventsRestServlet(ReplicationEndpoint):

         return payload

-    async def _handle_request(self, request):
+    async def _handle_request(self, request: Request) -> Tuple[int, JsonDict]:  # type: ignore[override]
         with Measure(self.clock, "repl_fed_send_events_parse"):
             content = parse_json_object_from_request(request)

@@ -163,10 +172,14 @@ class ReplicationFederationSendEduRestServlet(ReplicationEndpoint):
         self.registry = hs.get_federation_registry()

     @staticmethod
-    async def _serialize_payload(edu_type, origin, content):
+    async def _serialize_payload(  # type: ignore[override]
+        edu_type: str, origin: str, content: JsonDict
+    ) -> JsonDict:
         return {"origin": origin, "content": content}

-    async def _handle_request(self, request, edu_type):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, edu_type: str
+    ) -> Tuple[int, JsonDict]:
         with Measure(self.clock, "repl_fed_send_edu_parse"):
             content = parse_json_object_from_request(request)

@@ -175,9 +188,9 @@ class ReplicationFederationSendEduRestServlet(ReplicationEndpoint):

         logger.info("Got %r edu from %s", edu_type, origin)

-        result = await self.registry.on_edu(edu_type, origin, edu_content)
+        await self.registry.on_edu(edu_type, origin, edu_content)

-        return 200, result
+        return 200, {}


 class ReplicationGetQueryRestServlet(ReplicationEndpoint):

@@ -206,15 +219,17 @@ class ReplicationGetQueryRestServlet(ReplicationEndpoint):
         self.registry = hs.get_federation_registry()

     @staticmethod
-    async def _serialize_payload(query_type, args):
+    async def _serialize_payload(query_type: str, args: JsonDict) -> JsonDict:  # type: ignore[override]
         """
         Args:
-            query_type (str)
-            args (dict): The arguments received for the given query type
+            query_type
+            args: The arguments received for the given query type
         """
         return {"args": args}

-    async def _handle_request(self, request, query_type):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, query_type: str
+    ) -> Tuple[int, JsonDict]:
         with Measure(self.clock, "repl_fed_query_parse"):
             content = parse_json_object_from_request(request)

@@ -248,14 +263,16 @@ class ReplicationCleanRoomRestServlet(ReplicationEndpoint):
         self.store = hs.get_datastore()

     @staticmethod
-    async def _serialize_payload(room_id, args):
+    async def _serialize_payload(room_id: str) -> JsonDict:  # type: ignore[override]
         """
         Args:
-            room_id (str)
+            room_id
         """
         return {}

-    async def _handle_request(self, request, room_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, room_id: str
+    ) -> Tuple[int, JsonDict]:
         await self.store.clean_room_for_join(room_id)

         return 200, {}

@@ -283,17 +300,19 @@ class ReplicationStoreRoomOnOutlierMembershipRestServlet(ReplicationEndpoint):
         self.store = hs.get_datastore()

     @staticmethod
-    async def _serialize_payload(room_id, room_version):
+    async def _serialize_payload(room_id: str, room_version: RoomVersion) -> JsonDict:  # type: ignore[override]
         return {"room_version": room_version.identifier}

-    async def _handle_request(self, request, room_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, room_id: str
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)
         room_version = KNOWN_ROOM_VERSIONS[content["room_version"]]
         await self.store.maybe_store_room_on_outlier_membership(room_id, room_version)
         return 200, {}


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReplicationFederationSendEventsRestServlet(hs).register(http_server)
     ReplicationFederationSendEduRestServlet(hs).register(http_server)
     ReplicationGetQueryRestServlet(hs).register(http_server)
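One behavioural fix hides among the federation type hints: the EDU servlet used to return whatever `registry.on_edu(...)` produced as the response body, and now awaits it and answers `200, {}` unconditionally, consistent with the registry handler not producing a meaningful result. A sketch of the corrected contract (the `on_edu` stub here is hypothetical, not Synapse's registry):

```python
from typing import Any, Dict, Tuple

JsonDict = Dict[str, Any]


async def on_edu(edu_type: str, origin: str, content: JsonDict) -> None:
    """Dispatch to the registered handler; nothing useful is returned."""


async def handle_edu(edu_type: str, origin: str, content: JsonDict) -> Tuple[int, JsonDict]:
    # Before: `result = await on_edu(...)` then `return 200, result`, which
    # would have serialised None as the body. Now the absent result is ignored.
    await on_edu(edu_type, origin, content)
    return 200, {}
```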
synapse/replication/http/login.py

@@ -13,10 +13,14 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Optional, Tuple, cast

+from twisted.web.server import Request
+
+from synapse.http.server import HttpServer
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.replication.http._base import ReplicationEndpoint
+from synapse.types import JsonDict

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -39,25 +43,24 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
         self.registration_handler = hs.get_registration_handler()

     @staticmethod
-    async def _serialize_payload(
-        user_id,
-        device_id,
-        initial_display_name,
-        is_guest,
-        is_appservice_ghost,
-        should_issue_refresh_token,
-        auth_provider_id,
-        auth_provider_session_id,
-    ):
+    async def _serialize_payload(  # type: ignore[override]
+        user_id: str,
+        device_id: Optional[str],
+        initial_display_name: Optional[str],
+        is_guest: bool,
+        is_appservice_ghost: bool,
+        should_issue_refresh_token: bool,
+        auth_provider_id: Optional[str],
+        auth_provider_session_id: Optional[str],
+    ) -> JsonDict:
         """
         Args:
-            user_id (int)
-            device_id (str|None): Device ID to use, if None a new one is
-                generated.
-            initial_display_name (str|None)
-            is_guest (bool)
-            is_appservice_ghost (bool)
-            should_issue_refresh_token (bool)
+            user_id
+            device_id: Device ID to use, if None a new one is generated.
+            initial_display_name
+            is_guest
+            is_appservice_ghost
+            should_issue_refresh_token
         """
         return {
             "device_id": device_id,

@@ -69,7 +72,9 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
             "auth_provider_session_id": auth_provider_session_id,
         }

-    async def _handle_request(self, request, user_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         device_id = content["device_id"]

@@ -91,8 +96,8 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
             auth_provider_session_id=auth_provider_session_id,
         )

-        return 200, res
+        return 200, cast(JsonDict, res)


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     RegisterDeviceReplicationServlet(hs).register(http_server)
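The login servlet's `return 200, cast(JsonDict, res)` uses `typing.cast`, which is purely an annotation-level assertion: it performs no runtime conversion or validation, so it costs nothing but also catches nothing if the value is not actually JSON-shaped. For example:

```python
from typing import Any, Dict, cast

JsonDict = Dict[str, Any]


def narrow(value: object) -> JsonDict:
    # cast() just tells the type checker to treat `value` as JsonDict;
    # at runtime it returns the object unchanged.
    return cast(JsonDict, value)


print(narrow({"device_id": "ABCDEF", "access_token": "syt_..."}))
```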
synapse/replication/http/membership.py

@@ -16,6 +16,7 @@ from typing import TYPE_CHECKING, List, Optional, Tuple

 from twisted.web.server import Request

+from synapse.http.server import HttpServer
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.http.site import SynapseRequest
 from synapse.replication.http._base import ReplicationEndpoint

@@ -53,7 +54,7 @@ class ReplicationRemoteJoinRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()

     @staticmethod
-    async def _serialize_payload(  # type: ignore
+    async def _serialize_payload(  # type: ignore[override]
         requester: Requester,
         room_id: str,
         user_id: str,

@@ -77,7 +78,7 @@ class ReplicationRemoteJoinRestServlet(ReplicationEndpoint):
             "content": content,
         }

-    async def _handle_request(  # type: ignore
+    async def _handle_request(  # type: ignore[override]
         self, request: SynapseRequest, room_id: str, user_id: str
     ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

@@ -122,13 +123,13 @@ class ReplicationRemoteKnockRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()

     @staticmethod
-    async def _serialize_payload(  # type: ignore
+    async def _serialize_payload(  # type: ignore[override]
         requester: Requester,
         room_id: str,
         user_id: str,
         remote_room_hosts: List[str],
         content: JsonDict,
-    ):
+    ) -> JsonDict:
         """
         Args:
             requester: The user making the request, according to the access token.

@@ -143,12 +144,12 @@ class ReplicationRemoteKnockRestServlet(ReplicationEndpoint):
             "content": content,
         }

-    async def _handle_request(  # type: ignore
+    async def _handle_request(  # type: ignore[override]
         self,
         request: SynapseRequest,
         room_id: str,
         user_id: str,
-    ):
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         remote_room_hosts = content["remote_room_hosts"]

@@ -192,7 +193,7 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
         self.member_handler = hs.get_room_member_handler()

     @staticmethod
-    async def _serialize_payload(  # type: ignore
+    async def _serialize_payload(  # type: ignore[override]
         invite_event_id: str,
         txn_id: Optional[str],
         requester: Requester,

@@ -215,7 +216,7 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
             "content": content,
         }

-    async def _handle_request(  # type: ignore
+    async def _handle_request(  # type: ignore[override]
         self, request: SynapseRequest, invite_event_id: str
     ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

@@ -262,12 +263,12 @@ class ReplicationRemoteRescindKnockRestServlet(ReplicationEndpoint):
         self.member_handler = hs.get_room_member_handler()

     @staticmethod
-    async def _serialize_payload(  # type: ignore
+    async def _serialize_payload(  # type: ignore[override]
         knock_event_id: str,
         txn_id: Optional[str],
         requester: Requester,
         content: JsonDict,
-    ):
+    ) -> JsonDict:
         """
         Args:
             knock_event_id: The ID of the knock to be rescinded.

@@ -281,11 +282,11 @@ class ReplicationRemoteRescindKnockRestServlet(ReplicationEndpoint):
             "content": content,
         }

-    async def _handle_request(  # type: ignore
+    async def _handle_request(  # type: ignore[override]
         self,
         request: SynapseRequest,
         knock_event_id: str,
-    ):
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         txn_id = content["txn_id"]

@@ -329,7 +330,7 @@ class ReplicationUserJoinedLeftRoomRestServlet(ReplicationEndpoint):
         self.distributor = hs.get_distributor()

     @staticmethod
-    async def _serialize_payload(  # type: ignore
+    async def _serialize_payload(  # type: ignore[override]
         room_id: str, user_id: str, change: str
     ) -> JsonDict:
         """

@@ -345,7 +346,7 @@ class ReplicationUserJoinedLeftRoomRestServlet(ReplicationEndpoint):

         return {}

-    async def _handle_request(  # type: ignore
+    async def _handle_request(  # type: ignore[override]
         self, request: Request, room_id: str, user_id: str, change: str
     ) -> Tuple[int, JsonDict]:
         logger.info("user membership change: %s in %s", user_id, room_id)

@@ -360,7 +361,7 @@ class ReplicationUserJoinedLeftRoomRestServlet(ReplicationEndpoint):
         return 200, {}


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReplicationRemoteJoinRestServlet(hs).register(http_server)
     ReplicationRemoteRejectInviteRestServlet(hs).register(http_server)
     ReplicationUserJoinedLeftRoomRestServlet(hs).register(http_server)
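Throughout the membership servlet the change is mechanical: bare `# type: ignore` comments become `# type: ignore[override]`. Qualifying the ignore with an error code means mypy still reports any other problem on the same line, and with `warn_unused_ignores = True` it can flag the comment once it stops suppressing anything. A toy illustration of the difference:

```python
class Base:
    def f(self, x: int) -> int:
        return x


class Child(Base):
    # A bare `# type: ignore` would hide every error on this line; the
    # scoped form only silences the incompatible-override complaint.
    def f(self, x: str) -> int:  # type: ignore[override]
        return len(x)
```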
synapse/replication/http/presence.py

@@ -13,11 +13,14 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Tuple

+from twisted.web.server import Request
+
+from synapse.http.server import HttpServer
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.replication.http._base import ReplicationEndpoint
-from synapse.types import UserID
+from synapse.types import JsonDict, UserID

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -49,18 +52,17 @@ class ReplicationBumpPresenceActiveTime(ReplicationEndpoint):
         self._presence_handler = hs.get_presence_handler()

     @staticmethod
-    async def _serialize_payload(user_id):
+    async def _serialize_payload(user_id: str) -> JsonDict:  # type: ignore[override]
         return {}

-    async def _handle_request(self, request, user_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str
+    ) -> Tuple[int, JsonDict]:
         await self._presence_handler.bump_presence_active_time(
             UserID.from_string(user_id)
         )

-        return (
-            200,
-            {},
-        )
+        return (200, {})


 class ReplicationPresenceSetState(ReplicationEndpoint):

@@ -92,16 +94,21 @@ class ReplicationPresenceSetState(ReplicationEndpoint):
         self._presence_handler = hs.get_presence_handler()

     @staticmethod
-    async def _serialize_payload(
-        user_id, state, ignore_status_msg=False, force_notify=False
-    ):
+    async def _serialize_payload(  # type: ignore[override]
+        user_id: str,
+        state: JsonDict,
+        ignore_status_msg: bool = False,
+        force_notify: bool = False,
+    ) -> JsonDict:
         return {
             "state": state,
             "ignore_status_msg": ignore_status_msg,
             "force_notify": force_notify,
         }

-    async def _handle_request(self, request, user_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         await self._presence_handler.set_state(

@@ -111,12 +118,9 @@ class ReplicationPresenceSetState(ReplicationEndpoint):
             content["force_notify"],
         )

-        return (
-            200,
-            {},
-        )
+        return (200, {})


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReplicationBumpPresenceActiveTime(hs).register(http_server)
     ReplicationPresenceSetState(hs).register(http_server)
synapse/replication/http/push.py

@@ -13,10 +13,14 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Tuple

+from twisted.web.server import Request
+
+from synapse.http.server import HttpServer
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.replication.http._base import ReplicationEndpoint
+from synapse.types import JsonDict

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -48,7 +52,7 @@ class ReplicationRemovePusherRestServlet(ReplicationEndpoint):
         self.pusher_pool = hs.get_pusherpool()

     @staticmethod
-    async def _serialize_payload(app_id, pushkey, user_id):
+    async def _serialize_payload(app_id: str, pushkey: str, user_id: str) -> JsonDict:  # type: ignore[override]
         payload = {
             "app_id": app_id,
             "pushkey": pushkey,

@@ -56,7 +60,9 @@ class ReplicationRemovePusherRestServlet(ReplicationEndpoint):

         return payload

-    async def _handle_request(self, request, user_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         app_id = content["app_id"]

@@ -67,5 +73,5 @@ class ReplicationRemovePusherRestServlet(ReplicationEndpoint):
         return 200, {}


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReplicationRemovePusherRestServlet(hs).register(http_server)
synapse/replication/http/register.py

@@ -13,10 +13,14 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Optional, Tuple

+from twisted.web.server import Request
+
+from synapse.http.server import HttpServer
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.replication.http._base import ReplicationEndpoint
+from synapse.types import JsonDict

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -36,34 +40,34 @@ class ReplicationRegisterServlet(ReplicationEndpoint):
         self.registration_handler = hs.get_registration_handler()

     @staticmethod
-    async def _serialize_payload(
-        user_id,
-        password_hash,
-        was_guest,
-        make_guest,
-        appservice_id,
-        create_profile_with_displayname,
-        admin,
-        user_type,
-        address,
-        shadow_banned,
-    ):
+    async def _serialize_payload(  # type: ignore[override]
+        user_id: str,
+        password_hash: Optional[str],
+        was_guest: bool,
+        make_guest: bool,
+        appservice_id: Optional[str],
+        create_profile_with_displayname: Optional[str],
+        admin: bool,
+        user_type: Optional[str],
+        address: Optional[str],
+        shadow_banned: bool,
+    ) -> JsonDict:
         """
         Args:
-            user_id (str): The desired user ID to register.
-            password_hash (str|None): Optional. The password hash for this user.
-            was_guest (bool): Optional. Whether this is a guest account being
-                upgraded to a non-guest account.
-            make_guest (boolean): True if the the new user should be guest,
-                false to add a regular user account.
-            appservice_id (str|None): The ID of the appservice registering the user.
-            create_profile_with_displayname (unicode|None): Optionally create a
-                profile for the user, setting their displayname to the given value
-            admin (boolean): is an admin user?
-            user_type (str|None): type of user. One of the values from
-                api.constants.UserTypes, or None for a normal user.
-            address (str|None): the IP address used to perform the regitration.
-            shadow_banned (bool): Whether to shadow-ban the user
+            user_id: The desired user ID to register.
+            password_hash: Optional. The password hash for this user.
+            was_guest: Optional. Whether this is a guest account being upgraded
+                to a non-guest account.
+            make_guest: True if the the new user should be guest, false to add a
+                regular user account.
+            appservice_id: The ID of the appservice registering the user.
+            create_profile_with_displayname: Optionally create a profile for the
+                user, setting their displayname to the given value
+            admin: is an admin user?
+            user_type: type of user. One of the values from api.constants.UserTypes,
+                or None for a normal user.
+            address: the IP address used to perform the regitration.
+            shadow_banned: Whether to shadow-ban the user
         """
         return {
             "password_hash": password_hash,

@@ -77,7 +81,9 @@ class ReplicationRegisterServlet(ReplicationEndpoint):
             "shadow_banned": shadow_banned,
         }

-    async def _handle_request(self, request, user_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         await self.registration_handler.check_registration_ratelimit(content["address"])

@@ -110,18 +116,21 @@ class ReplicationPostRegisterActionsServlet(ReplicationEndpoint):
         self.registration_handler = hs.get_registration_handler()

     @staticmethod
-    async def _serialize_payload(user_id, auth_result, access_token):
+    async def _serialize_payload(  # type: ignore[override]
+        user_id: str, auth_result: JsonDict, access_token: Optional[str]
+    ) -> JsonDict:
         """
         Args:
-            user_id (str): The user ID that consented
-            auth_result (dict): The authenticated credentials of the newly
-                registered user.
-            access_token (str|None): The access token of the newly logged in
+            user_id: The user ID that consented
+            auth_result: The authenticated credentials of the newly registered user.
+            access_token: The access token of the newly logged in
                 device, or None if `inhibit_login` enabled.
         """
         return {"auth_result": auth_result, "access_token": access_token}

-    async def _handle_request(self, request, user_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, user_id: str
+    ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)

         auth_result = content["auth_result"]

@@ -134,6 +143,6 @@ class ReplicationPostRegisterActionsServlet(ReplicationEndpoint):
         return 200, {}


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReplicationRegisterServlet(hs).register(http_server)
     ReplicationPostRegisterActionsServlet(hs).register(http_server)
synapse/replication/http/send_event.py

@@ -13,18 +13,22 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, List, Tuple

+from twisted.web.server import Request
+
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
-from synapse.events import make_event_from_dict
+from synapse.events import EventBase, make_event_from_dict
 from synapse.events.snapshot import EventContext
+from synapse.http.server import HttpServer
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.replication.http._base import ReplicationEndpoint
-from synapse.types import Requester, UserID
+from synapse.types import JsonDict, Requester, UserID
 from synapse.util.metrics import Measure

 if TYPE_CHECKING:
     from synapse.server import HomeServer
+    from synapse.storage.databases.main import DataStore

 logger = logging.getLogger(__name__)

@@ -70,18 +74,24 @@ class ReplicationSendEventRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()

     @staticmethod
-    async def _serialize_payload(
-        event_id, store, event, context, requester, ratelimit, extra_users
-    ):
+    async def _serialize_payload(  # type: ignore[override]
+        event_id: str,
+        store: "DataStore",
+        event: EventBase,
+        context: EventContext,
+        requester: Requester,
+        ratelimit: bool,
+        extra_users: List[UserID],
+    ) -> JsonDict:
         """
         Args:
-            event_id (str)
-            store (DataStore)
-            requester (Requester)
-            event (FrozenEvent)
-            context (EventContext)
-            ratelimit (bool)
-            extra_users (list(UserID)): Any extra users to notify about event
+            event_id
+            store
+            requester
+            event
+            context
+            ratelimit
+            extra_users: Any extra users to notify about event
         """
         serialized_context = await context.serialize(event, store)

@@ -100,7 +110,9 @@ class ReplicationSendEventRestServlet(ReplicationEndpoint):

         return payload

-    async def _handle_request(self, request, event_id):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, event_id: str
+    ) -> Tuple[int, JsonDict]:
         with Measure(self.clock, "repl_send_event_parse"):
             content = parse_json_object_from_request(request)

@@ -120,8 +132,6 @@ class ReplicationSendEventRestServlet(ReplicationEndpoint):
             ratelimit = content["ratelimit"]
             extra_users = [UserID.from_string(u) for u in content["extra_users"]]

-        request.requester = requester
-
         logger.info(
             "Got event to send with ID: %s into room: %s", event.event_id, event.room_id
         )

@@ -139,5 +149,5 @@ class ReplicationSendEventRestServlet(ReplicationEndpoint):
         )


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReplicationSendEventRestServlet(hs).register(http_server)
@@ -13,11 +13,15 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Tuple

+from twisted.web.server import Request
+
 from synapse.api.errors import SynapseError
+from synapse.http.server import HttpServer
 from synapse.http.servlet import parse_integer
 from synapse.replication.http._base import ReplicationEndpoint
+from synapse.types import JsonDict

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -57,10 +61,14 @@ class ReplicationGetStreamUpdates(ReplicationEndpoint):
         self.streams = hs.get_replication_streams()

     @staticmethod
-    async def _serialize_payload(stream_name, from_token, upto_token):
+    async def _serialize_payload(  # type: ignore[override]
+        stream_name: str, from_token: int, upto_token: int
+    ) -> JsonDict:
         return {"from_token": from_token, "upto_token": upto_token}

-    async def _handle_request(self, request, stream_name):
+    async def _handle_request(  # type: ignore[override]
+        self, request: Request, stream_name: str
+    ) -> Tuple[int, JsonDict]:
         stream = self.streams.get(stream_name)
         if stream is None:
             raise SynapseError(400, "Unknown stream")
@@ -78,5 +86,5 @@ class ReplicationGetStreamUpdates(ReplicationEndpoint):
         )


-def register_servlets(hs: "HomeServer", http_server):
+def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReplicationGetStreamUpdates(hs).register(http_server)
@@ -40,7 +40,7 @@ class SlavedIdTracker(AbstractStreamIdTracker):
         for table, column in extra_tables:
             self.advance(None, _load_current_id(db_conn, table, column))

-    def advance(self, instance_name: Optional[str], new_id: int):
+    def advance(self, instance_name: Optional[str], new_id: int) -> None:
         self._current = (max if self.step > 0 else min)(self._current, new_id)

     def get_current_token(self) -> int:
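The one-liner inside `advance` enforces monotonicity: for forward streams (positive step) the tracked token can only grow via `max`, for backward streams it can only shrink via `min`, so replaying a stale position is a no-op. A tiny illustration (the step and ids are made up):

    step = 1
    current = 10
    for new_id in (7, 12, 11):
        # max() for ascending streams, min() for descending ones
        current = (max if step > 0 else min)(current, new_id)
    assert current == 12  # stale ids (7, 11) never move the token backwards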
@@ -37,7 +37,9 @@ class SlavedClientIpStore(BaseSlavedStore):
             cache_name="client_ip_last_seen", max_size=50000
         )

-    async def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id):
+    async def insert_client_ip(
+        self, user_id: str, access_token: str, ip: str, user_agent: str, device_id: str
+    ) -> None:
         now = int(self._clock.time_msec())
         key = (user_id, access_token, ip)

@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Any, Iterable

 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
@@ -60,7 +60,9 @@ class SlavedDeviceStore(EndToEndKeyWorkerStore, DeviceWorkerStore, BaseSlavedSto
     def get_device_stream_token(self) -> int:
         return self._device_list_id_gen.get_current_token()

-    def process_replication_rows(self, stream_name, instance_name, token, rows):
+    def process_replication_rows(
+        self, stream_name: str, instance_name: str, token: int, rows: Iterable[Any]
+    ) -> None:
         if stream_name == DeviceListsStream.NAME:
             self._device_list_id_gen.advance(instance_name, token)
             self._invalidate_caches_for_devices(token, rows)
@@ -70,7 +72,9 @@ class SlavedDeviceStore(EndToEndKeyWorkerStore, DeviceWorkerStore, BaseSlavedSto
             self._user_signature_stream_cache.entity_has_changed(row.user_id, token)
         return super().process_replication_rows(stream_name, instance_name, token, rows)

-    def _invalidate_caches_for_devices(self, token, rows):
+    def _invalidate_caches_for_devices(
+        self, token: int, rows: Iterable[DeviceListsStream.DeviceListsStreamRow]
+    ) -> None:
         for row in rows:
             # The entities are either user IDs (starting with '@') whose devices
             # have changed, or remote servers that we need to tell about
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Any, Iterable

 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
@@ -44,10 +44,12 @@ class SlavedGroupServerStore(GroupServerWorkerStore, BaseSlavedStore):
             self._group_updates_id_gen.get_current_token(),
         )

-    def get_group_stream_token(self):
+    def get_group_stream_token(self) -> int:
         return self._group_updates_id_gen.get_current_token()

-    def process_replication_rows(self, stream_name, instance_name, token, rows):
+    def process_replication_rows(
+        self, stream_name: str, instance_name: str, token: int, rows: Iterable[Any]
+    ) -> None:
         if stream_name == GroupServerStream.NAME:
             self._group_updates_id_gen.advance(instance_name, token)
             for row in rows:
@@ -12,6 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+from typing import Any, Iterable

 from synapse.replication.tcp.streams import PushRulesStream
 from synapse.storage.databases.main.push_rule import PushRulesWorkerStore
@@ -20,10 +21,12 @@ from .events import SlavedEventStore


 class SlavedPushRuleStore(SlavedEventStore, PushRulesWorkerStore):
-    def get_max_push_rules_stream_id(self):
+    def get_max_push_rules_stream_id(self) -> int:
         return self._push_rules_stream_id_gen.get_current_token()

-    def process_replication_rows(self, stream_name, instance_name, token, rows):
+    def process_replication_rows(
+        self, stream_name: str, instance_name: str, token: int, rows: Iterable[Any]
+    ) -> None:
         if stream_name == PushRulesStream.NAME:
             self._push_rules_stream_id_gen.advance(instance_name, token)
             for row in rows:
@@ -12,7 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Any, Iterable

 from synapse.replication.tcp.streams import PushersStream
 from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
@@ -41,8 +41,8 @@ class SlavedPusherStore(PusherWorkerStore, BaseSlavedStore):
         return self._pushers_id_gen.get_current_token()

     def process_replication_rows(
-        self, stream_name: str, instance_name: str, token, rows
+        self, stream_name: str, instance_name: str, token: int, rows: Iterable[Any]
     ) -> None:
         if stream_name == PushersStream.NAME:
-            self._pushers_id_gen.advance(instance_name, token)  # type: ignore
+            self._pushers_id_gen.advance(instance_name, token)
         return super().process_replication_rows(stream_name, instance_name, token, rows)
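All of the slaved stores above share one shape in `process_replication_rows`: when the stream name matches the store's stream, advance the local id generator to the new token (and invalidate any affected caches), then chain to `super()` so every mixin in the MRO sees the same batch. A condensed sketch of that pattern (the class, base, and stream name are invented for illustration):

    from typing import Any, Iterable

    class _Base:
        def process_replication_rows(
            self, stream_name: str, instance_name: str, token: int, rows: Iterable[Any]
        ) -> None:
            pass  # other store mixins hook in here

    class SketchSlavedStore(_Base):  # real stores mix in BaseSlavedStore
        token = 0

        def process_replication_rows(
            self, stream_name: str, instance_name: str, token: int, rows: Iterable[Any]
        ) -> None:
            if stream_name == "sketch_stream":  # stands in for e.g. PushersStream.NAME
                self.token = token  # simplified stand-in for SlavedIdTracker.advance
            # Always chain up so every mixin sees the batch.
            return super().process_replication_rows(stream_name, instance_name, token, rows)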
@@ -14,10 +14,12 @@
 """A replication client for use by synapse workers.
 """
 import logging
-from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple
+from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Tuple

 from twisted.internet.defer import Deferred
+from twisted.internet.interfaces import IAddress, IConnector
 from twisted.internet.protocol import ReconnectingClientFactory
+from twisted.python.failure import Failure

 from synapse.api.constants import EventTypes
 from synapse.federation import send_queue
@@ -79,10 +81,10 @@ class DirectTcpReplicationClientFactory(ReconnectingClientFactory):

         hs.get_reactor().addSystemEventTrigger("before", "shutdown", self.stopTrying)

-    def startedConnecting(self, connector):
+    def startedConnecting(self, connector: IConnector) -> None:
         logger.info("Connecting to replication: %r", connector.getDestination())

-    def buildProtocol(self, addr):
+    def buildProtocol(self, addr: IAddress) -> ClientReplicationStreamProtocol:
         logger.info("Connected to replication: %r", addr)
         return ClientReplicationStreamProtocol(
             self.hs,
@@ -92,11 +94,11 @@ class DirectTcpReplicationClientFactory(ReconnectingClientFactory):
             self.command_handler,
         )

-    def clientConnectionLost(self, connector, reason):
+    def clientConnectionLost(self, connector: IConnector, reason: Failure) -> None:
         logger.error("Lost replication conn: %r", reason)
         ReconnectingClientFactory.clientConnectionLost(self, connector, reason)

-    def clientConnectionFailed(self, connector, reason):
+    def clientConnectionFailed(self, connector: IConnector, reason: Failure) -> None:
         logger.error("Failed to connect to replication: %r", reason)
         ReconnectingClientFactory.clientConnectionFailed(self, connector, reason)

@@ -131,7 +133,7 @@ class ReplicationDataHandler:

     async def on_rdata(
         self, stream_name: str, instance_name: str, token: int, rows: list
-    ):
+    ) -> None:
         """Called to handle a batch of replication data with a given stream token.

         By default this just pokes the slave store. Can be overridden in subclasses to
@@ -252,14 +254,16 @@ class ReplicationDataHandler:
             # loop. (This maintains the order so no need to resort)
             waiting_list[:] = waiting_list[index_of_first_deferred_not_called:]

-    async def on_position(self, stream_name: str, instance_name: str, token: int):
+    async def on_position(
+        self, stream_name: str, instance_name: str, token: int
+    ) -> None:
         await self.on_rdata(stream_name, instance_name, token, [])

         # We poke the generic "replication" notifier to wake anything up that
         # may be streaming.
         self.notifier.notify_replication()

-    def on_remote_server_up(self, server: str):
+    def on_remote_server_up(self, server: str) -> None:
         """Called when get a new REMOTE_SERVER_UP command."""

         # Let's wake up the transaction queue for the server in case we have
@@ -269,7 +273,7 @@ class ReplicationDataHandler:

     async def wait_for_stream_position(
         self, instance_name: str, stream_name: str, position: int
-    ):
+    ) -> None:
         """Wait until this instance has received updates up to and including
         the given stream position.
         """
@@ -304,7 +308,7 @@ class ReplicationDataHandler:
             "Finished waiting for repl stream %r to reach %s", stream_name, position
         )

-    def stop_pusher(self, user_id, app_id, pushkey):
+    def stop_pusher(self, user_id: str, app_id: str, pushkey: str) -> None:
         if not self._notify_pushers:
             return

@@ -316,13 +320,13 @@ class ReplicationDataHandler:
         logger.info("Stopping pusher %r / %r", user_id, key)
         pusher.on_stop()

-    async def start_pusher(self, user_id, app_id, pushkey):
+    async def start_pusher(self, user_id: str, app_id: str, pushkey: str) -> None:
         if not self._notify_pushers:
             return

         key = "%s:%s" % (app_id, pushkey)
         logger.info("Starting pusher %r / %r", user_id, key)
-        return await self._pusher_pool.start_pusher_by_id(app_id, pushkey, user_id)
+        await self._pusher_pool.start_pusher_by_id(app_id, pushkey, user_id)


 class FederationSenderHandler:
@@ -353,10 +357,12 @@ class FederationSenderHandler:

         self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")

-    def wake_destination(self, server: str):
+    def wake_destination(self, server: str) -> None:
         self.federation_sender.wake_destination(server)

-    async def process_replication_rows(self, stream_name, token, rows):
+    async def process_replication_rows(
+        self, stream_name: str, token: int, rows: list
+    ) -> None:
         # The federation stream contains things that we want to send out, e.g.
         # presence, typing, etc.
         if stream_name == "federation":
@@ -384,11 +390,12 @@ class FederationSenderHandler:
             for host in hosts:
                 self.federation_sender.send_device_messages(host)

-    async def _on_new_receipts(self, rows):
+    async def _on_new_receipts(
+        self, rows: Iterable[ReceiptsStream.ReceiptsStreamRow]
+    ) -> None:
         """
         Args:
-            rows (Iterable[synapse.replication.tcp.streams.ReceiptsStream.ReceiptsStreamRow]):
-                new receipts to be processed
+            rows: new receipts to be processed
         """
         for receipt in rows:
             # we only want to send on receipts for our own users
@@ -408,7 +415,7 @@ class FederationSenderHandler:
             )
             await self.federation_sender.send_read_receipt(receipt_info)

-    async def update_token(self, token):
+    async def update_token(self, token: int) -> None:
         """Update the record of where we have processed to in the federation stream.

         Called after we have processed a an update received over replication. Sends
@@ -428,7 +435,7 @@ class FederationSenderHandler:

         run_as_background_process("_save_and_send_ack", self._save_and_send_ack)

-    async def _save_and_send_ack(self):
+    async def _save_and_send_ack(self) -> None:
         """Save the current federation position in the database and send an ACK
         to master with where we're up to.
         """
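One change above that looks behavioural is purely for the checker: `start_pusher` is now annotated `-> None`, and `return await self._pusher_pool.start_pusher_by_id(...)` would have made the call's return value the function's own result, contradicting that annotation. Dropping the `return` still awaits the coroutine; only the inferred result type changes. In miniature (the pool function is a hypothetical stand-in):

    from typing import Optional

    async def start_pusher_by_id(app_id: str, pushkey: str, user_id: str) -> Optional[str]:
        return "pusher"  # stand-in for PusherPool.start_pusher_by_id

    async def start_pusher(user_id: str, app_id: str, pushkey: str) -> None:
        # `return await ...` would make mypy infer Optional[str] here,
        # clashing with -> None; a bare await keeps only the side effect.
        await start_pusher_by_id(app_id, pushkey, user_id)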
@@ -18,12 +18,15 @@ allowed to be sent by which side.
 """
 import abc
 import logging
-from typing import Tuple, Type
+from typing import Optional, Tuple, Type, TypeVar

+from synapse.replication.tcp.streams._base import StreamRow
 from synapse.util import json_decoder, json_encoder

 logger = logging.getLogger(__name__)

+T = TypeVar("T", bound="Command")
+

 class Command(metaclass=abc.ABCMeta):
     """The base command class.
@@ -38,7 +41,7 @@ class Command(metaclass=abc.ABCMeta):

     @classmethod
     @abc.abstractmethod
-    def from_line(cls, line):
+    def from_line(cls: Type[T], line: str) -> T:
         """Deserialises a line from the wire into this command. `line` does not
         include the command.
         """
@@ -49,21 +52,24 @@ class Command(metaclass=abc.ABCMeta):
         prefix.
         """

-    def get_logcontext_id(self):
+    def get_logcontext_id(self) -> str:
         """Get a suitable string for the logcontext when processing this command"""

         # by default, we just use the command name.
         return self.NAME


+SC = TypeVar("SC", bound="_SimpleCommand")
+
+
 class _SimpleCommand(Command):
     """An implementation of Command whose argument is just a 'data' string."""

-    def __init__(self, data):
+    def __init__(self, data: str):
         self.data = data

     @classmethod
-    def from_line(cls, line):
+    def from_line(cls: Type[SC], line: str) -> SC:
         return cls(line)

     def to_line(self) -> str:
@@ -109,14 +115,16 @@ class RdataCommand(Command):

     NAME = "RDATA"

-    def __init__(self, stream_name, instance_name, token, row):
+    def __init__(
+        self, stream_name: str, instance_name: str, token: Optional[int], row: StreamRow
+    ):
         self.stream_name = stream_name
         self.instance_name = instance_name
         self.token = token
         self.row = row

     @classmethod
-    def from_line(cls, line):
+    def from_line(cls: Type["RdataCommand"], line: str) -> "RdataCommand":
         stream_name, instance_name, token, row_json = line.split(" ", 3)
         return cls(
             stream_name,
@@ -125,7 +133,7 @@ class RdataCommand(Command):
             json_decoder.decode(row_json),
         )

-    def to_line(self):
+    def to_line(self) -> str:
         return " ".join(
             (
                 self.stream_name,
@@ -135,7 +143,7 @@ class RdataCommand(Command):
             )
         )

-    def get_logcontext_id(self):
+    def get_logcontext_id(self) -> str:
         return "RDATA-" + self.stream_name


@@ -164,18 +172,20 @@ class PositionCommand(Command):

     NAME = "POSITION"

-    def __init__(self, stream_name, instance_name, prev_token, new_token):
+    def __init__(
+        self, stream_name: str, instance_name: str, prev_token: int, new_token: int
+    ):
         self.stream_name = stream_name
         self.instance_name = instance_name
         self.prev_token = prev_token
         self.new_token = new_token

     @classmethod
-    def from_line(cls, line):
+    def from_line(cls: Type["PositionCommand"], line: str) -> "PositionCommand":
         stream_name, instance_name, prev_token, new_token = line.split(" ", 3)
         return cls(stream_name, instance_name, int(prev_token), int(new_token))

-    def to_line(self):
+    def to_line(self) -> str:
         return " ".join(
             (
                 self.stream_name,
@@ -218,14 +228,14 @@ class ReplicateCommand(Command):

     NAME = "REPLICATE"

-    def __init__(self):
+    def __init__(self) -> None:
         pass

     @classmethod
-    def from_line(cls, line):
+    def from_line(cls: Type[T], line: str) -> T:
         return cls()

-    def to_line(self):
+    def to_line(self) -> str:
         return ""


@@ -247,14 +257,16 @@ class UserSyncCommand(Command):

     NAME = "USER_SYNC"

-    def __init__(self, instance_id, user_id, is_syncing, last_sync_ms):
+    def __init__(
+        self, instance_id: str, user_id: str, is_syncing: bool, last_sync_ms: int
+    ):
         self.instance_id = instance_id
         self.user_id = user_id
         self.is_syncing = is_syncing
         self.last_sync_ms = last_sync_ms

     @classmethod
-    def from_line(cls, line):
+    def from_line(cls: Type["UserSyncCommand"], line: str) -> "UserSyncCommand":
         instance_id, user_id, state, last_sync_ms = line.split(" ", 3)

         if state not in ("start", "end"):
@@ -262,7 +274,7 @@ class UserSyncCommand(Command):

         return cls(instance_id, user_id, state == "start", int(last_sync_ms))

-    def to_line(self):
+    def to_line(self) -> str:
         return " ".join(
             (
                 self.instance_id,
@@ -286,14 +298,16 @@ class ClearUserSyncsCommand(Command):

     NAME = "CLEAR_USER_SYNC"

-    def __init__(self, instance_id):
+    def __init__(self, instance_id: str):
         self.instance_id = instance_id

     @classmethod
-    def from_line(cls, line):
+    def from_line(
+        cls: Type["ClearUserSyncsCommand"], line: str
+    ) -> "ClearUserSyncsCommand":
         return cls(line)

-    def to_line(self):
+    def to_line(self) -> str:
         return self.instance_id


@@ -316,7 +330,9 @@ class FederationAckCommand(Command):
         self.token = token

     @classmethod
-    def from_line(cls, line: str) -> "FederationAckCommand":
+    def from_line(
+        cls: Type["FederationAckCommand"], line: str
+    ) -> "FederationAckCommand":
         instance_name, token = line.split(" ")
         return cls(instance_name, int(token))

@@ -334,7 +350,15 @@ class UserIpCommand(Command):

     NAME = "USER_IP"

-    def __init__(self, user_id, access_token, ip, user_agent, device_id, last_seen):
+    def __init__(
+        self,
+        user_id: str,
+        access_token: str,
+        ip: str,
+        user_agent: str,
+        device_id: str,
+        last_seen: int,
+    ):
         self.user_id = user_id
         self.access_token = access_token
         self.ip = ip
@@ -343,14 +367,14 @@ class UserIpCommand(Command):
         self.last_seen = last_seen

     @classmethod
-    def from_line(cls, line):
+    def from_line(cls: Type["UserIpCommand"], line: str) -> "UserIpCommand":
         user_id, jsn = line.split(" ", 1)

         access_token, ip, user_agent, device_id, last_seen = json_decoder.decode(jsn)

         return cls(user_id, access_token, ip, user_agent, device_id, last_seen)

-    def to_line(self):
+    def to_line(self) -> str:
         return (
             self.user_id
             + " "
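The new `T = TypeVar("T", bound="Command")` (and `SC` for `_SimpleCommand`) turn `from_line` into a properly generic alternate constructor: annotating `cls: Type[T]` and returning `T` lets each subclass's parser be inferred as returning that subclass rather than the base class. The idiom in isolation, with simplified stand-in names:

    import abc
    from typing import Type, TypeVar

    T = TypeVar("T", bound="Cmd")

    class Cmd(metaclass=abc.ABCMeta):  # stand-in for Command
        @classmethod
        @abc.abstractmethod
        def from_line(cls: Type[T], line: str) -> T:
            """Deserialise the wire form into an instance of the subclass."""

    class Ping(Cmd):  # stand-in for PingCommand
        def __init__(self, data: str):
            self.data = data

        @classmethod
        def from_line(cls: Type["Ping"], line: str) -> "Ping":
            return cls(line)

    ping = Ping.from_line("1645000000000")  # inferred as Ping, not Cmd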
@@ -261,7 +261,7 @@ class ReplicationCommandHandler:
             "process-replication-data", self._unsafe_process_queue, stream_name
         )

-    async def _unsafe_process_queue(self, stream_name: str):
+    async def _unsafe_process_queue(self, stream_name: str) -> None:
         """Processes the command queue for the given stream, until it is empty

         Does not check if there is already a thread processing the queue, hence "unsafe"
@@ -294,7 +294,7 @@ class ReplicationCommandHandler:
             # This shouldn't be possible
             raise Exception("Unrecognised command %s in stream queue", cmd.NAME)

-    def start_replication(self, hs: "HomeServer"):
+    def start_replication(self, hs: "HomeServer") -> None:
         """Helper method to start a replication connection to the remote server
         using TCP.
         """
@@ -318,7 +318,7 @@ class ReplicationCommandHandler:
                 hs, outbound_redis_connection
             )
             hs.get_reactor().connectTCP(
-                hs.config.redis.redis_host,  # type: ignore[arg-type]
+                hs.config.redis.redis_host,
                 hs.config.redis.redis_port,
                 self._factory,
                 timeout=30,
@@ -330,7 +330,7 @@ class ReplicationCommandHandler:
             host = hs.config.worker.worker_replication_host
             port = hs.config.worker.worker_replication_port
             hs.get_reactor().connectTCP(
-                host,  # type: ignore[arg-type]
+                host,
                 port,
                 self._factory,
                 timeout=30,
@@ -345,10 +345,10 @@ class ReplicationCommandHandler:
         """Get a list of streams that this instances replicates."""
         return self._streams_to_replicate

-    def on_REPLICATE(self, conn: IReplicationConnection, cmd: ReplicateCommand):
+    def on_REPLICATE(self, conn: IReplicationConnection, cmd: ReplicateCommand) -> None:
         self.send_positions_to_connection(conn)

-    def send_positions_to_connection(self, conn: IReplicationConnection):
+    def send_positions_to_connection(self, conn: IReplicationConnection) -> None:
         """Send current position of all streams this process is source of to
         the connection.
         """
@@ -392,7 +392,7 @@ class ReplicationCommandHandler:

     def on_FEDERATION_ACK(
         self, conn: IReplicationConnection, cmd: FederationAckCommand
-    ):
+    ) -> None:
         federation_ack_counter.inc()

         if self._federation_sender:
@@ -408,7 +408,7 @@ class ReplicationCommandHandler:
         else:
             return None

-    async def _handle_user_ip(self, cmd: UserIpCommand):
+    async def _handle_user_ip(self, cmd: UserIpCommand) -> None:
         await self._store.insert_client_ip(
             cmd.user_id,
             cmd.access_token,
@@ -421,7 +421,7 @@ class ReplicationCommandHandler:
         assert self._server_notices_sender is not None
         await self._server_notices_sender.on_user_ip(cmd.user_id)

-    def on_RDATA(self, conn: IReplicationConnection, cmd: RdataCommand):
+    def on_RDATA(self, conn: IReplicationConnection, cmd: RdataCommand) -> None:
         if cmd.instance_name == self._instance_name:
             # Ignore RDATA that are just our own echoes
             return
@@ -497,7 +497,7 @@ class ReplicationCommandHandler:

     async def on_rdata(
         self, stream_name: str, instance_name: str, token: int, rows: list
-    ):
+    ) -> None:
         """Called to handle a batch of replication data with a given stream token.

         Args:
@@ -512,7 +512,7 @@ class ReplicationCommandHandler:
             stream_name, instance_name, token, rows
         )

-    def on_POSITION(self, conn: IReplicationConnection, cmd: PositionCommand):
+    def on_POSITION(self, conn: IReplicationConnection, cmd: PositionCommand) -> None:
         if cmd.instance_name == self._instance_name:
             # Ignore POSITION that are just our own echoes
             return
@@ -581,7 +581,7 @@ class ReplicationCommandHandler:

     def on_REMOTE_SERVER_UP(
         self, conn: IReplicationConnection, cmd: RemoteServerUpCommand
-    ):
+    ) -> None:
         """Called when get a new REMOTE_SERVER_UP command."""
         self._replication_data_handler.on_remote_server_up(cmd.data)

@@ -604,7 +604,7 @@ class ReplicationCommandHandler:
         # between two instances, but that is not currently supported).
         self.send_command(cmd, ignore_conn=conn)

-    def new_connection(self, connection: IReplicationConnection):
+    def new_connection(self, connection: IReplicationConnection) -> None:
         """Called when we have a new connection."""
         self._connections.append(connection)

@@ -631,7 +631,7 @@ class ReplicationCommandHandler:
                 UserSyncCommand(self._instance_id, user_id, True, now)
             )

-    def lost_connection(self, connection: IReplicationConnection):
+    def lost_connection(self, connection: IReplicationConnection) -> None:
         """Called when a connection is closed/lost."""
         # we no longer need _streams_by_connection for this connection.
         streams = self._streams_by_connection.pop(connection, None)
@@ -653,7 +653,7 @@ class ReplicationCommandHandler:

     def send_command(
         self, cmd: Command, ignore_conn: Optional[IReplicationConnection] = None
-    ):
+    ) -> None:
         """Send a command to all connected connections.

         Args:
@@ -680,7 +680,7 @@ class ReplicationCommandHandler:
         else:
             logger.warning("Dropping command as not connected: %r", cmd.NAME)

-    def send_federation_ack(self, token: int):
+    def send_federation_ack(self, token: int) -> None:
         """Ack data for the federation stream. This allows the master to drop
         data stored purely in memory.
         """
@@ -688,7 +688,7 @@ class ReplicationCommandHandler:

     def send_user_sync(
         self, instance_id: str, user_id: str, is_syncing: bool, last_sync_ms: int
-    ):
+    ) -> None:
         """Poke the master that a user has started/stopped syncing."""
         self.send_command(
             UserSyncCommand(instance_id, user_id, is_syncing, last_sync_ms)
@@ -702,15 +702,15 @@ class ReplicationCommandHandler:
         user_agent: str,
         device_id: str,
         last_seen: int,
-    ):
+    ) -> None:
         """Tell the master that the user made a request."""
         cmd = UserIpCommand(user_id, access_token, ip, user_agent, device_id, last_seen)
         self.send_command(cmd)

-    def send_remote_server_up(self, server: str):
+    def send_remote_server_up(self, server: str) -> None:
         self.send_command(RemoteServerUpCommand(server))

-    def stream_update(self, stream_name: str, token: str, data: Any):
+    def stream_update(self, stream_name: str, token: Optional[int], data: Any) -> None:
         """Called when a new update is available to stream to clients.

         We need to check if the client is interested in the stream or not
@ -49,7 +49,7 @@ import fcntl
|
|||||||
import logging
|
import logging
|
||||||
import struct
|
import struct
|
||||||
from inspect import isawaitable
|
from inspect import isawaitable
|
||||||
from typing import TYPE_CHECKING, Collection, List, Optional
|
from typing import TYPE_CHECKING, Any, Collection, List, Optional
|
||||||
|
|
||||||
from prometheus_client import Counter
|
from prometheus_client import Counter
|
||||||
from zope.interface import Interface, implementer
|
from zope.interface import Interface, implementer
|
||||||
@ -123,7 +123,7 @@ class ConnectionStates:
|
|||||||
class IReplicationConnection(Interface):
|
class IReplicationConnection(Interface):
|
||||||
"""An interface for replication connections."""
|
"""An interface for replication connections."""
|
||||||
|
|
||||||
def send_command(cmd: Command):
|
def send_command(cmd: Command) -> None:
|
||||||
"""Send the command down the connection"""
|
"""Send the command down the connection"""
|
||||||
|
|
||||||
|
|
||||||
@ -190,7 +190,7 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
|
|||||||
"replication-conn", self.conn_id
|
"replication-conn", self.conn_id
|
||||||
)
|
)
|
||||||
|
|
||||||
def connectionMade(self):
|
def connectionMade(self) -> None:
|
||||||
logger.info("[%s] Connection established", self.id())
|
logger.info("[%s] Connection established", self.id())
|
||||||
|
|
||||||
self.state = ConnectionStates.ESTABLISHED
|
self.state = ConnectionStates.ESTABLISHED
|
||||||
@ -207,11 +207,11 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
|
|||||||
|
|
||||||
# Always send the initial PING so that the other side knows that they
|
# Always send the initial PING so that the other side knows that they
|
||||||
# can time us out.
|
# can time us out.
|
||||||
self.send_command(PingCommand(self.clock.time_msec()))
|
self.send_command(PingCommand(str(self.clock.time_msec())))
|
||||||
|
|
||||||
self.command_handler.new_connection(self)
|
self.command_handler.new_connection(self)
|
||||||
|
|
||||||
def send_ping(self):
|
def send_ping(self) -> None:
|
||||||
"""Periodically sends a ping and checks if we should close the connection
|
"""Periodically sends a ping and checks if we should close the connection
|
||||||
due to the other side timing out.
|
due to the other side timing out.
|
||||||
"""
|
"""
|
||||||
@ -226,7 +226,7 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
|
|||||||
self.transport.abortConnection()
|
self.transport.abortConnection()
|
||||||
else:
|
else:
|
||||||
if now - self.last_sent_command >= PING_TIME:
|
if now - self.last_sent_command >= PING_TIME:
|
||||||
self.send_command(PingCommand(now))
|
self.send_command(PingCommand(str(now)))
|
||||||
|
|
||||||
if (
|
if (
|
||||||
self.received_ping
|
self.received_ping
|
||||||
@ -239,12 +239,12 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
|
|||||||
)
|
)
|
||||||
self.send_error("ping timeout")
|
self.send_error("ping timeout")
|
||||||
|
|
||||||
def lineReceived(self, line: bytes):
|
def lineReceived(self, line: bytes) -> None:
|
||||||
"""Called when we've received a line"""
|
"""Called when we've received a line"""
|
||||||
with PreserveLoggingContext(self._logging_context):
|
with PreserveLoggingContext(self._logging_context):
|
||||||
self._parse_and_dispatch_line(line)
|
self._parse_and_dispatch_line(line)
|
||||||
|
|
||||||
def _parse_and_dispatch_line(self, line: bytes):
|
def _parse_and_dispatch_line(self, line: bytes) -> None:
|
||||||
if line.strip() == "":
|
if line.strip() == "":
|
||||||
# Ignore blank lines
|
# Ignore blank lines
|
||||||
return
|
return
|
||||||
@ -309,24 +309,24 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
|
|||||||
if not handled:
|
if not handled:
|
||||||
logger.warning("Unhandled command: %r", cmd)
|
logger.warning("Unhandled command: %r", cmd)
|
||||||
|
|
||||||
def close(self):
|
def close(self) -> None:
|
||||||
logger.warning("[%s] Closing connection", self.id())
|
logger.warning("[%s] Closing connection", self.id())
|
||||||
self.time_we_closed = self.clock.time_msec()
|
self.time_we_closed = self.clock.time_msec()
|
||||||
assert self.transport is not None
|
assert self.transport is not None
|
||||||
self.transport.loseConnection()
|
self.transport.loseConnection()
|
||||||
self.on_connection_closed()
|
self.on_connection_closed()
|
||||||
|
|
||||||
def send_error(self, error_string, *args):
|
def send_error(self, error_string: str, *args: Any) -> None:
|
||||||
"""Send an error to remote and close the connection."""
|
"""Send an error to remote and close the connection."""
|
||||||
self.send_command(ErrorCommand(error_string % args))
|
self.send_command(ErrorCommand(error_string % args))
|
||||||
self.close()
|
self.close()
|
||||||
|
|
||||||
def send_command(self, cmd, do_buffer=True):
|
def send_command(self, cmd: Command, do_buffer: bool = True) -> None:
|
||||||
"""Send a command if connection has been established.
|
"""Send a command if connection has been established.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
cmd (Command)
|
cmd
|
||||||
do_buffer (bool): Whether to buffer the message or always attempt
|
do_buffer: Whether to buffer the message or always attempt
|
||||||
to send the command. This is mostly used to send an error
|
to send the command. This is mostly used to send an error
|
||||||
message if we're about to close the connection due our buffers
|
message if we're about to close the connection due our buffers
|
||||||
becoming full.
|
becoming full.
|
||||||
@ -357,7 +357,7 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
|
|||||||
|
|
||||||
self.last_sent_command = self.clock.time_msec()
|
self.last_sent_command = self.clock.time_msec()
|
||||||
|
|
||||||
def _queue_command(self, cmd):
|
def _queue_command(self, cmd: Command) -> None:
|
||||||
"""Queue the command until the connection is ready to write to again."""
|
"""Queue the command until the connection is ready to write to again."""
|
||||||
logger.debug("[%s] Queueing as conn %r, cmd: %r", self.id(), self.state, cmd)
|
logger.debug("[%s] Queueing as conn %r, cmd: %r", self.id(), self.state, cmd)
|
||||||
self.pending_commands.append(cmd)
|
self.pending_commands.append(cmd)
|
||||||
@ -370,20 +370,20 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
|
|||||||
self.send_command(ErrorCommand("Failed to keep up"), do_buffer=False)
|
self.send_command(ErrorCommand("Failed to keep up"), do_buffer=False)
|
||||||
self.close()
|
self.close()
|
||||||
|
|
||||||
def _send_pending_commands(self):
|
def _send_pending_commands(self) -> None:
|
||||||
"""Send any queued commandes"""
|
"""Send any queued commandes"""
|
||||||
pending = self.pending_commands
|
pending = self.pending_commands
|
||||||
self.pending_commands = []
|
self.pending_commands = []
|
||||||
for cmd in pending:
|
for cmd in pending:
|
||||||
self.send_command(cmd)
|
self.send_command(cmd)
|
||||||
|
|
||||||
def on_PING(self, line):
|
def on_PING(self, cmd: PingCommand) -> None:
|
||||||
self.received_ping = True
|
self.received_ping = True
|
||||||
|
|
||||||
def on_ERROR(self, cmd):
|
def on_ERROR(self, cmd: ErrorCommand) -> None:
|
||||||
logger.error("[%s] Remote reported error: %r", self.id(), cmd.data)
|
logger.error("[%s] Remote reported error: %r", self.id(), cmd.data)
|
||||||
|
|
||||||
def pauseProducing(self):
|
def pauseProducing(self) -> None:
|
||||||
"""This is called when both the kernel send buffer and the twisted
|
"""This is called when both the kernel send buffer and the twisted
|
||||||
tcp connection send buffers have become full.
|
tcp connection send buffers have become full.
|
||||||
|
|
||||||
@ -394,26 +394,26 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
|
|||||||
logger.info("[%s] Pause producing", self.id())
|
logger.info("[%s] Pause producing", self.id())
|
||||||
self.state = ConnectionStates.PAUSED
|
self.state = ConnectionStates.PAUSED
|
||||||
|
|
||||||
def resumeProducing(self):
|
def resumeProducing(self) -> None:
|
||||||
"""The remote has caught up after we started buffering!"""
|
"""The remote has caught up after we started buffering!"""
|
||||||
logger.info("[%s] Resume producing", self.id())
|
logger.info("[%s] Resume producing", self.id())
|
||||||
self.state = ConnectionStates.ESTABLISHED
|
self.state = ConnectionStates.ESTABLISHED
|
||||||
self._send_pending_commands()
|
self._send_pending_commands()
|
||||||
|
|
||||||
def stopProducing(self):
|
def stopProducing(self) -> None:
|
||||||
"""We're never going to send any more data (normally because either
|
"""We're never going to send any more data (normally because either
|
||||||
we or the remote has closed the connection)
|
we or the remote has closed the connection)
|
||||||
"""
|
"""
|
||||||
logger.info("[%s] Stop producing", self.id())
|
logger.info("[%s] Stop producing", self.id())
|
||||||
self.on_connection_closed()
|
self.on_connection_closed()
|
||||||
|
|
||||||
def connectionLost(self, reason):
|
def connectionLost(self, reason: Failure) -> None: # type: ignore[override]
|
||||||
logger.info("[%s] Replication connection closed: %r", self.id(), reason)
|
logger.info("[%s] Replication connection closed: %r", self.id(), reason)
|
||||||
if isinstance(reason, Failure):
|
if isinstance(reason, Failure):
|
||||||
assert reason.type is not None
|
assert reason.type is not None
|
||||||
             connection_close_counter.labels(reason.type.__name__).inc()
         else:
-            connection_close_counter.labels(reason.__class__.__name__).inc()
+            connection_close_counter.labels(reason.__class__.__name__).inc()  # type: ignore[unreachable]

         try:
             # Remove us from list of connections to be monitored
@ -427,7 +427,7 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):

         self.on_connection_closed()

-    def on_connection_closed(self):
+    def on_connection_closed(self) -> None:
         logger.info("[%s] Connection was closed", self.id())

         self.state = ConnectionStates.CLOSED
@ -445,7 +445,7 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
             # the sentinel context is now active, which may not be correct.
             # PreserveLoggingContext() will restore the correct logging context.

-    def __str__(self):
+    def __str__(self) -> str:
         addr = None
         if self.transport:
             addr = str(self.transport.getPeer())
@ -455,10 +455,10 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
             addr,
         )

-    def id(self):
+    def id(self) -> str:
         return "%s-%s" % (self.name, self.conn_id)

-    def lineLengthExceeded(self, line):
+    def lineLengthExceeded(self, line: str) -> None:
         """Called when we receive a line that is above the maximum line length"""
         self.send_error("Line length exceeded")

@ -474,11 +474,11 @@ class ServerReplicationStreamProtocol(BaseReplicationStreamProtocol):

         self.server_name = server_name

-    def connectionMade(self):
+    def connectionMade(self) -> None:
         self.send_command(ServerCommand(self.server_name))
         super().connectionMade()

-    def on_NAME(self, cmd):
+    def on_NAME(self, cmd: NameCommand) -> None:
         logger.info("[%s] Renamed to %r", self.id(), cmd.data)
         self.name = cmd.data

@ -500,19 +500,19 @@ class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol):
         self.client_name = client_name
         self.server_name = server_name

-    def connectionMade(self):
+    def connectionMade(self) -> None:
         self.send_command(NameCommand(self.client_name))
         super().connectionMade()

         # Once we've connected subscribe to the necessary streams
         self.replicate()

-    def on_SERVER(self, cmd):
+    def on_SERVER(self, cmd: ServerCommand) -> None:
         if cmd.data != self.server_name:
             logger.error("[%s] Connected to wrong remote: %r", self.id(), cmd.data)
             self.send_error("Wrong remote")

-    def replicate(self):
+    def replicate(self) -> None:
         """Send the subscription request to the server"""
         logger.info("[%s] Subscribing to replication streams", self.id())

@ -529,7 +529,7 @@ pending_commands = LaterGauge(
 )


-def transport_buffer_size(protocol):
+def transport_buffer_size(protocol: BaseReplicationStreamProtocol) -> int:
     if protocol.transport:
         size = len(protocol.transport.dataBuffer) + protocol.transport._tempDataLen
         return size
@ -544,7 +544,9 @@ transport_send_buffer = LaterGauge(
 )


-def transport_kernel_read_buffer_size(protocol, read=True):
+def transport_kernel_read_buffer_size(
+    protocol: BaseReplicationStreamProtocol, read: bool = True
+) -> int:
     SIOCINQ = 0x541B
     SIOCOUTQ = 0x5411

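Not part of the diff: a minimal, self-contained sketch of the annotation pattern these protocol hunks apply, assuming only Twisted is installed. The class and attribute names below are hypothetical.

```python
from twisted.protocols.basic import LineOnlyReceiver


class ExampleProtocol(LineOnlyReceiver):
    """Hypothetical protocol showing the typing conventions used above."""

    name = "example"
    conn_id = "1"

    def connectionMade(self) -> None:
        # Twisted callbacks return nothing, so overrides gain "-> None".
        super().connectionMade()

    def lineReceived(self, line: bytes) -> None:
        # Payload parameters get explicit types instead of implicit Any.
        pass

    def id(self) -> str:
        # Plain helpers are annotated with their concrete return type.
        return "%s-%s" % (self.name, self.conn_id)
```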
@ -14,7 +14,7 @@

 import logging
 from inspect import isawaitable
-from typing import TYPE_CHECKING, Generic, Optional, Type, TypeVar, cast
+from typing import TYPE_CHECKING, Any, Generic, Optional, Type, TypeVar, cast

 import attr
 import txredisapi
@ -62,7 +62,7 @@ class ConstantProperty(Generic[T, V]):
     def __get__(self, obj: Optional[T], objtype: Optional[Type[T]] = None) -> V:
         return self.constant

-    def __set__(self, obj: Optional[T], value: V):
+    def __set__(self, obj: Optional[T], value: V) -> None:
         pass


@ -95,7 +95,7 @@ class RedisSubscriber(txredisapi.SubscriberProtocol):
     synapse_stream_name: str
     synapse_outbound_redis_connection: txredisapi.RedisProtocol

-    def __init__(self, *args, **kwargs):
+    def __init__(self, *args: Any, **kwargs: Any):
         super().__init__(*args, **kwargs)

         # a logcontext which we use for processing incoming commands. We declare it as a
@ -108,12 +108,12 @@ class RedisSubscriber(txredisapi.SubscriberProtocol):
             "replication_command_handler"
         )

-    def connectionMade(self):
+    def connectionMade(self) -> None:
         logger.info("Connected to redis")
         super().connectionMade()
         run_as_background_process("subscribe-replication", self._send_subscribe)

-    async def _send_subscribe(self):
+    async def _send_subscribe(self) -> None:
         # it's important to make sure that we only send the REPLICATE command once we
         # have successfully subscribed to the stream - otherwise we might miss the
         # POSITION response sent back by the other end.
@ -131,12 +131,12 @@ class RedisSubscriber(txredisapi.SubscriberProtocol):
         # otherside won't know we've connected and so won't issue a REPLICATE.
         self.synapse_handler.send_positions_to_connection(self)

-    def messageReceived(self, pattern: str, channel: str, message: str):
+    def messageReceived(self, pattern: str, channel: str, message: str) -> None:
         """Received a message from redis."""
         with PreserveLoggingContext(self._logging_context):
             self._parse_and_dispatch_message(message)

-    def _parse_and_dispatch_message(self, message: str):
+    def _parse_and_dispatch_message(self, message: str) -> None:
         if message.strip() == "":
             # Ignore blank lines
             return
@ -181,7 +181,7 @@ class RedisSubscriber(txredisapi.SubscriberProtocol):
             "replication-" + cmd.get_logcontext_id(), lambda: res
         )

-    def connectionLost(self, reason):
+    def connectionLost(self, reason: Failure) -> None:  # type: ignore[override]
         logger.info("Lost connection to redis")
         super().connectionLost(reason)
         self.synapse_handler.lost_connection(self)
@ -193,17 +193,17 @@ class RedisSubscriber(txredisapi.SubscriberProtocol):
             # the sentinel context is now active, which may not be correct.
             # PreserveLoggingContext() will restore the correct logging context.

-    def send_command(self, cmd: Command):
+    def send_command(self, cmd: Command) -> None:
         """Send a command if connection has been established.

         Args:
-            cmd (Command)
+            cmd: The command to send
         """
         run_as_background_process(
             "send-cmd", self._async_send_command, cmd, bg_start_span=False
         )

-    async def _async_send_command(self, cmd: Command):
+    async def _async_send_command(self, cmd: Command) -> None:
         """Encode a replication command and send it over our outbound connection"""
         string = "%s %s" % (cmd.NAME, cmd.to_line())
         if "\n" in string:
@ -259,7 +259,7 @@ class SynapseRedisFactory(txredisapi.RedisFactory):
         hs.get_clock().looping_call(self._send_ping, 30 * 1000)

     @wrap_as_background_process("redis_ping")
-    async def _send_ping(self):
+    async def _send_ping(self) -> None:
         for connection in self.pool:
             try:
                 await make_deferred_yieldable(connection.ping())
@ -269,13 +269,13 @@ class SynapseRedisFactory(txredisapi.RedisFactory):
     # ReconnectingClientFactory has some logging (if you enable `self.noisy`), but
     # it's rubbish. We add our own here.

-    def startedConnecting(self, connector: IConnector):
+    def startedConnecting(self, connector: IConnector) -> None:
         logger.info(
             "Connecting to redis server %s", format_address(connector.getDestination())
         )
         super().startedConnecting(connector)

-    def clientConnectionFailed(self, connector: IConnector, reason: Failure):
+    def clientConnectionFailed(self, connector: IConnector, reason: Failure) -> None:
         logger.info(
             "Connection to redis server %s failed: %s",
             format_address(connector.getDestination()),
@ -283,7 +283,7 @@ class SynapseRedisFactory(txredisapi.RedisFactory):
         )
         super().clientConnectionFailed(connector, reason)

-    def clientConnectionLost(self, connector: IConnector, reason: Failure):
+    def clientConnectionLost(self, connector: IConnector, reason: Failure) -> None:
         logger.info(
             "Connection to redis server %s lost: %s",
             format_address(connector.getDestination()),
@ -330,7 +330,7 @@ class RedisDirectTcpReplicationClientFactory(SynapseRedisFactory):

         self.synapse_outbound_redis_connection = outbound_redis_connection

-    def buildProtocol(self, addr):
+    def buildProtocol(self, addr: IAddress) -> RedisSubscriber:
         p = super().buildProtocol(addr)
         p = cast(RedisSubscriber, p)

@ -373,7 +373,7 @@ def lazyConnection(

     reactor = hs.get_reactor()
     reactor.connectTCP(
-        host,  # type: ignore[arg-type]
+        host,
         port,
         factory,
         timeout=30,

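Not from the diff: a sketch of why `connectionLost` carries `# type: ignore[override]`, under the assumption that the superclass declares a broader parameter type than the `Failure` Twisted actually delivers. Class names are made up.

```python
from twisted.python.failure import Failure


class LooselyTypedBase:
    # Stand-in for the txredisapi base class, whose declaration types
    # `reason` more loosely than the Failure that is really passed in.
    def connectionLost(self, reason: object) -> None:
        pass


class Narrowed(LooselyTypedBase):
    def connectionLost(self, reason: Failure) -> None:  # type: ignore[override]
        # Narrowing a parameter type breaks contravariance, so mypy flags the
        # override; the ignore acknowledges the deliberate mismatch.
        super().connectionLost(reason)
```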
@ -16,16 +16,18 @@

 import logging
 import random
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, List, Optional, Tuple

 from prometheus_client import Counter

+from twisted.internet.interfaces import IAddress
 from twisted.internet.protocol import ServerFactory

 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.replication.tcp.commands import PositionCommand
 from synapse.replication.tcp.protocol import ServerReplicationStreamProtocol
 from synapse.replication.tcp.streams import EventsStream
+from synapse.replication.tcp.streams._base import StreamRow, Token
 from synapse.util.metrics import Measure

 if TYPE_CHECKING:
@ -56,7 +58,7 @@ class ReplicationStreamProtocolFactory(ServerFactory):
         # listener config again or always starting a `ReplicationStreamer`.)
         hs.get_replication_streamer()

-    def buildProtocol(self, addr):
+    def buildProtocol(self, addr: IAddress) -> ServerReplicationStreamProtocol:
         return ServerReplicationStreamProtocol(
             self.server_name, self.clock, self.command_handler
         )
@ -105,7 +107,7 @@ class ReplicationStreamer:
         if any(EventsStream.NAME == s.NAME for s in self.streams):
             self.clock.looping_call(self.on_notifier_poke, 1000)

-    def on_notifier_poke(self):
+    def on_notifier_poke(self) -> None:
         """Checks if there is actually any new data and sends it to the
         connections if there are.

@ -137,7 +139,7 @@ class ReplicationStreamer:

         run_as_background_process("replication_notifier", self._run_notifier_loop)

-    async def _run_notifier_loop(self):
+    async def _run_notifier_loop(self) -> None:
         self.is_looping = True

         try:
@ -238,7 +240,9 @@ class ReplicationStreamer:
             self.is_looping = False


-def _batch_updates(updates):
+def _batch_updates(
+    updates: List[Tuple[Token, StreamRow]]
+) -> List[Tuple[Optional[Token], StreamRow]]:
     """Takes a list of updates of form [(token, row)] and sets the token to
     None for all rows where the next row has the same token. This is used to
     implement batching.
@ -254,7 +258,7 @@ def _batch_updates(updates):
     if not updates:
         return []

-    new_updates = []
+    new_updates: List[Tuple[Optional[Token], StreamRow]] = []
     for i, update in enumerate(updates[:-1]):
         if update[0] == updates[i + 1][0]:
             new_updates.append((None, update[1]))

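Not part of the diff: a self-contained sketch of what the newly annotated `_batch_updates` does, with the `Token`/`StreamRow` aliases simplified to concrete types and the tail of the loop completed from the docstring (the diff only shows its first branch). Values are illustrative.

```python
from typing import List, Optional, Tuple

Token = int                  # simplified stand-in for the real alias
StreamRow = Tuple[str, ...]  # simplified stand-in for the real alias


def _batch_updates(
    updates: List[Tuple[Token, StreamRow]]
) -> List[Tuple[Optional[Token], StreamRow]]:
    """Set the token to None for every row whose successor shares its token."""
    if not updates:
        return []

    new_updates: List[Tuple[Optional[Token], StreamRow]] = []
    for i, update in enumerate(updates[:-1]):
        if update[0] == updates[i + 1][0]:
            new_updates.append((None, update[1]))
        else:
            new_updates.append(update)
    new_updates.append(updates[-1])
    return new_updates


# Rows 1 and 2 share token 7, so only the last of the pair keeps its token:
assert _batch_updates([(7, ("a",)), (7, ("b",)), (8, ("c",))]) == [
    (None, ("a",)),
    (7, ("b",)),
    (8, ("c",)),
]
```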
@ -90,7 +90,7 @@ class Stream:
     ROW_TYPE: Any = None

     @classmethod
-    def parse_row(cls, row: StreamRow):
+    def parse_row(cls, row: StreamRow) -> Any:
         """Parse a row received over replication

         By default, assumes that the row data is an array object and passes its contents
@ -139,7 +139,7 @@ class Stream:
         # The token from which we last asked for updates
         self.last_token = self.current_token(self.local_instance_name)

-    def discard_updates_and_advance(self):
+    def discard_updates_and_advance(self) -> None:
         """Called when the stream should advance but the updates would be discarded,
         e.g. when there are no currently connected workers.
         """
@ -200,7 +200,7 @@ def current_token_without_instance(
     return lambda instance_name: current_token()


-def make_http_update_function(hs, stream_name: str) -> UpdateFunction:
+def make_http_update_function(hs: "HomeServer", stream_name: str) -> UpdateFunction:
     """Makes a suitable function for use as an `update_function` that queries
     the master process for updates.
     """

@ -13,12 +13,16 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import heapq
-from collections.abc import Iterable
-from typing import TYPE_CHECKING, Optional, Tuple, Type
+from typing import TYPE_CHECKING, Iterable, Optional, Tuple, Type, TypeVar, cast

 import attr

-from ._base import Stream, StreamUpdateResult, Token
+from synapse.replication.tcp.streams._base import (
+    Stream,
+    StreamRow,
+    StreamUpdateResult,
+    Token,
+)

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@ -58,6 +62,9 @@ class EventsStreamRow:
     data: "BaseEventsStreamRow"


+T = TypeVar("T", bound="BaseEventsStreamRow")
+
+
 class BaseEventsStreamRow:
     """Base class for rows to be sent in the events stream.

@ -68,7 +75,7 @@ class BaseEventsStreamRow:
     TypeId: str

     @classmethod
-    def from_data(cls, data):
+    def from_data(cls: Type[T], data: Iterable[Optional[str]]) -> T:
         """Parse the data from the replication stream into a row.

         By default we just call the constructor with the data list as arguments
@ -221,7 +228,7 @@ class EventsStream(Stream):
         return updates, upper_limit, limited

     @classmethod
-    def parse_row(cls, row):
-        (typ, data) = row
-        data = TypeToRow[typ].from_data(data)
-        return EventsStreamRow(typ, data)
+    def parse_row(cls, row: StreamRow) -> "EventsStreamRow":
+        (typ, data) = cast(Tuple[str, Iterable[Optional[str]]], row)
+        event_stream_row_data = TypeToRow[typ].from_data(data)
+        return EventsStreamRow(typ, event_stream_row_data)

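Not from the diff: a sketch of the `TypeVar` pattern introduced for `from_data` above. Binding `T` to the base class makes the classmethod return the caller's concrete subclass type; the row classes here are hypothetical.

```python
from typing import Iterable, Optional, Type, TypeVar

T = TypeVar("T", bound="BaseRow")


class BaseRow:
    @classmethod
    def from_data(cls: Type[T], data: Iterable[Optional[str]]) -> T:
        # Calling through `cls` means subclasses get instances of themselves.
        return cls(*data)


class EventRow(BaseRow):
    def __init__(self, event_id: Optional[str]) -> None:
        self.event_id = event_id


# The checker sees EventRow here, not BaseRow:
row: EventRow = EventRow.from_data(["$abc:example.org"])
assert row.event_id == "$abc:example.org"
```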
@ -20,7 +20,8 @@ import platform
 from http import HTTPStatus
 from typing import TYPE_CHECKING, Optional, Tuple

-import synapse
+from matrix_common.versionstring import get_distribution_version_string

 from synapse.api.errors import Codes, NotFoundError, SynapseError
 from synapse.http.server import HttpServer, JsonResource
 from synapse.http.servlet import RestServlet, parse_json_object_from_request
@ -88,7 +89,6 @@ from synapse.rest.admin.users import (
     WhoisRestServlet,
 )
 from synapse.types import JsonDict, RoomStreamToken
-from synapse.util.versionstring import get_version_string

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@ -101,7 +101,7 @@ class VersionServlet(RestServlet):

     def __init__(self, hs: "HomeServer"):
         self.res = {
-            "server_version": get_version_string(synapse),
+            "server_version": get_distribution_version_string("matrix-synapse"),
             "python_version": platform.python_version(),
         }

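Not part of the diff: usage of the replacement helper. `get_distribution_version_string` reads the version of an installed distribution (here the `matrix-synapse` package name taken from the hunk), so the servlet no longer needs to import the `synapse` module itself.

```python
from matrix_common.versionstring import get_distribution_version_string

# Reports the installed package version, e.g. "1.53.0" on this release.
print(get_distribution_version_string("matrix-synapse"))
```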
@ -385,7 +385,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
         send_attempt = body["send_attempt"]
         next_link = body.get("next_link")  # Optional param

-        if not check_3pid_allowed(self.hs, "email", email):
+        if not await check_3pid_allowed(self.hs, "email", email):
             raise SynapseError(
                 403,
                 "Your email domain is not authorized on this server",
@ -468,7 +468,7 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):

         msisdn = phone_number_to_msisdn(country, phone_number)

-        if not check_3pid_allowed(self.hs, "msisdn", msisdn):
+        if not await check_3pid_allowed(self.hs, "msisdn", msisdn):
             raise SynapseError(
                 403,
                 "Account phone numbers are not authorized on this server",

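Not from the diff: a minimal sketch of the sync-to-async migration these call sites follow. `check_3pid_allowed` became a coroutine in this release, so each caller gains an `await`; the permissive body below is a stand-in, not Synapse's real policy check.

```python
import asyncio


async def check_3pid_allowed(medium: str, address: str) -> bool:
    # Stand-in: the real helper consults homeserver config (and can now make
    # an asynchronous lookup), hence the coroutine.
    return medium == "email"


async def request_token(medium: str, address: str) -> str:
    if not await check_3pid_allowed(medium, address):
        raise PermissionError("Your email domain is not authorized on this server")
    return "token-request-accepted"


assert asyncio.run(request_token("email", "user@example.org")) == "token-request-accepted"
```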
@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Tuple

 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, MSC3244_CAPABILITIES
@ -54,6 +55,15 @@ class CapabilitiesRestServlet(RestServlet):
                     },
                 },
                 "m.change_password": {"enabled": change_password},
+                "m.set_displayname": {
+                    "enabled": self.config.registration.enable_set_displayname
+                },
+                "m.set_avatar_url": {
+                    "enabled": self.config.registration.enable_set_avatar_url
+                },
+                "m.3pid_changes": {
+                    "enabled": self.config.registration.enable_3pid_changes
+                },
             }
         }

@ -62,6 +72,9 @@ class CapabilitiesRestServlet(RestServlet):
                 "org.matrix.msc3244.room_capabilities"
             ] = MSC3244_CAPABILITIES

+        # Must be removed in later versions.
+        # Is only included for migration.
+        # Also the parts in `synapse/config/experimental.py`.
         if self.config.experimental.msc3283_enabled:
             response["capabilities"]["org.matrix.msc3283.set_displayname"] = {
                 "enabled": self.config.registration.enable_set_displayname
@ -76,7 +89,7 @@ class CapabilitiesRestServlet(RestServlet):
         if self.config.experimental.msc3440_enabled:
             response["capabilities"]["io.element.thread"] = {"enabled": True}

-        return 200, response
+        return HTTPStatus.OK, response


 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:

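Not part of the diff: the shape of the stable capability entries the hunk adds, as a client would see them. The boolean values depend on homeserver config; `True` here is illustrative.

```python
# GET /_matrix/client/v3/capabilities might now include (values illustrative):
capabilities_response = {
    "capabilities": {
        "m.change_password": {"enabled": True},
        "m.set_displayname": {"enabled": True},
        "m.set_avatar_url": {"enabled": True},
        "m.3pid_changes": {"enabled": True},
    }
}
```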
@ -29,7 +29,7 @@ from synapse.http.servlet import (
     parse_string,
 )
 from synapse.http.site import SynapseRequest
-from synapse.push.baserules import BASE_RULE_IDS, NEW_RULE_IDS
+from synapse.push.baserules import BASE_RULE_IDS
 from synapse.push.clientformat import format_push_rules_for_user
 from synapse.push.rulekinds import PRIORITY_CLASS_MAP
 from synapse.rest.client._base import client_patterns
@ -61,10 +61,6 @@ class PushRuleRestServlet(RestServlet):
         self.notifier = hs.get_notifier()
         self._is_worker = hs.config.worker.worker_app is not None

-        self._users_new_default_push_rules = (
-            hs.config.server.users_new_default_push_rules
-        )
-
     async def on_PUT(self, request: SynapseRequest, path: str) -> Tuple[int, JsonDict]:
         if self._is_worker:
             raise Exception("Cannot handle PUT /push_rules on worker")
@ -217,12 +213,7 @@ class PushRuleRestServlet(RestServlet):
             rule_id = spec.rule_id
             is_default_rule = rule_id.startswith(".")
             if is_default_rule:
-                if user_id in self._users_new_default_push_rules:
-                    rule_ids = NEW_RULE_IDS
-                else:
-                    rule_ids = BASE_RULE_IDS
-
-                if namespaced_rule_id not in rule_ids:
+                if namespaced_rule_id not in BASE_RULE_IDS:
                     raise SynapseError(404, "Unknown rule %r" % (namespaced_rule_id,))
             await self.store.set_push_rule_actions(
                 user_id, namespaced_rule_id, actions, is_default_rule

@ -112,7 +112,7 @@ class EmailRegisterRequestTokenRestServlet(RestServlet):
         send_attempt = body["send_attempt"]
         next_link = body.get("next_link")  # Optional param

-        if not check_3pid_allowed(self.hs, "email", email):
+        if not await check_3pid_allowed(self.hs, "email", email, registration=True):
             raise SynapseError(
                 403,
                 "Your email domain is not authorized to register on this server",
@ -192,7 +192,7 @@ class MsisdnRegisterRequestTokenRestServlet(RestServlet):

         msisdn = phone_number_to_msisdn(country, phone_number)

-        if not check_3pid_allowed(self.hs, "msisdn", msisdn):
+        if not await check_3pid_allowed(self.hs, "msisdn", msisdn, registration=True):
             raise SynapseError(
                 403,
                 "Phone numbers are not authorized to register on this server",
@ -368,7 +368,7 @@ class RegistrationTokenValidityRestServlet(RestServlet):

     Example:

-        GET /_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity?token=abcd
+        GET /_matrix/client/v1/register/m.login.registration_token/validity?token=abcd

         200 OK

@ -378,9 +378,8 @@ class RegistrationTokenValidityRestServlet(RestServlet):
     """

     PATTERNS = client_patterns(
-        f"/org.matrix.msc3231/register/{LoginType.REGISTRATION_TOKEN}/validity",
-        releases=(),
-        unstable=True,
+        f"/register/{LoginType.REGISTRATION_TOKEN}/validity",
+        releases=("v1",),
     )

     def __init__(self, hs: "HomeServer"):
@ -617,7 +616,9 @@ class RegisterRestServlet(RestServlet):
                     medium = auth_result[login_type]["medium"]
                     address = auth_result[login_type]["address"]

-                    if not check_3pid_allowed(self.hs, medium, address):
+                    if not await check_3pid_allowed(
+                        self.hs, medium, address, registration=True
+                    ):
                         raise SynapseError(
                             403,
                             "Third party identifiers (email/phone numbers)"

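Not from the diff: what the servlet's URL change above means in practice. The registration-token validity check moved from its MSC3231 unstable prefix to the stable v1 path; a hypothetical client-side helper might build the new URL like this.

```python
import urllib.parse

BASE = "https://matrix.example.org"  # hypothetical homeserver


def validity_url(token: str) -> str:
    # Stable path as of this release; the org.matrix.msc3231 unstable prefix
    # shown on the removed lines is gone.
    return "%s/_matrix/client/v1/register/m.login.registration_token/validity?%s" % (
        BASE,
        urllib.parse.urlencode({"token": token}),
    )


assert validity_url("abcd").endswith("validity?token=abcd")
```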
@ -32,14 +32,45 @@ from synapse.storage.relations import (
     PaginationChunk,
     RelationPaginationToken,
 )
-from synapse.types import JsonDict
+from synapse.types import JsonDict, RoomStreamToken, StreamToken

 if TYPE_CHECKING:
     from synapse.server import HomeServer
+    from synapse.storage.databases.main import DataStore

 logger = logging.getLogger(__name__)


+async def _parse_token(
+    store: "DataStore", token: Optional[str]
+) -> Optional[StreamToken]:
+    """
+    For backwards compatibility support RelationPaginationToken, but new pagination
+    tokens are generated as full StreamTokens, to be compatible with /sync and /messages.
+    """
+    if not token:
+        return None
+    # Luckily the format for StreamToken and RelationPaginationToken differ enough
+    # that they can easily be separated. An "_" appears in the serialization of
+    # RoomStreamToken (as part of StreamToken), but RelationPaginationToken uses
+    # "-" only for separators.
+    if "_" in token:
+        return await StreamToken.from_string(store, token)
+    else:
+        relation_token = RelationPaginationToken.from_string(token)
+        return StreamToken(
+            room_key=RoomStreamToken(relation_token.topological, relation_token.stream),
+            presence_key=0,
+            typing_key=0,
+            receipt_key=0,
+            account_data_key=0,
+            push_rules_key=0,
+            to_device_key=0,
+            device_list_key=0,
+            groups_key=0,
+        )
+
+
 class RelationPaginationServlet(RestServlet):
     """API to paginate relations on an event by topological ordering, optionally
     filtered by relation type and event type.
@ -80,6 +111,9 @@ class RelationPaginationServlet(RestServlet):
             raise SynapseError(404, "Unknown parent event.")

         limit = parse_integer(request, "limit", default=5)
+        direction = parse_string(
+            request, "org.matrix.msc3715.dir", default="b", allowed_values=["f", "b"]
+        )
         from_token_str = parse_string(request, "from")
         to_token_str = parse_string(request, "to")

@ -88,13 +122,8 @@ class RelationPaginationServlet(RestServlet):
             pagination_chunk = PaginationChunk(chunk=[])
         else:
             # Return the relations
-            from_token = None
-            if from_token_str:
-                from_token = RelationPaginationToken.from_string(from_token_str)
-
-            to_token = None
-            if to_token_str:
-                to_token = RelationPaginationToken.from_string(to_token_str)
+            from_token = await _parse_token(self.store, from_token_str)
+            to_token = await _parse_token(self.store, to_token_str)

             pagination_chunk = await self.store.get_relations_for_event(
                 event_id=parent_id,
@ -102,6 +131,7 @@ class RelationPaginationServlet(RestServlet):
                 relation_type=relation_type,
                 event_type=event_type,
                 limit=limit,
+                direction=direction,
                 from_token=from_token,
                 to_token=to_token,
             )
@ -125,7 +155,7 @@ class RelationPaginationServlet(RestServlet):
             events, now, bundle_aggregations=aggregations
         )

-        return_value = pagination_chunk.to_dict()
+        return_value = await pagination_chunk.to_dict(self.store)
         return_value["chunk"] = serialized_events
         return_value["original_event"] = original_event

@ -216,7 +246,7 @@ class RelationAggregationPaginationServlet(RestServlet):
             to_token=to_token,
         )

-        return 200, pagination_chunk.to_dict()
+        return 200, await pagination_chunk.to_dict(self.store)


 class RelationAggregationGroupPaginationServlet(RestServlet):
@ -287,13 +317,8 @@ class RelationAggregationGroupPaginationServlet(RestServlet):
         from_token_str = parse_string(request, "from")
         to_token_str = parse_string(request, "to")

-        from_token = None
-        if from_token_str:
-            from_token = RelationPaginationToken.from_string(from_token_str)
-
-        to_token = None
-        if to_token_str:
-            to_token = RelationPaginationToken.from_string(to_token_str)
+        from_token = await _parse_token(self.store, from_token_str)
+        to_token = await _parse_token(self.store, to_token_str)

         result = await self.store.get_relations_for_event(
             event_id=parent_id,
@ -313,7 +338,7 @@ class RelationAggregationGroupPaginationServlet(RestServlet):
         now = self.clock.time_msec()
         serialized_events = self._event_serializer.serialize_events(events, now)

-        return_value = result.to_dict()
+        return_value = await result.to_dict(self.store)
         return_value["chunk"] = serialized_events

         return 200, return_value

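Not part of the diff: a sketch of the serialization heuristic `_parse_token` relies on. Serialized `StreamToken`s contain an `_` (inside their `RoomStreamToken` component) while legacy `RelationPaginationToken`s only use `-`, so a single character test distinguishes them. The token strings below are made-up examples of each shape, not real tokens.

```python
def is_stream_token(serialized: str) -> bool:
    # "_" only ever appears in the StreamToken serialization.
    return "_" in serialized


assert is_stream_token("s2633508_17_338_6732159_1_541479_274711_265584_1")
assert not is_stream_token("1450-2")  # legacy RelationPaginationToken shape
```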
@ -131,6 +131,14 @@ class RoomBatchSendEventRestServlet(RestServlet):
             prev_event_ids_from_query
         )

+        if not auth_event_ids:
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "No auth events found for given prev_event query parameter. The prev_event=%s probably does not exist."
+                % prev_event_ids_from_query,
+                errcode=Codes.INVALID_PARAM,
+            )
+
         state_event_ids_at_start = []
         # Create and persist all of the state events that float off on their own
         # before the batch. These will most likely be all of the invite/member
@ -197,21 +205,12 @@ class RoomBatchSendEventRestServlet(RestServlet):
                 EventContentFields.MSC2716_NEXT_BATCH_ID
             ]

-            # Also connect the historical event chain to the end of the floating
-            # state chain, which causes the HS to ask for the state at the start of
-            # the batch later. If there is no state chain to connect to, just make
-            # the insertion event float itself.
-            prev_event_ids = []
-            if len(state_event_ids_at_start):
-                prev_event_ids = [state_event_ids_at_start[-1]]
-
         # Create and persist all of the historical events as well as insertion
         # and batch meta events to make the batch navigable in the DAG.
         event_ids, next_batch_id = await self.room_batch_handler.handle_batch_of_events(
             events_to_create=events_to_create,
             room_id=room_id,
             batch_id_to_connect_to=batch_id_to_connect_to,
-            initial_prev_event_ids=prev_event_ids,
             inherited_depth=inherited_depth,
             auth_event_ids=auth_event_ids,
             app_service_requester=requester,

@ -403,6 +403,7 @@ class PreviewUrlResource(DirectServeJsonResource):
                 output_stream=output_stream,
                 max_size=self.max_spider_size,
                 headers={"Accept-Language": self.url_preview_accept_language},
+                is_allowed_content_type=_is_previewable,
             )
         except SynapseError:
             # Pass SynapseErrors through directly, so that the servlet
@ -761,3 +762,10 @@ def _is_html(content_type: str) -> bool:

 def _is_json(content_type: str) -> bool:
     return content_type.lower().startswith("application/json")
+
+
+def _is_previewable(content_type: str) -> bool:
+    """Returns True for content types for which we will perform URL preview and False
+    otherwise."""
+
+    return _is_html(content_type) or _is_media(content_type) or _is_json(content_type)

@ -49,10 +49,14 @@ class UploadResource(DirectServeJsonResource):

     async def _async_render_POST(self, request: SynapseRequest) -> None:
         requester = await self.auth.get_user_by_req(request)
-        content_length = request.getHeader("Content-Length")
-        if content_length is None:
+        raw_content_length = request.getHeader("Content-Length")
+        if raw_content_length is None:
             raise SynapseError(msg="Request must specify a Content-Length", code=400)
-        if int(content_length) > self.max_upload_size:
+        try:
+            content_length = int(raw_content_length)
+        except ValueError:
+            raise SynapseError(msg="Content-Length value is invalid", code=400)
+        if content_length > self.max_upload_size:
             raise SynapseError(
                 msg="Upload request body is too large",
                 code=413,
@ -66,7 +70,8 @@ class UploadResource(DirectServeJsonResource):
             upload_name: Optional[str] = upload_name_bytes.decode("utf8")
         except UnicodeDecodeError:
             raise SynapseError(
-                msg="Invalid UTF-8 filename parameter: %r" % (upload_name), code=400
+                msg="Invalid UTF-8 filename parameter: %r" % (upload_name_bytes,),
+                code=400,
             )

         # If the name is falsey (e.g. an empty byte string) ensure it is None.

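Not from the diff: a self-contained sketch of the hardened header handling above. Converting the raw `Content-Length` inside `try/except` means a malformed header now produces a clean 400 instead of an unhandled `ValueError`; the helper, exception class, and limit below are hypothetical stand-ins for the servlet code.

```python
from typing import Optional

MAX_UPLOAD_SIZE = 50 * 1024 * 1024  # hypothetical limit


class RequestError(Exception):
    def __init__(self, msg: str, code: int) -> None:
        super().__init__(msg)
        self.code = code


def parse_content_length(raw_content_length: Optional[str]) -> int:
    if raw_content_length is None:
        raise RequestError("Request must specify a Content-Length", 400)
    try:
        content_length = int(raw_content_length)
    except ValueError:
        raise RequestError("Content-Length value is invalid", 400)
    if content_length > MAX_UPLOAD_SIZE:
        raise RequestError("Upload request body is too large", 413)
    return content_length


assert parse_content_length("1024") == 1024
```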
@ -233,8 +233,8 @@ class HomeServer(metaclass=abc.ABCMeta):
         self,
         hostname: str,
         config: HomeServerConfig,
-        reactor=None,
-        version_string="Synapse",
+        reactor: Optional[ISynapseReactor] = None,
+        version_string: str = "Synapse",
     ):
         """
         Args:
@ -244,7 +244,7 @@ class HomeServer(metaclass=abc.ABCMeta):
         if not reactor:
             from twisted.internet import reactor as _reactor

-            reactor = _reactor
+            reactor = cast(ISynapseReactor, _reactor)

         self._reactor = reactor
         self.hostname = hostname
@ -264,7 +264,7 @@ class HomeServer(metaclass=abc.ABCMeta):
         self._module_web_resources: Dict[str, Resource] = {}
         self._module_web_resources_consumed = False

-    def register_module_web_resource(self, path: str, resource: Resource):
+    def register_module_web_resource(self, path: str, resource: Resource) -> None:
         """Allows a module to register a web resource to be served at the given path.

         If multiple modules register a resource for the same path, the module that

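Not part of the diff: a sketch of the `cast` applied in `HomeServer.__init__` above. The global `twisted.internet.reactor` is an untyped module object, so it is cast once to a richer reactor interface; `IReactorTime` stands in here for Synapse's `ISynapseReactor`.

```python
from typing import cast

from twisted.internet import reactor as _reactor
from twisted.internet.interfaces import IReactorTime

# The module object satisfies the interface at runtime; the cast just tells
# the type checker so.
reactor = cast(IReactorTime, _reactor)
reactor.callLater(0, lambda: None)
```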
Some files were not shown because too many files have changed in this diff.