Mirror of https://git.anonymousland.org/anonymousland/synapse-product.git (synced 2024-12-12 02:54:21 -05:00)

Revert "Revert accidental fast-forward merge from v1.49.0rc1"

This reverts commit 158d73ebdd.

This commit is contained in:
parent 158d73ebdd
commit 4dd9ea8f4f
.github/workflows/tests.yml (2 changes, vendored)
@@ -374,7 +374,7 @@ jobs:
        working-directory: complement/dockerfiles

      # Run Complement
-      - run: go test -v -tags synapse_blacklist,msc2403,msc2946,msc3083 ./tests/...
+      - run: go test -v -tags synapse_blacklist,msc2403 ./tests/...
        env:
          COMPLEMENT_BASE_IMAGE: complement-synapse:latest
        working-directory: complement
CHANGES.md (93 changes)
@@ -1,3 +1,96 @@
+Synapse 1.49.0rc1 (2021-12-07)
+==============================
+
+We've decided to move the existing, somewhat stagnant pages from the GitHub wiki
+to the [documentation website](https://matrix-org.github.io/synapse/latest/).
+
+This was done for two reasons. The first was to ensure that changes are checked by
+multiple authors before being committed (everyone makes mistakes!) and the second
+was visibility of the documentation. Not everyone knows that Synapse has some very
+useful information hidden away in its GitHub wiki pages. Bringing them to the
+documentation website should help with visibility, as well as keep all Synapse documentation
+in one, easily-searchable location.
+
+Note that contributions to the documentation website happen through [GitHub pull
+requests](https://github.com/matrix-org/synapse/pulls). Please visit [#synapse-dev:matrix.org](https://matrix.to/#/#synapse-dev:matrix.org)
+if you need help with the process!
+
+
+Features
+--------
+
+- Add [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) experimental client and federation API endpoints to get the closest event to a given timestamp. ([\#9445](https://github.com/matrix-org/synapse/issues/9445))
+- Include bundled relation aggregations during a limited `/sync` request and `/relations` request, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675). ([\#11284](https://github.com/matrix-org/synapse/issues/11284), [\#11478](https://github.com/matrix-org/synapse/issues/11478))
+- Add plugin support for controlling database background updates. ([\#11306](https://github.com/matrix-org/synapse/issues/11306), [\#11475](https://github.com/matrix-org/synapse/issues/11475), [\#11479](https://github.com/matrix-org/synapse/issues/11479))
+- Support the stable API endpoints for [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946): the room `/hierarchy` endpoint. ([\#11329](https://github.com/matrix-org/synapse/issues/11329))
+- Add admin API to get some information about federation status with remote servers. ([\#11407](https://github.com/matrix-org/synapse/issues/11407))
+- Support expiry of refresh tokens and expiry of the overall session when refresh tokens are in use. ([\#11425](https://github.com/matrix-org/synapse/issues/11425))
+- Stabilise support for [MSC2918](https://github.com/matrix-org/matrix-doc/blob/main/proposals/2918-refreshtokens.md#msc2918-refresh-tokens) refresh tokens as they have now been merged into the Matrix specification. ([\#11435](https://github.com/matrix-org/synapse/issues/11435), [\#11522](https://github.com/matrix-org/synapse/issues/11522))
+- Update [MSC2918 refresh token](https://github.com/matrix-org/matrix-doc/blob/main/proposals/2918-refreshtokens.md#msc2918-refresh-tokens) support to conform with the latest revision: accept the `refresh_tokens` parameter in the request body rather than in the URL parameters. ([\#11430](https://github.com/matrix-org/synapse/issues/11430))
+- Support configuring the lifetime of non-refreshable access tokens separately to refreshable access tokens. ([\#11445](https://github.com/matrix-org/synapse/issues/11445))
+- Expose `synapse_homeserver` and `synapse_worker` commands as entry points to run Synapse's main process and worker processes, respectively. Contributed by @Ma27. ([\#11449](https://github.com/matrix-org/synapse/issues/11449))
+- `synctl stop` will now wait for Synapse to exit before returning. ([\#11459](https://github.com/matrix-org/synapse/issues/11459), [\#11490](https://github.com/matrix-org/synapse/issues/11490))
+- Extend the "delete room" admin API to work correctly on rooms which have previously been partially deleted. ([\#11523](https://github.com/matrix-org/synapse/issues/11523))
+- Add support for the `/_matrix/client/v3/login/sso/redirect/{idpId}` API from Matrix v1.1. This endpoint was overlooked when support for v3 endpoints was added in Synapse 1.48.0rc1. ([\#11451](https://github.com/matrix-org/synapse/issues/11451))
+
+
+Bugfixes
+--------
+
+- Fix using [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) batch sending in combination with event persistence workers. Contributed by @tulir at Beeper. ([\#11220](https://github.com/matrix-org/synapse/issues/11220))
+- Fix a long-standing bug where all requests that read events from the database could get stuck as a result of losing the database connection, properly this time. Also fix a race condition introduced in the previous insufficient fix in Synapse 1.47.0. ([\#11376](https://github.com/matrix-org/synapse/issues/11376))
+- The `/send_join` response now includes the stable `event` field instead of the unstable field from [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083). ([\#11413](https://github.com/matrix-org/synapse/issues/11413))
+- Fix a bug introduced in Synapse 1.47.0 where `send_join` could fail due to an outdated `ijson` version. ([\#11439](https://github.com/matrix-org/synapse/issues/11439), [\#11441](https://github.com/matrix-org/synapse/issues/11441), [\#11460](https://github.com/matrix-org/synapse/issues/11460))
+- Fix a bug introduced in Synapse 1.36.0 which could cause problems fetching event-signing keys from trusted key servers. ([\#11440](https://github.com/matrix-org/synapse/issues/11440))
+- Fix a bug introduced in Synapse 1.47.1 where the media repository would fail to work if the media store path contained any symbolic links. ([\#11446](https://github.com/matrix-org/synapse/issues/11446))
+- Fix an `LruCache` corruption bug, introduced in Synapse 1.38.0, that would cause certain requests to fail until the next Synapse restart. ([\#11454](https://github.com/matrix-org/synapse/issues/11454))
+- Fix a long-standing bug where invites from ignored users were included in incremental syncs. ([\#11511](https://github.com/matrix-org/synapse/issues/11511))
+- Fix a regression in Synapse 1.48.0 where presence workers would not clear their presence updates over replication on shutdown. ([\#11518](https://github.com/matrix-org/synapse/issues/11518))
+- Fix a regression in Synapse 1.48.0 where the module API's `looping_background_call` method would spam errors to the logs when given a non-async function. ([\#11524](https://github.com/matrix-org/synapse/issues/11524))
+
+
+Updates to the Docker image
+---------------------------
+
+- Update `Dockerfile-workers` to healthcheck all workers in the container. ([\#11429](https://github.com/matrix-org/synapse/issues/11429))
+
+
+Improved Documentation
+----------------------
+
+- Update the media repository documentation. ([\#11415](https://github.com/matrix-org/synapse/issues/11415))
+- Update section about backward extremities in the room DAG concepts doc to correct the misconception about backward extremities indicating whether we have fetched an event's `prev_events`. ([\#11469](https://github.com/matrix-org/synapse/issues/11469))
+
+
+Internal Changes
+----------------
+
+- Add `Final` annotation to string constants in `synapse.api.constants` so that they get typed as `Literal`s. ([\#11356](https://github.com/matrix-org/synapse/issues/11356))
+- Add a check to ensure that users cannot start the Synapse master process when `worker_app` is set. ([\#11416](https://github.com/matrix-org/synapse/issues/11416))
+- Add a note about postgres memory management and hugepages to postgres doc. ([\#11467](https://github.com/matrix-org/synapse/issues/11467))
+- Add missing type hints to `synapse.config` module. ([\#11465](https://github.com/matrix-org/synapse/issues/11465))
+- Add missing type hints to `synapse.federation`. ([\#11483](https://github.com/matrix-org/synapse/issues/11483))
+- Add type annotations to `tests.storage.test_appservice`. ([\#11488](https://github.com/matrix-org/synapse/issues/11488), [\#11492](https://github.com/matrix-org/synapse/issues/11492))
+- Add type annotations to some of the configuration surrounding refresh tokens. ([\#11428](https://github.com/matrix-org/synapse/issues/11428))
+- Add type hints to `synapse/tests/rest/admin`. ([\#11501](https://github.com/matrix-org/synapse/issues/11501))
+- Add type hints to storage classes. ([\#11411](https://github.com/matrix-org/synapse/issues/11411))
+- Add wiki pages to documentation website. ([\#11402](https://github.com/matrix-org/synapse/issues/11402))
+- Clean up `tests.storage.test_main` to remove use of legacy code. ([\#11493](https://github.com/matrix-org/synapse/issues/11493))
+- Clean up `tests.test_visibility` to remove legacy code. ([\#11495](https://github.com/matrix-org/synapse/issues/11495))
+- Convert status codes to `HTTPStatus` in `synapse.rest.admin`. ([\#11452](https://github.com/matrix-org/synapse/issues/11452), [\#11455](https://github.com/matrix-org/synapse/issues/11455))
+- Extend the `scripts-dev/sign_json` script to support signing events. ([\#11486](https://github.com/matrix-org/synapse/issues/11486))
+- Improve internal types in push code. ([\#11409](https://github.com/matrix-org/synapse/issues/11409))
+- Improve type annotations in `synapse.module_api`. ([\#11029](https://github.com/matrix-org/synapse/issues/11029))
+- Improve type hints for `LruCache`. ([\#11453](https://github.com/matrix-org/synapse/issues/11453))
+- Preparation for database schema simplifications: disambiguate queries on `state_key`. ([\#11497](https://github.com/matrix-org/synapse/issues/11497))
+- Refactor `backfilled` into specific behavior function arguments (`_persist_events_and_state_updates` and downstream calls). ([\#11417](https://github.com/matrix-org/synapse/issues/11417))
+- Refactor `get_version_string` to fix-up types and duplicated code. ([\#11468](https://github.com/matrix-org/synapse/issues/11468))
+- Refactor various parts of the `/sync` handler. ([\#11494](https://github.com/matrix-org/synapse/issues/11494), [\#11515](https://github.com/matrix-org/synapse/issues/11515))
+- Remove unnecessary `json.dumps` from `tests.rest.admin`. ([\#11461](https://github.com/matrix-org/synapse/issues/11461))
+- Save the OpenID Connect session ID on login. ([\#11482](https://github.com/matrix-org/synapse/issues/11482))
+- Update and clean up recently ported documentation pages. ([\#11466](https://github.com/matrix-org/synapse/issues/11466))
+
+
Synapse 1.48.0 (2021-11-30)
===========================

debian/changelog (6 changes, vendored)
@@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.49.0~rc1) stable; urgency=medium
+
+  * New synapse release 1.49.0~rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 07 Dec 2021 13:52:21 +0000
+
matrix-synapse-py3 (1.48.0) stable; urgency=medium

  * New synapse release 1.48.0.
docker/Dockerfile-workers
@@ -21,3 +21,6 @@ VOLUME ["/data"]
# files to run the desired worker configuration. Will start supervisord.
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]
+
+HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
+    CMD /bin/sh /healthcheck.sh
docker/conf-workers/healthcheck.sh.j2 (new file, 6 lines)
@@ -0,0 +1,6 @@
#!/bin/sh
# This healthcheck script is designed to return OK when every
# host involved returns OK
{%- for healthcheck_url in healthcheck_urls %}
curl -fSs {{ healthcheck_url }} || exit 1
{%- endfor %}
docker/configure_workers_and_start.py
@@ -474,10 +474,16 @@ def generate_worker_files(environ, config_path: str, data_dir: str):

    # Determine the load-balancing upstreams to configure
    nginx_upstream_config = ""

+    # At the same time, prepare a list of internal endpoints to healthcheck
+    # starting with the main process which exists even if no workers do.
+    healthcheck_urls = ["http://localhost:8080/health"]
+
    for upstream_worker_type, upstream_worker_ports in nginx_upstreams.items():
        body = ""
        for port in upstream_worker_ports:
            body += "    server localhost:%d;\n" % (port,)
+            healthcheck_urls.append("http://localhost:%d/health" % (port,))

        # Add to the list of configured upstreams
        nginx_upstream_config += NGINX_UPSTREAM_CONFIG_BLOCK.format(

@@ -510,6 +516,13 @@ def generate_worker_files(environ, config_path: str, data_dir: str):
        worker_config=supervisord_config,
    )

+    # healthcheck config
+    convert(
+        "/conf/healthcheck.sh.j2",
+        "/healthcheck.sh",
+        healthcheck_urls=healthcheck_urls,
+    )
+
    # Ensure the logging directory exists
    log_dir = data_dir + "/logs"
    if not os.path.exists(log_dir):
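For illustration, the rendering step performed by the `convert("/conf/healthcheck.sh.j2", "/healthcheck.sh", ...)` call above can be sketched with the `jinja2` package directly; Synapse's own `convert` helper is not shown here, and the worker port below is invented:

```python
from jinja2 import Template

# Template text copied from docker/conf-workers/healthcheck.sh.j2 above.
HEALTHCHECK_TEMPLATE = """\
#!/bin/sh
# This healthcheck script is designed to return OK when every
# host involved returns OK
{%- for healthcheck_url in healthcheck_urls %}
curl -fSs {{ healthcheck_url }} || exit 1
{%- endfor %}
"""

healthcheck_urls = [
    "http://localhost:8080/health",   # main process, as in the code above
    "http://localhost:18081/health",  # hypothetical worker port
]

# Each URL becomes its own `curl -fSs <url> || exit 1` line, so the container
# healthcheck fails as soon as any one process stops responding.
print(Template(HEALTHCHECK_TEMPLATE).render(healthcheck_urls=healthcheck_urls))
```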
docs/SUMMARY.md
@@ -44,6 +44,7 @@
  - [Presence router callbacks](modules/presence_router_callbacks.md)
  - [Account validity callbacks](modules/account_validity_callbacks.md)
  - [Password auth provider callbacks](modules/password_auth_provider_callbacks.md)
+  - [Background update controller callbacks](modules/background_update_controller_callbacks.md)
  - [Porting a legacy module to the new interface](modules/porting_legacy_module.md)
- [Workers](workers.md)
  - [Using `synctl` with Workers](synctl_workers.md)

@@ -64,9 +65,15 @@
  - [Statistics](admin_api/statistics.md)
  - [Users](admin_api/user_admin_api.md)
  - [Server Version](admin_api/version_api.md)
+  - [Federation](usage/administration/admin_api/federation.md)
- [Manhole](manhole.md)
- [Monitoring](metrics-howto.md)
+  - [Understanding Synapse Through Grafana Graphs](usage/administration/understanding_synapse_through_grafana_graphs.md)
+  - [Useful SQL for Admins](usage/administration/useful_sql_for_admins.md)
+  - [Database Maintenance Tools](usage/administration/database_maintenance_tools.md)
+  - [State Groups](usage/administration/state_groups.md)
- [Request log format](usage/administration/request_log.md)
+  - [Admin FAQ](usage/administration/admin_faq.md)
- [Scripts]()

# Development

@@ -94,3 +101,4 @@

# Other
- [Dependency Deprecation Policy](deprecation_policy.md)
+- [Running Synapse on a Single-Board Computer](other/running_synapse_on_single_board_computers.md)
docs/development/room-dag-concepts.md
@@ -38,16 +38,15 @@ Most-recent-in-time events in the DAG which are not referenced by any other events.
The forward extremities of a room are used as the `prev_events` when the next event is sent.


-## Backwards extremity
+## Backward extremity

The current marker of where we have backfilled up to and will generally be the
-oldest-in-time events we know of in the DAG.
-
-This is an event where we haven't fetched all of the `prev_events` for.
-
-Once we have fetched all of its `prev_events`, it's unmarked as a backwards
-extremity (although we may have formed new backwards extremities from the prev
-events during the backfilling process).
+`prev_events` of the oldest-in-time events we have in the DAG. This gives a starting point when
+backfilling history.
+
+When we persist a non-outlier event, we clear it as a backward extremity and set
+all of its `prev_events` as the new backward extremities if they aren't already
+persisted in the `events` table.


## Outliers

@@ -56,8 +55,7 @@ We mark an event as an `outlier` when we haven't figured out the state for the
room at that point in the DAG yet.

We won't *necessarily* have the `prev_events` of an `outlier` in the database,
-but it's entirely possible that we *might*. The status of whether we have all of
-the `prev_events` is marked as a [backwards extremity](#backwards-extremity).
+but it's entirely possible that we *might*.

For example, when we fetch the event auth chain or state for a given event, we
mark all of those claimed auth events as outliers because we haven't done the
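The backward-extremity bookkeeping described in the rewritten section above can be illustrated with a small sketch; this is a toy model in plain Python, not Synapse's actual persistence code:

```python
from dataclasses import dataclass

@dataclass
class Event:
    event_id: str
    prev_events: list

def persist_non_outlier_event(event: Event, persisted: set, backward_extremities: set) -> None:
    # Persisting a non-outlier event clears it as a backward extremity...
    backward_extremities.discard(event.event_id)
    # ...and marks each prev_event not already in the events table as a new
    # backward extremity, per the doc text above.
    for prev_id in event.prev_events:
        if prev_id not in persisted:
            backward_extremities.add(prev_id)
    persisted.add(event.event_id)

persisted: set = set()
extremities: set = set()
persist_non_outlier_event(Event("$B", ["$A"]), persisted, extremities)
assert extremities == {"$A"}   # $A is not yet persisted, so it becomes an extremity
persist_non_outlier_event(Event("$A", []), persisted, extremities)
assert extremities == set()    # backfilling $A clears it again
```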
docs/media_repository.md
@@ -2,29 +2,80 @@

*Synapse implementation-specific details for the media repository*

-The media repository is where attachments and avatar photos are stored.
-It stores attachment content and thumbnails for media uploaded by local users.
-It caches attachment content and thumbnails for media uploaded by remote users.
+The media repository
+* stores avatars, attachments and their thumbnails for media uploaded by local
+  users.
+* caches avatars, attachments and their thumbnails for media uploaded by remote
+  users.
+* caches resources and thumbnails used for
+  [URL previews](development/url_previews.md).

-## Storage
+All media in Matrix can be identified by a unique
+[MXC URI](https://spec.matrix.org/latest/client-server-api/#matrix-content-mxc-uris),
+consisting of a server name and media ID:
+```
+mxc://<server-name>/<media-id>
+```

-Each item of media is assigned a `media_id` when it is uploaded.
-The `media_id` is a randomly chosen, URL safe 24 character string.
+## Local Media
+Synapse generates 24 character media IDs for content uploaded by local users.
+These media IDs consist of upper and lowercase letters and are case-sensitive.
+Other homeserver implementations may generate media IDs differently.

-Metadata such as the MIME type, upload time and length are stored in the
-sqlite3 database indexed by `media_id`.
+Local media is recorded in the `local_media_repository` table, which includes
+metadata such as MIME types, upload times and file sizes.
+Note that this table is shared by the URL cache, which has a different media ID
+scheme.

-Content is stored on the filesystem under a `"local_content"` directory.
-
-Thumbnails are stored under a `"local_thumbnails"` directory.
-
-The item with `media_id` `"aabbccccccccdddddddddddd"` is stored under
-`"local_content/aa/bb/ccccccccdddddddddddd"`. Its thumbnail with width
-`128` and height `96` and type `"image/jpeg"` is stored under
-`"local_thumbnails/aa/bb/ccccccccdddddddddddd/128-96-image-jpeg"`
+### Paths
+A file with media ID `aabbcccccccccccccccccccc` and its `128x96` `image/jpeg`
+thumbnail, created by scaling, would be stored at:
+```
+local_content/aa/bb/cccccccccccccccccccc
+local_thumbnails/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale
+```

-Remote content is cached under `"remote_content"` directory. Each item of
-remote content is assigned a local `"filesystem_id"` to ensure that the
-directory structure `"remote_content/server_name/aa/bb/ccccccccdddddddddddd"`
-is appropriate. Thumbnails for remote content are stored under
-`"remote_thumbnail/server_name/..."`
+## Remote Media
+When media from a remote homeserver is requested from Synapse, it is assigned
+a local `filesystem_id`, with the same format as locally-generated media IDs,
+as described above.
+
+A record of remote media is stored in the `remote_media_cache` table, which
+can be used to map remote MXC URIs (server names and media IDs) to local
+`filesystem_id`s.
+
+### Paths
+A file from `matrix.org` with `filesystem_id` `aabbcccccccccccccccccccc` and its
+`128x96` `image/jpeg` thumbnail, created by scaling, would be stored at:
+```
+remote_content/matrix.org/aa/bb/cccccccccccccccccccc
+remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale
+```
+Older thumbnails may omit the thumbnailing method:
+```
+remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg
+```
+
+Note that `remote_thumbnail/` does not have an `s`.
+
+## URL Previews
+See [URL Previews](development/url_previews.md) for documentation on the URL preview
+process.
+
+When generating previews for URLs, Synapse may download and cache various
+resources, including images. These resources are assigned temporary media IDs
+of the form `yyyy-mm-dd_aaaaaaaaaaaaaaaa`, where `yyyy-mm-dd` is the current
+date and `aaaaaaaaaaaaaaaa` is a random sequence of 16 case-sensitive letters.
+
+The metadata for these cached resources is stored in the
+`local_media_repository` and `local_media_repository_url_cache` tables.
+
+Resources for URL previews are deleted after a few days.
+
+### Paths
+The file with media ID `yyyy-mm-dd_aaaaaaaaaaaaaaaa` and its `128x96`
+`image/jpeg` thumbnail, created by scaling, would be stored at:
+```
+url_cache/yyyy-mm-dd/aaaaaaaaaaaaaaaa
+url_cache_thumbnails/yyyy-mm-dd/aaaaaaaaaaaaaaaa/128-96-image-jpeg-scale
+```
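To make the path scheme above concrete, here is a small illustrative sketch (not Synapse's actual `MediaFilePaths` implementation) that derives the storage paths for a local media ID following the layout documented above:

```python
def local_media_paths(media_id: str, width: int, height: int,
                      content_type: str, method: str = "scale"):
    """Split the 24-character media ID into aa/bb/<rest> directory levels,
    as described in the Paths sections above."""
    prefix = f"{media_id[0:2]}/{media_id[2:4]}/{media_id[4:]}"
    thumb = f"{width}-{height}-{content_type.replace('/', '-')}-{method}"
    return (
        f"local_content/{prefix}",
        f"local_thumbnails/{prefix}/{thumb}",
    )

print(local_media_paths("aabbcccccccccccccccccccc", 128, 96, "image/jpeg"))
# ('local_content/aa/bb/cccccccccccccccccccc',
#  'local_thumbnails/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale')
```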
docs/modules/background_update_controller_callbacks.md (new file, 71 lines)
@@ -0,0 +1,71 @@
# Background update controller callbacks

Background update controller callbacks allow module developers to control (e.g. rate-limit)
how database background updates are run. A database background update is an operation
Synapse runs on its database in the background after it starts. It's usually used to run
database operations that would take too long if they were run at the same time as schema
updates (which are run on startup) and delay Synapse's startup too much: populating a
table with a big amount of data, adding an index on a big table, deleting superfluous data,
etc.

Background update controller callbacks can be registered using the module API's
`register_background_update_controller_callbacks` method. Only the first module (in order
of appearance in Synapse's configuration file) calling this method can register background
update controller callbacks; subsequent calls are ignored.

The available background update controller callbacks are:

### `on_update`

_First introduced in Synapse v1.49.0_

```python
def on_update(update_name: str, database_name: str, one_shot: bool) -> AsyncContextManager[int]
```

Called when about to do an iteration of a background update. The module is given the name
of the update, the name of the database, and a flag to indicate whether the background
update will happen in one go and may take a long time (e.g. creating indices). If this last
argument is set to `False`, the update will be run in batches.

The module must return an async context manager. It will be entered before Synapse runs a
background update; this should return the desired duration of the iteration, in
milliseconds.

The context manager will be exited when the iteration completes. Note that the duration
returned by the context manager is a target, and an iteration may take substantially longer
or shorter. If the `one_shot` flag is set to `True`, the duration returned is ignored.

__Note__: Unlike most module callbacks in Synapse, this one is _synchronous_. This is
because asynchronous operations are expected to be run by the async context manager.

This callback is required when registering any other background update controller callback.

### `default_batch_size`

_First introduced in Synapse v1.49.0_

```python
async def default_batch_size(update_name: str, database_name: str) -> int
```

Called before the first iteration of a background update, with the name of the update and
of the database. The module must return the number of elements to process in this first
iteration.

If this callback is not defined, Synapse will use a default value of 100.

### `min_batch_size`

_First introduced in Synapse v1.49.0_

```python
async def min_batch_size(update_name: str, database_name: str) -> int
```

Called before running a new batch for a background update, with the name of the update and
of the database. The module must return an integer representing the minimum number of
elements to process in this iteration. This number must be at least 1, and is used to
ensure that progress is always made.

If this callback is not defined, Synapse will use a default value of 100.
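Taken together, a minimal sketch of a module wiring up these three callbacks might look like the following; the registration method and callback signatures come from the text above, while the batching policy (100 ms iterations, batch sizes of 50 and 10) is invented for illustration:

```python
import contextlib
from typing import AsyncContextManager

class RateLimitedBackgroundUpdates:
    """Illustrative module that throttles background updates using the
    controller callbacks documented above."""

    def __init__(self, config: dict, api):
        # Only the first module to call this registers the controller callbacks.
        api.register_background_update_controller_callbacks(
            on_update=self.on_update,
            default_batch_size=self.default_batch_size,
            min_batch_size=self.min_batch_size,
        )

    def on_update(
        self, update_name: str, database_name: str, one_shot: bool
    ) -> AsyncContextManager[int]:
        # Synchronous callback returning an async context manager; the value
        # it yields is the target iteration duration in milliseconds.
        @contextlib.asynccontextmanager
        async def controller():
            yield 100  # aim for short iterations; ignored when one_shot=True

        return controller()

    async def default_batch_size(self, update_name: str, database_name: str) -> int:
        return 50  # elements to process in the first iteration (default: 100)

    async def min_batch_size(self, update_name: str, database_name: str) -> int:
        return 10  # never process fewer than 10 elements per batch
```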
docs/modules/index.md
@@ -71,15 +71,15 @@ Modules **must** register their web resources in their `__init__` method.
## Registering a callback

Modules can use Synapse's module API to register callbacks. Callbacks are functions that
-Synapse will call when performing specific actions. Callbacks must be asynchronous, and
-are split in categories. A single module may implement callbacks from multiple categories,
-and is under no obligation to implement all callbacks from the categories it registers
-callbacks for.
+Synapse will call when performing specific actions. Callbacks must be asynchronous (unless
+specified otherwise), and are split in categories. A single module may implement callbacks
+from multiple categories, and is under no obligation to implement all callbacks from the
+categories it registers callbacks for.

Modules can register callbacks using one of the module API's `register_[...]_callbacks`
methods. The callback functions are passed to these methods as keyword arguments, with
-the callback name as the argument name and the function as its value. This is demonstrated
-in the example below. A `register_[...]_callbacks` method exists for each category.
+the callback name as the argument name and the function as its value. A
+`register_[...]_callbacks` method exists for each category.

Callbacks for each category can be found on their respective page of the
[Synapse documentation website](https://matrix-org.github.io/synapse).
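As a concrete illustration of the keyword-argument pattern described above, here is a hedged sketch using the spam-checker category's `user_may_invite` callback; the blocking rule itself is hypothetical:

```python
class ExampleModule:
    def __init__(self, config: dict, api):
        # Keyword name = callback name, value = the function implementing it.
        api.register_spam_checker_callbacks(
            user_may_invite=self.user_may_invite,
        )

    async def user_may_invite(self, inviter: str, invitee: str, room_id: str) -> bool:
        # Hypothetical policy: refuse invites sent from one specific server.
        return not inviter.endswith(":spam.example.com")
```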
docs/openid.md
@@ -83,7 +83,7 @@ oidc_providers:

### Dex

-[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
+[Dex][dex-idp] is a simple, open-source OpenID Connect Provider.
Although it is designed to help building a full-blown provider with an
external database, it can be configured with static passwords in a config file.

@@ -523,7 +523,7 @@ The synapse config will look like this:
  email_template: "{{ user.email }}"
```

-## Django OAuth Toolkit
+### Django OAuth Toolkit

[django-oauth-toolkit](https://github.com/jazzband/django-oauth-toolkit) is a
Django application providing out of the box all the endpoints, data and logic
docs/other/running_synapse_on_single_board_computers.md (new file, 74 lines)
@@ -0,0 +1,74 @@
## Summary of performance impact of running on resource-constrained devices such as SBCs

I've been running my homeserver on a cubietruck at home now for some time and am often replying to statements like "you need loads of ram to join large rooms" with "it works fine for me". I thought it might be useful to curate a summary of the issues you're likely to run into to help as a scaling-down guide, maybe highlight these for development work or end up as documentation. It seems that once you get up to about 4x1.5GHz arm64 4GiB these issues are no longer a problem.

- **Platform**: 2x1GHz armhf 2GiB ram [Single-board computers](https://wiki.debian.org/CheapServerBoxHardware), SSD, postgres.

### Presence

This is the main reason people have a poor matrix experience on resource-constrained homeservers. Element web will frequently be saying the server is offline while the python process will be pegged at 100% cpu. This feature is used to tell when other users are active (have a client app in the foreground) and therefore more likely to respond, but requires a lot of network activity to maintain even when nobody is talking in a room.

![Screenshot_2020-10-01_19-29-46](https://user-images.githubusercontent.com/71895/94848963-a47a3580-041c-11eb-8b6e-acb772b4259e.png)

While Synapse does have some performance issues with presence [#3971](https://github.com/matrix-org/synapse/issues/3971), the fundamental problem is that this is an easy feature to implement for a centralised service at nearly no overhead, but federation makes it combinatorial [#8055](https://github.com/matrix-org/synapse/issues/8055). There is also a client-side config option which disables the UI and idle tracking [enable_presence_by_hs_url] to blacklist the largest instances but I didn't notice much difference, so I recommend disabling the feature entirely at the server level as well.

[enable_presence_by_hs_url]: https://github.com/vector-im/element-web/blob/v1.7.8/config.sample.json#L45

### Joining

Joining a "large", federated room will initially fail with the below message in Element web, but waiting a while (10-60mins) and trying again will succeed without any issue. What counts as "large" is not message history, user count, connections to homeservers or even a simple count of the state events, it is instead how long the state resolution algorithm takes. However, each of those numbers are reasonable proxies, so we can use them as estimates since user count is one of the few things you see before joining.

![Screenshot_2020-10-02_17-15-06](https://user-images.githubusercontent.com/71895/94945781-18771500-04d3-11eb-8419-83c2da73a341.png)

This is [#1211](https://github.com/matrix-org/synapse/issues/1211) and will also hopefully be mitigated by peeking [matrix-org/matrix-doc#2753](https://github.com/matrix-org/matrix-doc/pull/2753) so at least you don't need to wait for a join to complete before finding out if it's the kind of room you want. Note that you should first disable presence, otherwise it'll just make the situation worse [#3120](https://github.com/matrix-org/synapse/issues/3120). There is a lot of database interaction too, so make sure you've [migrated your data](../postgres.md) from the default sqlite to postgresql. Personally, I recommend patience - once the initial join is complete there's rarely any issues with actually interacting with the room, but if you like you can just block "large" rooms entirely.

### Sessions

Anything that requires modifying the device list [#7721](https://github.com/matrix-org/synapse/issues/7721) will take a while to propagate, again taking the client "Offline" until it's complete. This includes signing in and out, editing the public name and verifying e2ee. The main mitigation I recommend is to keep long-running sessions open e.g. by using Firefox SSB "Use this site in App mode" or Chromium PWA "Install Element".

### Recommended configuration

Put the below in a new file at `/etc/matrix-synapse/conf.d/sbc.yaml` to override the defaults in `homeserver.yaml`.

```
# Set to false to disable presence tracking on this homeserver.
use_presence: false

# When this is enabled, the room "complexity" will be checked before a user
# joins a new remote room. If it is above the complexity limit, the server will
# disallow joining, or will instantly leave.
limit_remote_rooms:
  # Uncomment to enable room complexity checking.
  #enabled: true
  complexity: 3.0

# Database configuration
database:
  name: psycopg2
  args:
    user: matrix-synapse
    # Generate a long, secure one with a password manager
    password: hunter2
    database: matrix-synapse
    host: localhost
    cp_min: 5
    cp_max: 10
```

Currently the complexity is measured by [current_state_events / 500](https://github.com/matrix-org/synapse/blob/v1.20.1/synapse/storage/databases/main/events_worker.py#L986). You can find join times and your most complex rooms like this:

```
admin@homeserver:~$ zgrep '/client/r0/join/' /var/log/matrix-synapse/homeserver.log* | awk '{print $18, $25}' | sort --human-numeric-sort
29.922sec/-0.002sec /_matrix/client/r0/join/%23debian-fasttrack%3Apoddery.com
182.088sec/0.003sec /_matrix/client/r0/join/%23decentralizedweb-general%3Amatrix.org
911.625sec/-570.847sec /_matrix/client/r0/join/%23synapse%3Amatrix.org

admin@homeserver:~$ sudo --user postgres psql matrix-synapse --command 'select canonical_alias, joined_members, current_state_events from room_stats_state natural join room_stats_current where canonical_alias is not null order by current_state_events desc fetch first 5 rows only'
        canonical_alias        | joined_members | current_state_events
-------------------------------+----------------+----------------------
 #_oftc_#debian:matrix.org     |            871 |                52355
 #matrix:matrix.org            |           6379 |                10684
 #irc:matrix.org               |            461 |                 3751
 #decentralizedweb-general:matrix.org | 997 | 1509
 #whatsapp:maunium.net         |            554 |                  854
```
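Since the complexity measure cited above is simply `current_state_events / 500`, it is easy to check which of the rooms from the query output would pass a `complexity: 3.0` limit; a back-of-the-envelope sketch, not Synapse's code:

```python
def room_complexity(current_state_events: int) -> float:
    # The v1.20-era measure cited above: state events divided by 500.
    return current_state_events / 500

assert room_complexity(52355) > 3.0  # #_oftc_#debian:matrix.org would be blocked
assert room_complexity(854) < 3.0    # #whatsapp:maunium.net would be allowed
```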
docs/postgres.md
@@ -118,6 +118,9 @@ performance:
Note that the appropriate values for those fields depend on the amount
of free memory the database host has available.

+Additionally, admins of large deployments might want to consider using huge pages
+to help manage memory, especially when using large values of `shared_buffers`. You
+can read more about that [here](https://www.postgresql.org/docs/10/kernel-resources.html#LINUX-HUGE-PAGES).
+
## Porting from SQLite

docs/sample_config.yaml
@@ -1209,6 +1209,44 @@ oembed:
#
#session_lifetime: 24h

+# Time that an access token remains valid for, if the session is
+# using refresh tokens.
+# For more information about refresh tokens, please see the manual.
+# Note that this only applies to clients which advertise support for
+# refresh tokens.
+#
+# Note also that this is calculated at login time and refresh time:
+# changes are not applied to existing sessions until they are refreshed.
+#
+# By default, this is 5 minutes.
+#
+#refreshable_access_token_lifetime: 5m
+
+# Time that a refresh token remains valid for (provided that it is not
+# exchanged for another one first).
+# This option can be used to automatically log out inactive sessions.
+# Please see the manual for more information.
+#
+# Note also that this is calculated at login time and refresh time:
+# changes are not applied to existing sessions until they are refreshed.
+#
+# By default, this is infinite.
+#
+#refresh_token_lifetime: 24h
+
+# Time that an access token remains valid for, if the session is NOT
+# using refresh tokens.
+# Please note that not all clients support refresh tokens, so setting
+# this to a short value may be inconvenient for some users who will
+# then be logged out frequently.
+#
+# Note also that this is calculated at login time: changes are not applied
+# retrospectively to existing sessions for users that have already logged in.
+#
+# By default, this is infinite.
+#
+#nonrefreshable_access_token_lifetime: 24h
+
# The user must provide all of the below types of 3PID when registering.
#
#registrations_require_3pid:
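To illustrate how the three lifetimes above interact, here is a hypothetical sketch; the policy mirrors the comments above, none of this is Synapse code, and the values are the documented examples:

```python
from datetime import datetime, timedelta

REFRESHABLE_ACCESS_TOKEN_LIFETIME = timedelta(minutes=5)    # refreshable_access_token_lifetime: 5m
REFRESH_TOKEN_LIFETIME = timedelta(hours=24)                # refresh_token_lifetime: 24h
NONREFRESHABLE_ACCESS_TOKEN_LIFETIME = timedelta(hours=24)  # nonrefreshable_access_token_lifetime: 24h

def token_expiries(now: datetime, client_supports_refresh: bool) -> dict:
    # Lifetimes are applied at login/refresh time, not retroactively,
    # as the comments above note.
    if client_supports_refresh:
        return {
            "access_token_expiry": now + REFRESHABLE_ACCESS_TOKEN_LIFETIME,
            "refresh_token_expiry": now + REFRESH_TOKEN_LIFETIME,
        }
    return {"access_token_expiry": now + NONREFRESHABLE_ACCESS_TOKEN_LIFETIME}

print(token_expiries(datetime.now(), client_supports_refresh=True))
```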
docs/templates.md
@@ -71,7 +71,12 @@ Below are the templates Synapse will look for when generating the content of an email:
* `sender_avatar_url`: the avatar URL (as a `mxc://` URL) for the event's
  sender
* `sender_hash`: a hash of the user ID of the sender
+* `msgtype`: the type of the message
+* `body_text_html`: HTML representation of the message
+* `body_text_plain`: plaintext representation of the message
+* `image_url`: MXC URL of an image, when `msgtype` is `m.image`
* `link`: a `matrix.to` link to the room
+* `avatar_url`: URL to the room's avatar
* `reason`: information on the event that triggered the email to be sent. It's an
  object with the following attributes:
  * `room_id`: the ID of the room the event was sent in
docs/usage/administration/admin_api/federation.md (new file, 114 lines)
@@ -0,0 +1,114 @@
# Federation API

This API allows a server administrator to manage Synapse's federation with other homeservers.

Note: This API is new, experimental and "subject to change".

## List of destinations

This API gets the current destination retry timing info for all remote servers.

The list contains all the servers with which the server federates,
regardless of whether an error occurred or not.
If an error occurs, it may take up to 20 minutes for the error to be displayed here,
as a complete retry must have failed.

The API is:

A standard request with no filtering:

```
GET /_synapse/admin/v1/federation/destinations
```

A response body like the following is returned:

```json
{
   "destinations":[
      {
         "destination": "matrix.org",
         "retry_last_ts": 1557332397936,
         "retry_interval": 3000000,
         "failure_ts": 1557329397936,
         "last_successful_stream_ordering": null
      }
   ],
   "total": 1
}
```

To paginate, check for `next_token` and if present, call the endpoint again
with `from` set to the value of `next_token`. This will return a new page.

If the endpoint does not return a `next_token` then there are no more destinations
to paginate through.

**Parameters**

The following query parameters are available:

- `from` - Offset in the returned list. Defaults to `0`.
- `limit` - Maximum amount of destinations to return. Defaults to `100`.
- `order_by` - The method in which to sort the returned list of destinations.
  Valid values are:
  - `destination` - Destinations are ordered alphabetically by remote server name.
    This is the default.
  - `retry_last_ts` - Destinations are ordered by time of last retry attempt in ms.
  - `retry_interval` - Destinations are ordered by how long until next retry in ms.
  - `failure_ts` - Destinations are ordered by when the server started failing in ms.
  - `last_successful_stream_ordering` - Destinations are ordered by the stream ordering
    of the most recent successfully-sent PDU.
- `dir` - Direction of the sort order. Either `f` for forwards or `b` for backwards. Setting
  this value to `b` will reverse the above sort order. Defaults to `f`.

*Caution:* The database only has an index on the column `destination`.
This means that if a different sort order is used,
this can cause a large load on the database, especially for large environments.

**Response**

The following fields are returned in the JSON response body:

- `destinations` - An array of objects, each containing information about a destination.
  Destination objects contain the following fields:
  - `destination` - string - Name of the remote server to federate.
  - `retry_last_ts` - integer - The last time Synapse tried and failed to reach the
    remote server, in ms. This is `0` if the last attempt to communicate with the
    remote server was successful.
  - `retry_interval` - integer - How long since the last time Synapse tried to reach
    the remote server before trying again, in ms. This is `0` if no further retrying is occurring.
  - `failure_ts` - nullable integer - The first time Synapse tried and failed to reach the
    remote server, in ms. This is `null` if communication with the remote server has never failed.
  - `last_successful_stream_ordering` - nullable integer - The stream ordering of the most
    recent successfully-sent [PDU](understanding_synapse_through_grafana_graphs.md#federation)
    to this destination, or `null` if this information has not been tracked yet.
- `next_token`: string representing a positive integer - Indication for pagination. See above.
- `total` - integer - Total number of destinations.

# Destination Details API

This API gets the retry timing info for a specific remote server.

The API is:

```
GET /_synapse/admin/v1/federation/destinations/<destination>
```

A response body like the following is returned:

```json
{
   "destination": "matrix.org",
   "retry_last_ts": 1557332397936,
   "retry_interval": 3000000,
   "failure_ts": 1557329397936,
   "last_successful_stream_ordering": null
}
```

**Response**

The response fields are the same as in the `destinations` array in the
[List of destinations](#list-of-destinations) response.
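For illustration, the pagination contract described above can be driven from Python with the `requests` package; this is a sketch, and the base URL and token are placeholders:

```python
import requests

BASE = "http://localhost:8008"   # placeholder homeserver address
TOKEN = "<admin access token>"   # placeholder admin access token

def list_destinations():
    """Follow `next_token` until it disappears, as described above."""
    from_ = 0
    while True:
        resp = requests.get(
            f"{BASE}/_synapse/admin/v1/federation/destinations",
            params={"from": from_, "limit": 100, "order_by": "destination"},
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        resp.raise_for_status()
        body = resp.json()
        yield from body["destinations"]
        if "next_token" not in body:
            break  # no more pages
        from_ = int(body["next_token"])

for dest in list_destinations():
    # retry_last_ts == 0 means the last attempt succeeded (see field list above)
    print(dest["destination"], dest["retry_last_ts"])
```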
docs/usage/administration/admin_faq.md (new file, 103 lines)
@@ -0,0 +1,103 @@
## Admin FAQ

How do I become a server admin?
---
If your server already has an admin account you should use the user admin API to promote other accounts to become admins. See [User Admin API](../../admin_api/user_admin_api.md#Change-whether-a-user-is-a-server-administrator-or-not)

If you don't have any admin accounts yet you won't be able to use the admin API, so you'll have to edit the database manually. Manually editing the database is generally not recommended, so once you have an admin account, use the admin APIs to make further changes.

```sql
UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
```

What servers is my server talking to?
---
Run this SQL query on your database:
```sql
SELECT * FROM destinations;
```

What servers are currently participating in this room?
---
Run this SQL query on your database:
```sql
SELECT DISTINCT split_part(state_key, ':', 2)
FROM current_state_events AS c
INNER JOIN room_memberships AS m USING (room_id, event_id)
WHERE room_id = '!cURbafjkfsMDVwdRDQ:matrix.org' AND membership = 'join';
```

What users are registered on my server?
---
```sql
SELECT NAME from users;
```

Manually resetting passwords
---
See https://github.com/matrix-org/synapse/blob/master/README.rst#password-reset

I have a problem with my server. Can I just delete my database and start again?
---
Deleting your database is unlikely to make anything better.

It's easy to make the mistake of thinking that you can start again from a clean slate by dropping your database, but things don't work like that in a federated network: lots of other servers have information about your server.

For example: other servers might think that you are in a room, your server will think that you are not, and you'll probably be unable to interact with that room in a sensible way ever again.

In general, there are better solutions to any problem than dropping the database. Come and seek help in https://matrix.to/#/#synapse:matrix.org.

There are two exceptions when it might be sensible to delete your database and start again:
* You have *never* joined any rooms which are federated with other servers. For instance, a local deployment which the outside world can't talk to.
* You are changing the `server_name` in the homeserver configuration. In effect this makes your server a completely new one from the point of view of the network, so in this case it makes sense to start with a clean database.
(In both cases you probably also want to clear out the media_store.)

I've stuffed up access to my room, how can I delete it to free up the alias?
---
Using the following curl command:
```
curl -H 'Authorization: Bearer <access-token>' -X DELETE https://matrix.org/_matrix/client/r0/directory/room/<room-alias>
```
`<access-token>` - can be obtained in Riot/Element by looking in the settings; at the bottom is:
Access Token: \<click to reveal\>

`<room-alias>` - the room alias, e.g. `#my_room:matrix.org`; this possibly needs to be URL encoded also, for example `%23my_room%3Amatrix.org`

How can I find the lines corresponding to a given HTTP request in my homeserver log?
---

Synapse tags each log line according to the HTTP request it is processing. When it finishes processing each request, it logs a line containing the words `Processed request: `. For example:

```
2019-02-14 22:35:08,196 - synapse.access.http.8008 - 302 - INFO - GET-37 - ::1 - 8008 - {@richvdh:localhost} Processed request: 0.173sec/0.001sec (0.002sec, 0.000sec) (0.027sec/0.026sec/2) 687B 200 "GET /_matrix/client/r0/sync HTTP/1.1" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" [0 dbevts]
```

Here we can see that the request has been tagged with `GET-37`. (The tag depends on the method of the HTTP request, so might start with `GET-`, `PUT-`, `POST-`, `OPTIONS-` or `DELETE-`.) So to find all lines corresponding to this request, we can do:

```
grep 'GET-37' homeserver.log
```

If you want to paste that output into a GitHub issue or Matrix room, please remember to surround it with triple-backticks (```) to make it legible (see https://help.github.com/en/articles/basic-writing-and-formatting-syntax#quoting-code).


What do all those fields in the 'Processed' line mean?
---
See [Request log format](request_log.md).


What are the biggest rooms on my server?
---

```sql
SELECT s.canonical_alias, g.room_id, count(*) AS num_rows
FROM
  state_groups_state AS g,
  room_stats_state AS s
WHERE g.room_id = s.room_id
GROUP BY s.canonical_alias, g.room_id
ORDER BY num_rows desc
LIMIT 10;
```

You can also use the [List Room API](../../admin_api/rooms.md#list-room-api)
and `order_by` `state_events`.
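The `grep` shown above can also be expressed as a short Python sketch, useful if you want to post-process the matching lines; the log path is a placeholder:

```python
def lines_for_request(log_path: str, tag: str):
    # Request tags such as "GET-37" are delimited by " - " in the log
    # format shown above.
    with open(log_path) as f:
        for line in f:
            if f" - {tag} - " in line:
                yield line.rstrip("\n")

for line in lines_for_request("homeserver.log", "GET-37"):
    print(line)
```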
docs/usage/administration/database_maintenance_tools.md (new file, 18 lines)
@@ -0,0 +1,18 @@
This blog post by Victor Berger explains how to use many of the tools listed on this page: https://levans.fr/shrink-synapse-database.html

# List of useful tools and scripts for maintaining a Synapse database:

## [Purge Remote Media API](../../admin_api/media_admin_api.md#purge-remote-media-api)
The purge remote media API allows server admins to purge old cached remote media.

## [Purge Local Media API](../../admin_api/media_admin_api.md#delete-local-media)
This API deletes the *local* media from the disk of your own server.

## [Purge History API](../../admin_api/purge_history_api.md)
The purge history API allows server admins to purge historic events from their database, reclaiming disk space.

## [synapse-compress-state](https://github.com/matrix-org/rust-synapse-compress-state)
Tool for compressing (deduplicating) the `state_groups_state` table.

## [SQL for analyzing Synapse PostgreSQL database stats](useful_sql_for_admins.md)
Some easy SQL that reports useful stats about your Synapse database.
25
docs/usage/administration/state_groups.md
Normal file
25
docs/usage/administration/state_groups.md
Normal file
@ -0,0 +1,25 @@

# How do State Groups work?

As a general rule, I encourage people who want to understand the deepest darkest secrets of the database schema to drop by #synapse-dev:matrix.org and ask questions.

However, one question that comes up frequently is that of how "state groups" work, and why the `state_groups_state` table gets so big, so here's an attempt to answer that question.

We need to be able to relatively quickly calculate the state of a room at any point in that room's history. In other words, we need to know the state of the room at each event in that room. This is done as follows:

A sequence of events where the state is the same are grouped together into a `state_group`; the mapping is recorded in `event_to_state_groups`. (Technically speaking, since a state event usually changes the state in the room, we are recording the state of the room *after* the given event id: which is to say, to a handwavey simplification, the first event in a state group is normally a state event, and others in the same state group are normally non-state-events.)

`state_groups` records, for each state group, the id of the room that we're looking at, and also the id of the first event in that group. (I'm not sure if that event id is used much in practice.)

Now, if we stored all the room state for each `state_group`, that would be a huge amount of data. Instead, for each state group, we normally store the difference between the state in that group and some other state group, and only occasionally (every 100 state changes or so) record the full state.

So, most state groups have an entry in `state_group_edges` (don't ask me why it's not a column in `state_groups`) which records the previous state group in the room, and `state_groups_state` records the differences in state since that previous state group.

A full state group just records the event id for each piece of state in the room at that point.
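
To make that concrete, here is a minimal sketch of how a state group can be resolved to full room state, assuming the PostgreSQL schema described above (`state_groups_state(state_group, room_id, type, state_key, event_id)` and `state_group_edges(state_group, prev_state_group)`); the state group id is a placeholder:

```sql
-- Sketch only: walk state_group_edges from a given state group back towards
-- the nearest full snapshot, then keep the value closest to the target group
-- for each (type, state_key) pair. 12345 is a placeholder state group id.
WITH RECURSIVE sgs(state_group, depth) AS (
    VALUES (12345::bigint, 0)
    UNION ALL
    SELECT e.prev_state_group, sgs.depth + 1
    FROM state_group_edges AS e
    JOIN sgs ON e.state_group = sgs.state_group
)
SELECT DISTINCT ON (type, state_key) type, state_key, event_id
FROM state_groups_state
JOIN sgs USING (state_group)
ORDER BY type, state_key, depth;  -- shallower (more recent) deltas win
```

This is roughly what Synapse does internally when it needs the state at an event; the point is simply that reading a state group means following the chain of deltas back to a full snapshot.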

## Known bugs with state groups

There are various reasons that we can end up creating many more state groups than we need: see https://github.com/matrix-org/synapse/issues/3364 for more details.

## Compression tool

There is a tool at https://github.com/matrix-org/rust-synapse-compress-state which can compress the `state_groups_state` table on a room-by-room basis (essentially, it reduces the number of "full" state groups). This can result in dramatic reductions of the storage used.
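
As a rough before/after check for the compression tool, the following sketch (again assuming the `state_groups(id, room_id, event_id)` and `state_group_edges` tables described above; the room id is a placeholder) counts how many state groups in a room are stored as full snapshots versus deltas:

```sql
-- Sketch only: full snapshots have no entry in state_group_edges,
-- deltas have one. '!someroom:example.com' is a placeholder room id.
SELECT count(*) FILTER (WHERE e.prev_state_group IS NULL) AS full_snapshots,
       count(*) FILTER (WHERE e.prev_state_group IS NOT NULL) AS deltas
FROM state_groups AS sg
LEFT JOIN state_group_edges AS e ON e.state_group = sg.id
WHERE sg.room_id = '!someroom:example.com';
```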

@@ -0,0 +1,84 @@
## Understanding Synapse through Grafana graphs

It is possible to monitor much of the internal state of Synapse using [Prometheus](https://prometheus.io)
metrics and [Grafana](https://grafana.com/).
A guide for configuring Synapse to provide metrics is available [here](../../metrics-howto.md)
and information on setting up Grafana is [here](https://github.com/matrix-org/synapse/tree/master/contrib/grafana).
In this setup, Prometheus will periodically scrape the information Synapse provides and
store a record of it over time. Grafana is then used as an interface to query and
present this information through a series of pretty graphs.

Once you have Grafana set up, and assuming you're using [our Grafana dashboard template](https://github.com/matrix-org/synapse/blob/master/contrib/grafana/synapse.json), look for the following graphs when debugging a slow/overloaded Synapse:

## Message Event Send Time

![image](https://user-images.githubusercontent.com/1342360/82239409-a1c8e900-9930-11ea-8081-e4614e0c63f4.png)

This, along with the CPU and Memory graphs, is a good way to check the general health of your Synapse instance. It represents how long it takes for a user on your homeserver to send a message.

## Transaction Count and Transaction Duration

![image](https://user-images.githubusercontent.com/1342360/82239985-8d392080-9931-11ea-80d0-843ab2f22e1e.png)

![image](https://user-images.githubusercontent.com/1342360/82240050-ab068580-9931-11ea-98f1-f94671cbac9a.png)

These graphs show the database transactions that are occurring the most frequently, as well as those that are taking the most time to execute.

![image](https://user-images.githubusercontent.com/1342360/82240192-e86b1300-9931-11ea-9aac-3e2c9bfa6fdc.png)

In the first graph, we can see obvious spikes corresponding to lots of `get_user_by_id` transactions. This would be useful information to figure out which part of the Synapse codebase is potentially creating a heavy load on the system. However, be sure to cross-reference this with Transaction Duration, which shows that `get_user_by_id` is actually a very quick database transaction and isn't causing as much load as others, like `persist_events`:

![image](https://user-images.githubusercontent.com/1342360/82240467-62030100-9932-11ea-8db9-917f2d977fe1.png)

Still, it's probably worth investigating why we're getting users from the database that often, and whether it's possible to reduce the number of queries we make by adjusting our cache factor(s).

The `persist_events` transaction is responsible for saving new room events to the Synapse database, so it can often show a high transaction duration.
## Federation

The charts in the "Federation" section show information about incoming and outgoing federation requests. Federation data can be divided into two basic types:

- PDU (Persistent Data Unit) - room events: messages, state events (join/leave), etc. These are permanently stored in the database.
- EDU (Ephemeral Data Unit) - other data, which need not be stored permanently, such as read receipts and typing notifications.

The "Outgoing EDUs by type" chart shows the EDUs within outgoing federation requests by type: `m.device_list_update`, `m.direct_to_device`, `m.presence`, `m.receipt`, `m.typing`.

If you see a large number of `m.presence` EDUs and are having trouble with too much CPU load, you can disable `presence` in the Synapse config. See also [#3971](https://github.com/matrix-org/synapse/issues/3971).
## Caches

![image](https://user-images.githubusercontent.com/1342360/82240572-8b239180-9932-11ea-96ff-6b5f0e57ebe5.png)

![image](https://user-images.githubusercontent.com/1342360/82240666-b8703f80-9932-11ea-86af-9f663988d8da.png)

This is quite a useful graph. It shows how many times Synapse attempts to retrieve a piece of data from a cache which the cache did not contain, thus resulting in a call to the database. We can see here that the `_get_joined_profile_from_event_id` cache is being requested a lot, and often the data we're after is not cached.

Cross-referencing this with the Eviction Rate graph, which shows that entries are being evicted from `_get_joined_profile_from_event_id` quite often:

![image](https://user-images.githubusercontent.com/1342360/82240766-de95df80-9932-11ea-8c15-5acfc57c48da.png)

we should probably consider raising the size of that cache by raising its cache factor (a multiplier value for the size of an individual cache). Information on doing so is available [here](https://github.com/matrix-org/synapse/blob/ee421e524478c1ad8d43741c27379499c2f6135c/docs/sample_config.yaml#L608-L642) (note that the configuration of individual cache factors through the configuration file is available in Synapse v1.14.0+, whereas doing so through environment variables has been supported for a very long time). Note that this will increase Synapse's overall memory usage.
## Forward Extremities

![image](https://user-images.githubusercontent.com/1342360/82241440-13566680-9934-11ea-8b88-ba468db937ed.png)

Forward extremities are the leaf events at the end of a DAG in a room, aka events that have no children. The more that exist in a room, the more [state resolution](https://spec.matrix.org/v1.1/server-server-api/#room-state-resolution) Synapse needs to perform (hint: it's an expensive operation). While Synapse has code to prevent too many of these existing at one time in a room, bugs can sometimes make them crop up again.

If a room has >10 forward extremities, it's worth checking which room is the culprit and potentially removing them using the SQL queries mentioned in [#1760](https://github.com/matrix-org/synapse/issues/1760).
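
As a quick way to find the culprit, here is a sketch (assuming the `event_forward_extremities` table in Synapse's schema) that counts forward extremities per room:

```sql
-- Sketch only: list rooms with more than 10 forward extremities.
SELECT room_id, count(*) AS extremities
FROM event_forward_extremities
GROUP BY room_id
HAVING count(*) > 10
ORDER BY extremities DESC;
```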

## Garbage Collection

![image](https://user-images.githubusercontent.com/1342360/82241911-da6ac180-9934-11ea-9a0d-a311fe22acd0.png)

Large spikes in garbage collection times (bigger than shown here, I'm talking in the
multiple seconds range) can cause lots of problems in Synapse performance. It's more an
indicator and symptom of other problems, though, so check the other graphs for what might be causing it.

## Final Thoughts

If you're still having performance problems with your Synapse instance and you've
tried everything you can, it may just be a lack of system resources. Consider adding
more CPU and RAM, and make use of [worker mode](../../workers.md)
to spread the load across multiple CPU cores / multiple machines for your homeserver.
156
docs/usage/administration/useful_sql_for_admins.md
Normal file
@@ -0,0 +1,156 @@

## Some useful SQL queries for Synapse Admins

## Size of full matrix db
`SELECT pg_size_pretty( pg_database_size( 'matrix' ) );`
### Result example:
```
pg_size_pretty
----------------
 6420 MB
(1 row)
```
## Show top 20 largest rooms by state events count
```sql
SELECT r.name, s.room_id, s.current_state_events
FROM room_stats_current s
LEFT JOIN room_stats_state r USING (room_id)
ORDER BY current_state_events DESC
LIMIT 20;
```

and by state_group_events count:
```sql
SELECT rss.name, s.room_id, count(s.room_id) FROM state_groups_state s
LEFT JOIN room_stats_state rss USING (room_id)
GROUP BY s.room_id, rss.name
ORDER BY count(s.room_id) DESC
LIMIT 20;
```
The same query, but with the join removed for performance:
```sql
SELECT s.room_id, count(s.room_id) FROM state_groups_state s
GROUP BY s.room_id
ORDER BY count(s.room_id) DESC
LIMIT 20;
```
## Show top 20 largest tables by row count
```sql
SELECT relname, n_live_tup AS rows
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC
LIMIT 20;
```
This query is quick, but may be very approximate; for the exact number of rows, use `SELECT COUNT(*) FROM <table_name>`.
### Result example:
```
state_groups_state - 161687170
event_auth - 8584785
event_edges - 6995633
event_json - 6585916
event_reference_hashes - 6580990
events - 6578879
received_transactions - 5713989
event_to_state_groups - 4873377
stream_ordering_to_exterm - 4136285
current_state_delta_stream - 3770972
event_search - 3670521
state_events - 2845082
room_memberships - 2785854
cache_invalidation_stream - 2448218
state_groups - 1255467
state_group_edges - 1229849
current_state_events - 1222905
users_in_public_rooms - 364059
device_lists_stream - 326903
user_directory_search - 316433
```
## Show top 20 rooms by new events count in the last day:
```sql
SELECT e.room_id, r.name, COUNT(e.event_id) cnt FROM events e
LEFT JOIN room_stats_state r USING (room_id)
WHERE e.origin_server_ts >= DATE_PART('epoch', NOW() - INTERVAL '1 day') * 1000
GROUP BY e.room_id, r.name
ORDER BY cnt DESC
LIMIT 20;
```
## Show top 20 users on the homeserver by sent events (messages) in the last month:
```sql
SELECT user_id, SUM(total_events)
FROM user_stats_historical
WHERE TO_TIMESTAMP(end_ts/1000) AT TIME ZONE 'UTC' > date_trunc('day', now() - interval '1 month')
GROUP BY user_id
ORDER BY SUM(total_events) DESC
LIMIT 20;
```
## Show the last 100 messages from a given user, with room names:
```sql
SELECT e.room_id, r.name, e.event_id, e.type, e.content, j.json FROM events e
LEFT JOIN event_json j USING (event_id)
LEFT JOIN room_stats_state r USING (room_id)
WHERE sender = '@LOGIN:example.com'
AND e.type = 'm.room.message'
ORDER BY stream_ordering DESC
LIMIT 100;
```
## Show top 20 largest tables by storage size
```sql
SELECT nspname || '.' || relname AS "relation",
    pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
  FROM pg_class C
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
  WHERE nspname NOT IN ('pg_catalog', 'information_schema')
    AND C.relkind <> 'i'
    AND nspname !~ '^pg_toast'
  ORDER BY pg_total_relation_size(C.oid) DESC
  LIMIT 20;
```
### Result example:
```
public.state_groups_state - 27 GB
public.event_json - 9855 MB
public.events - 3675 MB
public.event_edges - 3404 MB
public.received_transactions - 2745 MB
public.event_reference_hashes - 1864 MB
public.event_auth - 1775 MB
public.stream_ordering_to_exterm - 1663 MB
public.event_search - 1370 MB
public.room_memberships - 1050 MB
public.event_to_state_groups - 948 MB
public.current_state_delta_stream - 711 MB
public.state_events - 611 MB
public.presence_stream - 530 MB
public.current_state_events - 525 MB
public.cache_invalidation_stream - 466 MB
public.receipts_linearized - 279 MB
public.state_groups - 160 MB
public.device_lists_remote_cache - 124 MB
public.state_group_edges - 122 MB
```
## Show rooms with names, sorted by event count
`echo "select event_json.room_id, room_stats_state.name from event_json, room_stats_state where room_stats_state.room_id = event_json.room_id" | psql synapse | sort | uniq -c | sort -n`
### Result example:
```
 9459 !FPUfgzXYWTKgIrwKxW:matrix.org | This Week in Matrix
 9459 !FPUfgzXYWTKgIrwKxW:matrix.org | This Week in Matrix (TWIM)
17799 !iDIOImbmXxwNngznsa:matrix.org | Linux in Russian
18739 !GnEEPYXUhoaHbkFBNX:matrix.org | Riot Android
23373 !QtykxKocfZaZOUrTwp:matrix.org | Matrix HQ
39504 !gTQfWzbYncrtNrvEkB:matrix.org | ru.[matrix]
43601 !iNmaIQExDMeqdITdHH:matrix.org | Riot
43601 !iNmaIQExDMeqdITdHH:matrix.org | Riot Web/Desktop
```
## Lookup room state info by list of room_id
```sql
SELECT rss.room_id, rss.name, rss.canonical_alias, rss.topic, rss.encryption, rsc.joined_members, rsc.local_users_in_room, rss.join_rules
FROM room_stats_state rss
LEFT JOIN room_stats_current rsc USING (room_id)
WHERE room_id IN (
    '!OGEhHVWSdvArJzumhm:matrix.org',
    '!YTvKGNlinIzlkMTVRl:matrix.org'
);
```
@@ -210,7 +210,7 @@ expressions:
    ^/_matrix/federation/v1/get_groups_publicised$
    ^/_matrix/key/v2/query
    ^/_matrix/federation/unstable/org.matrix.msc2946/spaces/
-   ^/_matrix/federation/unstable/org.matrix.msc2946/hierarchy/
+   ^/_matrix/federation/(v1|unstable/org.matrix.msc2946)/hierarchy/

    # Inbound federation transaction request
    ^/_matrix/federation/v1/send/
@@ -223,7 +223,7 @@ expressions:
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
    ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
-   ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/hierarchy$
+   ^/_matrix/client/(v1|unstable/org.matrix.msc2946)/rooms/.*/hierarchy$
    ^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/devices$
22
mypy.ini
@@ -33,7 +33,6 @@ exclude = (?x)
   |synapse/storage/databases/main/event_federation.py
   |synapse/storage/databases/main/event_push_actions.py
   |synapse/storage/databases/main/events_bg_updates.py
-  |synapse/storage/databases/main/events_worker.py
   |synapse/storage/databases/main/group_server.py
   |synapse/storage/databases/main/metrics.py
   |synapse/storage/databases/main/monthly_active_users.py
@@ -87,9 +86,6 @@ exclude = (?x)
   |tests/push/test_presentable_names.py
   |tests/push/test_push_rule_evaluator.py
   |tests/rest/admin/test_admin.py
-  |tests/rest/admin/test_device.py
-  |tests/rest/admin/test_media.py
-  |tests/rest/admin/test_server_notice.py
   |tests/rest/admin/test_user.py
   |tests/rest/admin/test_username_available.py
   |tests/rest/client/test_account.py
@@ -112,7 +108,6 @@ exclude = (?x)
   |tests/server_notices/test_resource_limits_server_notices.py
   |tests/state/test_v2.py
   |tests/storage/test_account_data.py
-  |tests/storage/test_appservice.py
   |tests/storage/test_background_update.py
   |tests/storage/test_base.py
   |tests/storage/test_client_ips.py
@@ -125,7 +120,6 @@ exclude = (?x)
   |tests/test_server.py
   |tests/test_state.py
   |tests/test_terms_auth.py
-  |tests/test_visibility.py
   |tests/unittest.py
   |tests/util/caches/test_cached_call.py
   |tests/util/caches/test_deferred_cache.py
@@ -160,12 +154,21 @@ disallow_untyped_defs = True
[mypy-synapse.events.*]
disallow_untyped_defs = True

+[mypy-synapse.federation.*]
+disallow_untyped_defs = True
+
+[mypy-synapse.federation.transport.client]
+disallow_untyped_defs = False
+
[mypy-synapse.handlers.*]
disallow_untyped_defs = True

[mypy-synapse.metrics.*]
disallow_untyped_defs = True

+[mypy-synapse.module_api.*]
+disallow_untyped_defs = True
+
[mypy-synapse.push.*]
disallow_untyped_defs = True

@@ -184,6 +187,9 @@ disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.directory]
disallow_untyped_defs = True

+[mypy-synapse.storage.databases.main.events_worker]
+disallow_untyped_defs = True
+
[mypy-synapse.storage.databases.main.room_batch]
disallow_untyped_defs = True

@@ -220,6 +226,10 @@ disallow_untyped_defs = True
[mypy-tests.rest.client.test_directory]
disallow_untyped_defs = True

+[mypy-tests.federation.transport.test_client]
+disallow_untyped_defs = True
+

;; Dependencies without annotations
;; Before ignoring a module, check to see if type stubs are available.
;; The `typeshed` project maintains stubs here:
@@ -65,4 +65,4 @@ if [[ -n "$1" ]]; then
fi

# Run the tests!
-go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
+go test -v -tags synapse_blacklist,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
@@ -15,6 +15,25 @@
# See the License for the specific language governing permissions and
# limitations under the License.

+"""
+Script for signing and sending federation requests.
+
+Some tips on doing the join dance with this:
+
+    room_id=...
+    user_id=...
+
+    # make_join
+    federation_client.py "/_matrix/federation/v1/make_join/$room_id/$user_id?ver=5" > make_join.json
+
+    # sign
+    jq -M .event make_join.json | sign_json --sign-event-room-version=$(jq -r .room_version make_join.json) -o signed-join.json
+
+    # send_join
+    federation_client.py -X PUT "/_matrix/federation/v2/send_join/$room_id/x" --body $(<signed-join.json) > send_join.json
+"""
+
import argparse
import base64
import json
@@ -22,6 +22,8 @@ import yaml
from signedjson.key import read_signing_keys
from signedjson.sign import sign_json

+from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
+from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.util import json_encoder
@@ -68,6 +70,16 @@ Example usage:
    ),
)

+parser.add_argument(
+    "--sign-event-room-version",
+    type=str,
+    help=(
+        "Sign the JSON as an event for the given room version, rather than raw JSON. "
+        "This means that we will add a 'hashes' object, and redact the event before "
+        "signing."
+    ),
+)
+
input_args = parser.add_mutually_exclusive_group()

input_args.add_argument("input_data", nargs="?", help="Raw JSON to be signed.")
@@ -116,7 +128,17 @@ Example usage:
    print("Input json was not an object", file=sys.stderr)
    sys.exit(1)

-sign_json(obj, args.server_name, keys[0])
+if args.sign_event_room_version:
+    room_version = KNOWN_ROOM_VERSIONS.get(args.sign_event_room_version)
+    if not room_version:
+        print(
+            f"Unknown room version {args.sign_event_room_version}", file=sys.stderr
+        )
+        sys.exit(1)
+    add_hashes_and_signatures(room_version, obj, args.server_name, keys[0])
+else:
+    sign_json(obj, args.server_name, keys[0])

for c in json_encoder.iterencode(obj):
    args.output.write(c)
args.output.write("\n")
10
setup.py
@@ -119,7 +119,9 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
# Tests assume that all optional dependencies are installed.
#
# parameterized_class decorator was introduced in parameterized 0.7.0
-CONDITIONAL_REQUIREMENTS["test"] = ["parameterized>=0.7.0"]
+#
+# We use `mock` library as that backports `AsyncMock` to Python 3.6
+CONDITIONAL_REQUIREMENTS["test"] = ["parameterized>=0.7.0", "mock>=4.0.0"]

CONDITIONAL_REQUIREMENTS["dev"] = (
    CONDITIONAL_REQUIREMENTS["lint"]
@@ -150,6 +152,12 @@ setup(
    long_description=long_description,
    long_description_content_type="text/x-rst",
    python_requires="~=3.6",
+    entry_points={
+        "console_scripts": [
+            "synapse_homeserver = synapse.app.homeserver:main",
+            "synapse_worker = synapse.app.generic_worker:main",
+        ]
+    },
    classifiers=[
        "Development Status :: 5 - Production/Stable",
        "Topic :: Communications :: Chat",
@@ -47,7 +47,7 @@ try:
except ImportError:
    pass

-__version__ = "1.48.0"
+__version__ = "1.49.0rc1"

if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
    # We import here so that we don't have to install a bunch of deps when
@@ -17,6 +17,8 @@

"""Contains constants from the specification."""

+from typing_extensions import Final
+
# the max size of a (canonical-json-encoded) event
MAX_PDU_SIZE = 65536

@@ -39,125 +41,125 @@ class Membership:

    """Represents the membership states of a user in a room."""

-    INVITE = "invite"
+    INVITE: Final = "invite"
-    JOIN = "join"
+    JOIN: Final = "join"
-    KNOCK = "knock"
+    KNOCK: Final = "knock"
-    LEAVE = "leave"
+    LEAVE: Final = "leave"
-    BAN = "ban"
+    BAN: Final = "ban"
-    LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)
+    LIST: Final = (INVITE, JOIN, KNOCK, LEAVE, BAN)


class PresenceState:
    """Represents the presence state of a user."""

-    OFFLINE = "offline"
+    OFFLINE: Final = "offline"
-    UNAVAILABLE = "unavailable"
+    UNAVAILABLE: Final = "unavailable"
-    ONLINE = "online"
+    ONLINE: Final = "online"
-    BUSY = "org.matrix.msc3026.busy"
+    BUSY: Final = "org.matrix.msc3026.busy"


class JoinRules:
-    PUBLIC = "public"
+    PUBLIC: Final = "public"
-    KNOCK = "knock"
+    KNOCK: Final = "knock"
-    INVITE = "invite"
+    INVITE: Final = "invite"
-    PRIVATE = "private"
+    PRIVATE: Final = "private"
    # As defined for MSC3083.
-    RESTRICTED = "restricted"
+    RESTRICTED: Final = "restricted"


class RestrictedJoinRuleTypes:
    """Understood types for the allow rules in restricted join rules."""

-    ROOM_MEMBERSHIP = "m.room_membership"
+    ROOM_MEMBERSHIP: Final = "m.room_membership"


class LoginType:
-    PASSWORD = "m.login.password"
+    PASSWORD: Final = "m.login.password"
-    EMAIL_IDENTITY = "m.login.email.identity"
+    EMAIL_IDENTITY: Final = "m.login.email.identity"
-    MSISDN = "m.login.msisdn"
+    MSISDN: Final = "m.login.msisdn"
-    RECAPTCHA = "m.login.recaptcha"
+    RECAPTCHA: Final = "m.login.recaptcha"
-    TERMS = "m.login.terms"
+    TERMS: Final = "m.login.terms"
-    SSO = "m.login.sso"
+    SSO: Final = "m.login.sso"
-    DUMMY = "m.login.dummy"
+    DUMMY: Final = "m.login.dummy"
-    REGISTRATION_TOKEN = "org.matrix.msc3231.login.registration_token"
+    REGISTRATION_TOKEN: Final = "org.matrix.msc3231.login.registration_token"


# This is used in the `type` parameter for /register when called by
# an appservice to register a new user.
-APP_SERVICE_REGISTRATION_TYPE = "m.login.application_service"
+APP_SERVICE_REGISTRATION_TYPE: Final = "m.login.application_service"


class EventTypes:
-    Member = "m.room.member"
+    Member: Final = "m.room.member"
-    Create = "m.room.create"
+    Create: Final = "m.room.create"
-    Tombstone = "m.room.tombstone"
+    Tombstone: Final = "m.room.tombstone"
-    JoinRules = "m.room.join_rules"
+    JoinRules: Final = "m.room.join_rules"
-    PowerLevels = "m.room.power_levels"
+    PowerLevels: Final = "m.room.power_levels"
-    Aliases = "m.room.aliases"
+    Aliases: Final = "m.room.aliases"
-    Redaction = "m.room.redaction"
+    Redaction: Final = "m.room.redaction"
-    ThirdPartyInvite = "m.room.third_party_invite"
+    ThirdPartyInvite: Final = "m.room.third_party_invite"
-    RelatedGroups = "m.room.related_groups"
+    RelatedGroups: Final = "m.room.related_groups"

-    RoomHistoryVisibility = "m.room.history_visibility"
+    RoomHistoryVisibility: Final = "m.room.history_visibility"
-    CanonicalAlias = "m.room.canonical_alias"
+    CanonicalAlias: Final = "m.room.canonical_alias"
-    Encrypted = "m.room.encrypted"
+    Encrypted: Final = "m.room.encrypted"
-    RoomAvatar = "m.room.avatar"
+    RoomAvatar: Final = "m.room.avatar"
-    RoomEncryption = "m.room.encryption"
+    RoomEncryption: Final = "m.room.encryption"
-    GuestAccess = "m.room.guest_access"
+    GuestAccess: Final = "m.room.guest_access"

    # These are used for validation
-    Message = "m.room.message"
+    Message: Final = "m.room.message"
-    Topic = "m.room.topic"
+    Topic: Final = "m.room.topic"
-    Name = "m.room.name"
+    Name: Final = "m.room.name"

-    ServerACL = "m.room.server_acl"
+    ServerACL: Final = "m.room.server_acl"
-    Pinned = "m.room.pinned_events"
+    Pinned: Final = "m.room.pinned_events"

-    Retention = "m.room.retention"
+    Retention: Final = "m.room.retention"

-    Dummy = "org.matrix.dummy_event"
+    Dummy: Final = "org.matrix.dummy_event"

-    SpaceChild = "m.space.child"
+    SpaceChild: Final = "m.space.child"
-    SpaceParent = "m.space.parent"
+    SpaceParent: Final = "m.space.parent"

-    MSC2716_INSERTION = "org.matrix.msc2716.insertion"
+    MSC2716_INSERTION: Final = "org.matrix.msc2716.insertion"
-    MSC2716_BATCH = "org.matrix.msc2716.batch"
+    MSC2716_BATCH: Final = "org.matrix.msc2716.batch"
-    MSC2716_MARKER = "org.matrix.msc2716.marker"
+    MSC2716_MARKER: Final = "org.matrix.msc2716.marker"


class ToDeviceEventTypes:
-    RoomKeyRequest = "m.room_key_request"
+    RoomKeyRequest: Final = "m.room_key_request"


class DeviceKeyAlgorithms:
    """Spec'd algorithms for the generation of per-device keys"""

-    ED25519 = "ed25519"
+    ED25519: Final = "ed25519"
-    CURVE25519 = "curve25519"
+    CURVE25519: Final = "curve25519"
-    SIGNED_CURVE25519 = "signed_curve25519"
+    SIGNED_CURVE25519: Final = "signed_curve25519"


class EduTypes:
-    Presence = "m.presence"
+    Presence: Final = "m.presence"


class RejectedReason:
-    AUTH_ERROR = "auth_error"
+    AUTH_ERROR: Final = "auth_error"


class RoomCreationPreset:
-    PRIVATE_CHAT = "private_chat"
+    PRIVATE_CHAT: Final = "private_chat"
-    PUBLIC_CHAT = "public_chat"
+    PUBLIC_CHAT: Final = "public_chat"
-    TRUSTED_PRIVATE_CHAT = "trusted_private_chat"
+    TRUSTED_PRIVATE_CHAT: Final = "trusted_private_chat"


class ThirdPartyEntityKind:
-    USER = "user"
+    USER: Final = "user"
-    LOCATION = "location"
+    LOCATION: Final = "location"


-ServerNoticeMsgType = "m.server_notice"
+ServerNoticeMsgType: Final = "m.server_notice"
-ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"
+ServerNoticeLimitReached: Final = "m.server_notice.usage_limit_reached"


class UserTypes:
@@ -165,91 +167,91 @@ class UserTypes:
    'admin' and 'guest' users should also be UserTypes. Normal users are type None
    """

-    SUPPORT = "support"
+    SUPPORT: Final = "support"
-    BOT = "bot"
+    BOT: Final = "bot"
-    ALL_USER_TYPES = (SUPPORT, BOT)
+    ALL_USER_TYPES: Final = (SUPPORT, BOT)


class RelationTypes:
    """The types of relations known to this server."""

-    ANNOTATION = "m.annotation"
+    ANNOTATION: Final = "m.annotation"
-    REPLACE = "m.replace"
+    REPLACE: Final = "m.replace"
-    REFERENCE = "m.reference"
+    REFERENCE: Final = "m.reference"
-    THREAD = "io.element.thread"
+    THREAD: Final = "io.element.thread"


class LimitBlockingTypes:
    """Reasons that a server may be blocked"""

-    MONTHLY_ACTIVE_USER = "monthly_active_user"
+    MONTHLY_ACTIVE_USER: Final = "monthly_active_user"
-    HS_DISABLED = "hs_disabled"
+    HS_DISABLED: Final = "hs_disabled"


class EventContentFields:
    """Fields found in events' content, regardless of type."""

    # Labels for the event, cf https://github.com/matrix-org/matrix-doc/pull/2326
-    LABELS = "org.matrix.labels"
+    LABELS: Final = "org.matrix.labels"

    # Timestamp to delete the event after
    # cf https://github.com/matrix-org/matrix-doc/pull/2228
-    SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"
+    SELF_DESTRUCT_AFTER: Final = "org.matrix.self_destruct_after"

    # cf https://github.com/matrix-org/matrix-doc/pull/1772
-    ROOM_TYPE = "type"
+    ROOM_TYPE: Final = "type"

    # Whether a room can federate.
-    FEDERATE = "m.federate"
+    FEDERATE: Final = "m.federate"

    # The creator of the room, as used in `m.room.create` events.
-    ROOM_CREATOR = "creator"
+    ROOM_CREATOR: Final = "creator"

    # Used in m.room.guest_access events.
-    GUEST_ACCESS = "guest_access"
+    GUEST_ACCESS: Final = "guest_access"

    # Used on normal messages to indicate they were historically imported after the fact
-    MSC2716_HISTORICAL = "org.matrix.msc2716.historical"
+    MSC2716_HISTORICAL: Final = "org.matrix.msc2716.historical"
    # For "insertion" events to indicate what the next batch ID should be in
    # order to connect to it
-    MSC2716_NEXT_BATCH_ID = "org.matrix.msc2716.next_batch_id"
+    MSC2716_NEXT_BATCH_ID: Final = "org.matrix.msc2716.next_batch_id"
    # Used on "batch" events to indicate which insertion event it connects to
-    MSC2716_BATCH_ID = "org.matrix.msc2716.batch_id"
+    MSC2716_BATCH_ID: Final = "org.matrix.msc2716.batch_id"
    # For "marker" events
-    MSC2716_MARKER_INSERTION = "org.matrix.msc2716.marker.insertion"
+    MSC2716_MARKER_INSERTION: Final = "org.matrix.msc2716.marker.insertion"

    # The authorising user for joining a restricted room.
-    AUTHORISING_USER = "join_authorised_via_users_server"
+    AUTHORISING_USER: Final = "join_authorised_via_users_server"


class RoomTypes:
    """Understood values of the room_type field of m.room.create events."""

-    SPACE = "m.space"
+    SPACE: Final = "m.space"


class RoomEncryptionAlgorithms:
-    MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2"
+    MEGOLM_V1_AES_SHA2: Final = "m.megolm.v1.aes-sha2"
-    DEFAULT = MEGOLM_V1_AES_SHA2
+    DEFAULT: Final = MEGOLM_V1_AES_SHA2


class AccountDataTypes:
-    DIRECT = "m.direct"
+    DIRECT: Final = "m.direct"
-    IGNORED_USER_LIST = "m.ignored_user_list"
+    IGNORED_USER_LIST: Final = "m.ignored_user_list"


class HistoryVisibility:
-    INVITED = "invited"
+    INVITED: Final = "invited"
-    JOINED = "joined"
+    JOINED: Final = "joined"
-    SHARED = "shared"
+    SHARED: Final = "shared"
-    WORLD_READABLE = "world_readable"
+    WORLD_READABLE: Final = "world_readable"


class GuestAccess:
-    CAN_JOIN = "can_join"
+    CAN_JOIN: Final = "can_join"
    # anything that is not "can_join" is considered "forbidden", but for completeness:
-    FORBIDDEN = "forbidden"
+    FORBIDDEN: Final = "forbidden"


class ReadReceiptEventFields:
-    MSC2285_HIDDEN = "org.matrix.msc2285.hidden"
+    MSC2285_HIDDEN: Final = "org.matrix.msc2285.hidden"
@@ -32,6 +32,7 @@ from typing import (
    Iterable,
    List,
    NoReturn,
+    Optional,
    Tuple,
    cast,
)
@@ -129,7 +130,7 @@ def start_worker_reactor(
def start_reactor(
    appname: str,
    soft_file_limit: int,
-    gc_thresholds: Tuple[int, int, int],
+    gc_thresholds: Optional[Tuple[int, int, int]],
    pid_file: str,
    daemonize: bool,
    print_pidfile: bool,
@@ -113,6 +113,7 @@ from synapse.storage.databases.main.monthly_active_users import (
)
from synapse.storage.databases.main.presence import PresenceStore
from synapse.storage.databases.main.room import RoomWorkerStore
+from synapse.storage.databases.main.room_batch import RoomBatchStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
from synapse.storage.databases.main.stats import StatsStore
@@ -240,6 +241,7 @@ class GenericWorkerSlavedStore(
    SlavedEventStore,
    SlavedKeyStore,
    RoomWorkerStore,
+    RoomBatchStore,
    DirectoryStore,
    SlavedApplicationServiceStore,
    SlavedRegistrationStore,
@@ -503,6 +505,10 @@ def start(config_options: List[str]) -> None:
    _base.start_worker_reactor("synapse-generic-worker", config)


-if __name__ == "__main__":
+def main() -> None:
    with LoggingContext("main"):
        start(sys.argv[1:])
+
+
+if __name__ == "__main__":
+    main()
@@ -194,6 +194,7 @@ class SynapseHomeServer(HomeServer):
            {
                "/_matrix/client/api/v1": client_resource,
                "/_matrix/client/r0": client_resource,
+                "/_matrix/client/v1": client_resource,
                "/_matrix/client/v3": client_resource,
                "/_matrix/client/unstable": client_resource,
                "/_matrix/client/v2_alpha": client_resource,
@@ -357,6 +358,13 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
        # generating config files and shouldn't try to continue.
        sys.exit(0)

+    if config.worker.worker_app:
+        raise ConfigError(
+            "You have specified `worker_app` in the config but are attempting to start a non-worker "
+            "instance. Please use `python -m synapse.app.generic_worker` instead (or remove the option if this is the main process)."
+        )
+        sys.exit(1)
+
    events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
    synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
|
@ -13,6 +13,7 @@
|
|||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
import logging
|
import logging
|
||||||
import re
|
import re
|
||||||
|
from enum import Enum
|
||||||
from typing import TYPE_CHECKING, Iterable, List, Match, Optional
|
from typing import TYPE_CHECKING, Iterable, List, Match, Optional
|
||||||
|
|
||||||
from synapse.api.constants import EventTypes
|
from synapse.api.constants import EventTypes
|
||||||
@ -27,7 +28,7 @@ if TYPE_CHECKING:
|
|||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
class ApplicationServiceState:
|
class ApplicationServiceState(Enum):
|
||||||
DOWN = "down"
|
DOWN = "down"
|
||||||
UP = "up"
|
UP = "up"
|
||||||
|
|
||||||
|
@@ -13,12 +13,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
+from typing import List

from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig


-def main(args):
+def main(args: List[str]) -> None:
    action = args[1] if len(args) > 1 and args[1] == "read" else None
    # If we're reading a key in the config file, then `args[1]` will be `read` and `args[2]`
    # will be the key to read.
|
|||||||
# Copyright 2015, 2016 OpenMarket Ltd
|
# Copyright 2015, 2016 OpenMarket Ltd
|
||||||
|
# Copyright 2021 The Matrix.org Foundation C.I.C.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -13,14 +14,14 @@
|
|||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
|
|
||||||
import logging
|
import logging
|
||||||
from typing import Dict
|
from typing import Dict, List
|
||||||
from urllib import parse as urlparse
|
from urllib import parse as urlparse
|
||||||
|
|
||||||
import yaml
|
import yaml
|
||||||
from netaddr import IPSet
|
from netaddr import IPSet
|
||||||
|
|
||||||
from synapse.appservice import ApplicationService
|
from synapse.appservice import ApplicationService
|
||||||
from synapse.types import UserID
|
from synapse.types import JsonDict, UserID
|
||||||
|
|
||||||
from ._base import Config, ConfigError
|
from ._base import Config, ConfigError
|
||||||
|
|
||||||
@ -30,12 +31,12 @@ logger = logging.getLogger(__name__)
|
|||||||
class AppServiceConfig(Config):
|
class AppServiceConfig(Config):
|
||||||
section = "appservice"
|
section = "appservice"
|
||||||
|
|
||||||
def read_config(self, config, **kwargs):
|
def read_config(self, config, **kwargs) -> None:
|
||||||
self.app_service_config_files = config.get("app_service_config_files", [])
|
self.app_service_config_files = config.get("app_service_config_files", [])
|
||||||
self.notify_appservices = config.get("notify_appservices", True)
|
self.notify_appservices = config.get("notify_appservices", True)
|
||||||
self.track_appservice_user_ips = config.get("track_appservice_user_ips", False)
|
self.track_appservice_user_ips = config.get("track_appservice_user_ips", False)
|
||||||
|
|
||||||
def generate_config_section(cls, **kwargs):
|
def generate_config_section(cls, **kwargs) -> str:
|
||||||
return """\
|
return """\
|
||||||
# A list of application service config files to use
|
# A list of application service config files to use
|
||||||
#
|
#
|
||||||
@ -50,7 +51,9 @@ class AppServiceConfig(Config):
|
|||||||
"""
|
"""
|
||||||
|
|
||||||
|
|
||||||
def load_appservices(hostname, config_files):
|
def load_appservices(
|
||||||
|
hostname: str, config_files: List[str]
|
||||||
|
) -> List[ApplicationService]:
|
||||||
"""Returns a list of Application Services from the config files."""
|
"""Returns a list of Application Services from the config files."""
|
||||||
if not isinstance(config_files, list):
|
if not isinstance(config_files, list):
|
||||||
logger.warning("Expected %s to be a list of AS config files.", config_files)
|
logger.warning("Expected %s to be a list of AS config files.", config_files)
|
||||||
@ -93,7 +96,9 @@ def load_appservices(hostname, config_files):
|
|||||||
return appservices
|
return appservices
|
||||||
|
|
||||||
|
|
||||||
def _load_appservice(hostname, as_info, config_filename):
|
def _load_appservice(
|
||||||
|
hostname: str, as_info: JsonDict, config_filename: str
|
||||||
|
) -> ApplicationService:
|
||||||
required_string_fields = ["id", "as_token", "hs_token", "sender_localpart"]
|
required_string_fields = ["id", "as_token", "hs_token", "sender_localpart"]
|
||||||
for field in required_string_fields:
|
for field in required_string_fields:
|
||||||
if not isinstance(as_info.get(field), str):
|
if not isinstance(as_info.get(field), str):
|
||||||
@ -115,9 +120,9 @@ def _load_appservice(hostname, as_info, config_filename):
|
|||||||
user_id = user.to_string()
|
user_id = user.to_string()
|
||||||
|
|
||||||
# Rate limiting for users of this AS is on by default (excludes sender)
|
# Rate limiting for users of this AS is on by default (excludes sender)
|
||||||
rate_limited = True
|
rate_limited = as_info.get("rate_limited")
|
||||||
if isinstance(as_info.get("rate_limited"), bool):
|
if not isinstance(rate_limited, bool):
|
||||||
rate_limited = as_info.get("rate_limited")
|
rate_limited = True
|
||||||
|
|
||||||
# namespace checks
|
# namespace checks
|
||||||
if not isinstance(as_info.get("namespaces"), dict):
|
if not isinstance(as_info.get("namespaces"), dict):
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2019 Matrix.org Foundation C.I.C.
|
# Copyright 2019-2021 Matrix.org Foundation C.I.C.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -17,6 +17,8 @@ import re
|
|||||||
import threading
|
import threading
|
||||||
from typing import Callable, Dict, Optional
|
from typing import Callable, Dict, Optional
|
||||||
|
|
||||||
|
import attr
|
||||||
|
|
||||||
from synapse.python_dependencies import DependencyException, check_requirements
|
from synapse.python_dependencies import DependencyException, check_requirements
|
||||||
|
|
||||||
from ._base import Config, ConfigError
|
from ._base import Config, ConfigError
|
||||||
@ -34,13 +36,13 @@ _DEFAULT_FACTOR_SIZE = 0.5
|
|||||||
_DEFAULT_EVENT_CACHE_SIZE = "10K"
|
_DEFAULT_EVENT_CACHE_SIZE = "10K"
|
||||||
|
|
||||||
|
|
||||||
|
@attr.s(slots=True, auto_attribs=True)
|
||||||
class CacheProperties:
|
class CacheProperties:
|
||||||
def __init__(self):
|
# The default factor size for all caches
|
||||||
# The default factor size for all caches
|
default_factor_size: float = float(
|
||||||
self.default_factor_size = float(
|
os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
|
||||||
os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
|
)
|
||||||
)
|
resize_all_caches_func: Optional[Callable[[], None]] = None
|
||||||
self.resize_all_caches_func = None
|
|
||||||
|
|
||||||
|
|
||||||
properties = CacheProperties()
|
properties = CacheProperties()
|
||||||
@ -62,7 +64,7 @@ def _canonicalise_cache_name(cache_name: str) -> str:
|
|||||||
|
|
||||||
def add_resizable_cache(
|
def add_resizable_cache(
|
||||||
cache_name: str, cache_resize_callback: Callable[[float], None]
|
cache_name: str, cache_resize_callback: Callable[[float], None]
|
||||||
):
|
) -> None:
|
||||||
"""Register a cache that's size can dynamically change
|
"""Register a cache that's size can dynamically change
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
@ -91,7 +93,7 @@ class CacheConfig(Config):
|
|||||||
_environ = os.environ
|
_environ = os.environ
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def reset():
|
def reset() -> None:
|
||||||
"""Resets the caches to their defaults. Used for tests."""
|
"""Resets the caches to their defaults. Used for tests."""
|
||||||
properties.default_factor_size = float(
|
properties.default_factor_size = float(
|
||||||
os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
|
os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
|
||||||
@ -100,7 +102,7 @@ class CacheConfig(Config):
|
|||||||
with _CACHES_LOCK:
|
with _CACHES_LOCK:
|
||||||
_CACHES.clear()
|
_CACHES.clear()
|
||||||
|
|
||||||
def generate_config_section(self, **kwargs):
|
def generate_config_section(self, **kwargs) -> str:
|
||||||
return """\
|
return """\
|
||||||
## Caching ##
|
## Caching ##
|
||||||
|
|
||||||
@ -162,7 +164,7 @@ class CacheConfig(Config):
|
|||||||
#sync_response_cache_duration: 2m
|
#sync_response_cache_duration: 2m
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def read_config(self, config, **kwargs):
|
def read_config(self, config, **kwargs) -> None:
|
||||||
self.event_cache_size = self.parse_size(
|
self.event_cache_size = self.parse_size(
|
||||||
config.get("event_cache_size", _DEFAULT_EVENT_CACHE_SIZE)
|
config.get("event_cache_size", _DEFAULT_EVENT_CACHE_SIZE)
|
||||||
)
|
)
|
||||||
@ -232,7 +234,7 @@ class CacheConfig(Config):
|
|||||||
# needing an instance of Config
|
# needing an instance of Config
|
||||||
properties.resize_all_caches_func = self.resize_all_caches
|
properties.resize_all_caches_func = self.resize_all_caches
|
||||||
|
|
||||||
def resize_all_caches(self):
|
def resize_all_caches(self) -> None:
|
||||||
"""Ensure all cache sizes are up to date
|
"""Ensure all cache sizes are up to date
|
||||||
|
|
||||||
For each cache, run the mapped callback function with either
|
For each cache, run the mapped callback function with either
|
||||||
|
@@ -1,4 +1,5 @@
# Copyright 2015, 2016 OpenMarket Ltd
+# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -28,7 +29,7 @@ class CasConfig(Config):

    section = "cas"

-    def read_config(self, config, **kwargs):
+    def read_config(self, config, **kwargs) -> None:
        cas_config = config.get("cas_config", None)
        self.cas_enabled = cas_config and cas_config.get("enabled", True)

@@ -51,7 +52,7 @@ class CasConfig(Config):
        self.cas_displayname_attribute = None
        self.cas_required_attributes = []

-    def generate_config_section(self, config_dir_path, server_name, **kwargs):
+    def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
        return """\
        # Enable Central Authentication Service (CAS) for registration and login.
        #
synapse/config/database.py

@@ -1,5 +1,5 @@
 # Copyright 2014-2016 OpenMarket Ltd
-# Copyright 2020 The Matrix.org Foundation C.I.C.
+# Copyright 2020-2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -12,6 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import argparse
 import logging
 import os

@@ -119,7 +120,7 @@ class DatabaseConfig(Config):

         self.databases = []

-    def read_config(self, config, **kwargs):
+    def read_config(self, config, **kwargs) -> None:
         # We *experimentally* support specifying multiple databases via the
         # `databases` key. This is a map from a label to database config in the
         # same format as the `database` config option, plus an extra
@@ -163,12 +164,12 @@ class DatabaseConfig(Config):
         self.databases = [DatabaseConnectionConfig("master", database_config)]
         self.set_databasepath(database_path)

-    def generate_config_section(self, data_dir_path, **kwargs):
+    def generate_config_section(self, data_dir_path, **kwargs) -> str:
         return DEFAULT_CONFIG % {
             "database_path": os.path.join(data_dir_path, "homeserver.db")
         }

-    def read_arguments(self, args):
+    def read_arguments(self, args: argparse.Namespace) -> None:
         """
         Cases for the cli input:
         - If no databases are configured and no database_path is set, raise.
@@ -194,7 +195,7 @@ class DatabaseConfig(Config):
         else:
             logger.warning(NON_SQLITE_DATABASE_PATH_WARNING)

-    def set_databasepath(self, database_path):
+    def set_databasepath(self, database_path: str) -> None:

         if database_path != ":memory:":
             database_path = self.abspath(database_path)
@@ -202,7 +203,7 @@ class DatabaseConfig(Config):
         self.databases[0].config["args"]["database"] = database_path

     @staticmethod
-    def add_arguments(parser):
+    def add_arguments(parser: argparse.ArgumentParser) -> None:
         db_group = parser.add_argument_group("database")
         db_group.add_argument(
             "-d",
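Several config classes in this commit gain the same pair of typed hooks: a static `add_arguments(parser)` that registers CLI flags before parsing, and `read_arguments(args)` that folds the parsed namespace back into the config object afterwards. A self-contained sketch of the pattern, with a made-up class and flag:

import argparse


class ToyDatabaseConfig:
    def __init__(self) -> None:
        self.database_path = "homeserver.db"

    @staticmethod
    def add_arguments(parser: argparse.ArgumentParser) -> None:
        # Runs first: register the flags this config section understands.
        group = parser.add_argument_group("database")
        group.add_argument("-d", "--database-path", default=None)

    def read_arguments(self, args: argparse.Namespace) -> None:
        # Runs after parsing: CLI values override what the config file set.
        if args.database_path is not None:
            self.database_path = args.database_path


parser = argparse.ArgumentParser()
ToyDatabaseConfig.add_arguments(parser)
config = ToyDatabaseConfig()
config.read_arguments(parser.parse_args(["-d", "/tmp/test.db"]))
assert config.database_path == "/tmp/test.db"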
synapse/config/experimental.py

@@ -46,3 +46,6 @@ class ExperimentalConfig(Config):

         # MSC3266 (room summary api)
         self.msc3266_enabled: bool = experimental.get("msc3266_enabled", False)
+
+        # MSC3030 (Jump to date API endpoint)
+        self.msc3030_enabled: bool = experimental.get("msc3030_enabled", False)
synapse/config/logger.py

@@ -1,4 +1,5 @@
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -18,7 +19,7 @@ import os
 import sys
 import threading
 from string import Template
-from typing import TYPE_CHECKING, Any, Dict
+from typing import TYPE_CHECKING, Any, Dict, Optional

 import yaml
 from zope.interface import implementer
@@ -40,6 +41,7 @@ from synapse.util.versionstring import get_version_string
 from ._base import Config, ConfigError

 if TYPE_CHECKING:
+    from synapse.config.homeserver import HomeServerConfig
     from synapse.server import HomeServer

 DEFAULT_LOG_CONFIG = Template(
@@ -141,13 +143,13 @@ removed in Synapse 1.3.0. You should instead set up a separate log configuration
 class LoggingConfig(Config):
     section = "logging"

-    def read_config(self, config, **kwargs):
+    def read_config(self, config, **kwargs) -> None:
         if config.get("log_file"):
             raise ConfigError(LOG_FILE_ERROR)
         self.log_config = self.abspath(config.get("log_config"))
         self.no_redirect_stdio = config.get("no_redirect_stdio", False)

-    def generate_config_section(self, config_dir_path, server_name, **kwargs):
+    def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
         log_config = os.path.join(config_dir_path, server_name + ".log.config")
         return (
             """\
@@ -161,14 +163,14 @@ class LoggingConfig(Config):
             % locals()
         )

-    def read_arguments(self, args):
+    def read_arguments(self, args: argparse.Namespace) -> None:
         if args.no_redirect_stdio is not None:
             self.no_redirect_stdio = args.no_redirect_stdio
         if args.log_file is not None:
             raise ConfigError(LOG_FILE_ERROR)

     @staticmethod
-    def add_arguments(parser):
+    def add_arguments(parser: argparse.ArgumentParser) -> None:
         logging_group = parser.add_argument_group("logging")
         logging_group.add_argument(
             "-n",
@@ -197,7 +199,9 @@ class LoggingConfig(Config):
         log_config_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=log_file))


-def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) -> None:
+def _setup_stdlib_logging(
+    config: "HomeServerConfig", log_config_path: Optional[str], logBeginner: LogBeginner
+) -> None:
     """
     Set up Python standard library logging.
     """
@@ -230,7 +234,7 @@ def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) ->
     log_metadata_filter = MetadataFilter({"server_name": config.server.server_name})
     old_factory = logging.getLogRecordFactory()

-    def factory(*args, **kwargs):
+    def factory(*args: Any, **kwargs: Any) -> logging.LogRecord:
         record = old_factory(*args, **kwargs)
         log_context_filter.filter(record)
         log_metadata_filter.filter(record)
@@ -297,7 +301,7 @@ def _load_logging_config(log_config_path: str) -> None:
     logging.config.dictConfig(log_config)


-def _reload_logging_config(log_config_path):
+def _reload_logging_config(log_config_path: Optional[str]) -> None:
     """
     Reload the log configuration from the file and apply it.
     """
@@ -311,8 +315,8 @@ def _reload_logging_config(log_config_path):

 def setup_logging(
     hs: "HomeServer",
-    config,
-    use_worker_options=False,
+    config: "HomeServerConfig",
+    use_worker_options: bool = False,
     logBeginner: LogBeginner = globalLogBeginner,
 ) -> None:
     """
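The typed `factory` above uses the standard library's log-record factory hook: capture the current factory, wrap it, and reinstall it so every record is stamped before any handler sees it. A small standalone sketch of the same hook, where the `server_name` field is illustrative:

import logging
from typing import Any

old_factory = logging.getLogRecordFactory()


def factory(*args: Any, **kwargs: Any) -> logging.LogRecord:
    # Build the record normally, then stamp extra metadata onto it so
    # every formatter can reference it.
    record = old_factory(*args, **kwargs)
    record.server_name = "example.com"  # illustrative metadata field
    return record


logging.setLogRecordFactory(factory)
logging.basicConfig(format="%(server_name)s %(levelname)s %(message)s")
logging.getLogger(__name__).warning("every record passes through the factory")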
synapse/config/oidc.py

@@ -14,7 +14,7 @@
 # limitations under the License.

 from collections import Counter
-from typing import Collection, Iterable, List, Mapping, Optional, Tuple, Type
+from typing import Any, Collection, Iterable, List, Mapping, Optional, Tuple, Type

 import attr

@@ -36,7 +36,7 @@ LEGACY_USER_MAPPING_PROVIDER = "synapse.handlers.oidc_handler.JinjaOidcMappingPr
 class OIDCConfig(Config):
     section = "oidc"

-    def read_config(self, config, **kwargs):
+    def read_config(self, config, **kwargs) -> None:
         self.oidc_providers = tuple(_parse_oidc_provider_configs(config))
         if not self.oidc_providers:
             return
@@ -66,7 +66,7 @@ class OIDCConfig(Config):
         # OIDC is enabled if we have a provider
         return bool(self.oidc_providers)

-    def generate_config_section(self, config_dir_path, server_name, **kwargs):
+    def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
         return """\
         # List of OpenID Connect (OIDC) / OAuth 2.0 identity providers, for registration
         # and login.
@@ -495,89 +495,89 @@ def _parse_oidc_config_dict(
     )


-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class OidcProviderClientSecretJwtKey:
     # a pem-encoded signing key
-    key = attr.ib(type=str)
+    key: str

     # properties to include in the JWT header
-    jwt_header = attr.ib(type=Mapping[str, str])
+    jwt_header: Mapping[str, str]

     # properties to include in the JWT payload.
-    jwt_payload = attr.ib(type=Mapping[str, str])
+    jwt_payload: Mapping[str, str]


-@attr.s(slots=True, frozen=True)
+@attr.s(slots=True, frozen=True, auto_attribs=True)
 class OidcProviderConfig:
     # a unique identifier for this identity provider. Used in the 'user_external_ids'
     # table, as well as the query/path parameter used in the login protocol.
-    idp_id = attr.ib(type=str)
+    idp_id: str

     # user-facing name for this identity provider.
-    idp_name = attr.ib(type=str)
+    idp_name: str

     # Optional MXC URI for icon for this IdP.
-    idp_icon = attr.ib(type=Optional[str])
+    idp_icon: Optional[str]

     # Optional brand identifier for this IdP.
-    idp_brand = attr.ib(type=Optional[str])
+    idp_brand: Optional[str]

     # whether the OIDC discovery mechanism is used to discover endpoints
-    discover = attr.ib(type=bool)
+    discover: bool

     # the OIDC issuer. Used to validate tokens and (if discovery is enabled) to
     # discover the provider's endpoints.
-    issuer = attr.ib(type=str)
+    issuer: str

     # oauth2 client id to use
-    client_id = attr.ib(type=str)
+    client_id: str

     # oauth2 client secret to use. if `None`, use client_secret_jwt_key to generate
     # a secret.
-    client_secret = attr.ib(type=Optional[str])
+    client_secret: Optional[str]

     # key to use to construct a JWT to use as a client secret. May be `None` if
     # `client_secret` is set.
-    client_secret_jwt_key = attr.ib(type=Optional[OidcProviderClientSecretJwtKey])
+    client_secret_jwt_key: Optional[OidcProviderClientSecretJwtKey]

     # auth method to use when exchanging the token.
     # Valid values are 'client_secret_basic', 'client_secret_post' and
     # 'none'.
-    client_auth_method = attr.ib(type=str)
+    client_auth_method: str

     # list of scopes to request
-    scopes = attr.ib(type=Collection[str])
+    scopes: Collection[str]

     # the oauth2 authorization endpoint. Required if discovery is disabled.
-    authorization_endpoint = attr.ib(type=Optional[str])
+    authorization_endpoint: Optional[str]

     # the oauth2 token endpoint. Required if discovery is disabled.
-    token_endpoint = attr.ib(type=Optional[str])
+    token_endpoint: Optional[str]

     # the OIDC userinfo endpoint. Required if discovery is disabled and the
     # "openid" scope is not requested.
-    userinfo_endpoint = attr.ib(type=Optional[str])
+    userinfo_endpoint: Optional[str]

     # URI where to fetch the JWKS. Required if discovery is disabled and the
     # "openid" scope is used.
-    jwks_uri = attr.ib(type=Optional[str])
+    jwks_uri: Optional[str]

     # Whether to skip metadata verification
-    skip_verification = attr.ib(type=bool)
+    skip_verification: bool

     # Whether to fetch the user profile from the userinfo endpoint. Valid
     # values are: "auto" or "userinfo_endpoint".
-    user_profile_method = attr.ib(type=str)
+    user_profile_method: str

     # whether to allow a user logging in via OIDC to match a pre-existing account
     # instead of failing
-    allow_existing_users = attr.ib(type=bool)
+    allow_existing_users: bool

     # the class of the user mapping provider
-    user_mapping_provider_class = attr.ib(type=Type)
+    user_mapping_provider_class: Type

     # the config of the user mapping provider
-    user_mapping_provider_config = attr.ib()
+    user_mapping_provider_config: Any

     # required attributes to require in userinfo to allow login/registration
-    attribute_requirements = attr.ib(type=List[SsoAttributeRequirement])
+    attribute_requirements: List[SsoAttributeRequirement]
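The bulk of this hunk is a mechanical migration from `attr.ib(type=...)` fields to `auto_attribs=True`, where plain type annotations become attrs fields. The two spellings generate the same `__init__`, as this minimal sketch shows (class and field names invented for illustration):

from typing import Optional

import attr


@attr.s(slots=True, frozen=True)
class OldStyle:
    idp_id = attr.ib(type=str)
    idp_icon = attr.ib(type=Optional[str])


@attr.s(slots=True, frozen=True, auto_attribs=True)
class NewStyle:
    # Annotated class attributes become attr.ib fields in declaration
    # order, so the generated __init__ is identical to OldStyle's.
    idp_id: str
    idp_icon: Optional[str]


assert OldStyle("oidc", None).idp_id == NewStyle("oidc", None).idp_id == "oidc"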
synapse/config/registration.py

@@ -1,4 +1,5 @@
 # Copyright 2015, 2016 OpenMarket Ltd
+# Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -11,6 +12,8 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import argparse
+from typing import Optional

 from synapse.api.constants import RoomCreationPreset
 from synapse.config._base import Config, ConfigError
@@ -113,32 +116,73 @@ class RegistrationConfig(Config):
         self.session_lifetime = session_lifetime

         # The `refreshable_access_token_lifetime` applies for tokens that can be renewed
-        # using a refresh token, as per MSC2918. If it is `None`, the refresh
-        # token mechanism is disabled.
-        #
-        # Since it is incompatible with the `session_lifetime` mechanism, it is set to
-        # `None` by default if a `session_lifetime` is set.
+        # using a refresh token, as per MSC2918.
+        # If it is `None`, the refresh token mechanism is disabled.
         refreshable_access_token_lifetime = config.get(
             "refreshable_access_token_lifetime",
-            "5m" if session_lifetime is None else None,
+            "5m",
         )
         if refreshable_access_token_lifetime is not None:
             refreshable_access_token_lifetime = self.parse_duration(
                 refreshable_access_token_lifetime
             )
-        self.refreshable_access_token_lifetime = refreshable_access_token_lifetime
+        self.refreshable_access_token_lifetime: Optional[
+            int
+        ] = refreshable_access_token_lifetime

         if (
-            session_lifetime is not None
-            and refreshable_access_token_lifetime is not None
+            self.session_lifetime is not None
+            and "refreshable_access_token_lifetime" in config
         ):
-            raise ConfigError(
-                "The refresh token mechanism is incompatible with the "
-                "`session_lifetime` option. Consider disabling the "
-                "`session_lifetime` option or disabling the refresh token "
-                "mechanism by removing the `refreshable_access_token_lifetime` "
-                "option."
-            )
+            if self.session_lifetime < self.refreshable_access_token_lifetime:
+                raise ConfigError(
+                    "Both `session_lifetime` and `refreshable_access_token_lifetime` "
+                    "configuration options have been set, but `refreshable_access_token_lifetime` "
+                    " exceeds `session_lifetime`!"
+                )
+
+        # The `nonrefreshable_access_token_lifetime` applies for tokens that can NOT be
+        # refreshed using a refresh token.
+        # If it is None, then these tokens last for the entire length of the session,
+        # which is infinite by default.
+        # The intention behind this configuration option is to help with requiring
+        # all clients to use refresh tokens, if the homeserver administrator requires.
+        nonrefreshable_access_token_lifetime = config.get(
+            "nonrefreshable_access_token_lifetime",
+            None,
+        )
+        if nonrefreshable_access_token_lifetime is not None:
+            nonrefreshable_access_token_lifetime = self.parse_duration(
+                nonrefreshable_access_token_lifetime
+            )
+        self.nonrefreshable_access_token_lifetime = nonrefreshable_access_token_lifetime
+
+        if (
+            self.session_lifetime is not None
+            and self.nonrefreshable_access_token_lifetime is not None
+        ):
+            if self.session_lifetime < self.nonrefreshable_access_token_lifetime:
+                raise ConfigError(
+                    "Both `session_lifetime` and `nonrefreshable_access_token_lifetime` "
+                    "configuration options have been set, but `nonrefreshable_access_token_lifetime` "
+                    " exceeds `session_lifetime`!"
+                )
+
+        refresh_token_lifetime = config.get("refresh_token_lifetime")
+        if refresh_token_lifetime is not None:
+            refresh_token_lifetime = self.parse_duration(refresh_token_lifetime)
+        self.refresh_token_lifetime: Optional[int] = refresh_token_lifetime
+
+        if (
+            self.session_lifetime is not None
+            and self.refresh_token_lifetime is not None
+        ):
+            if self.session_lifetime < self.refresh_token_lifetime:
+                raise ConfigError(
+                    "Both `session_lifetime` and `refresh_token_lifetime` "
+                    "configuration options have been set, but `refresh_token_lifetime` "
+                    " exceeds `session_lifetime`!"
+                )

         # The fallback template used for authenticating using a registration token
         self.registration_token_template = self.read_template("registration_token.html")
@@ -176,6 +220,44 @@ class RegistrationConfig(Config):
         #
         #session_lifetime: 24h

+        # Time that an access token remains valid for, if the session is
+        # using refresh tokens.
+        # For more information about refresh tokens, please see the manual.
+        # Note that this only applies to clients which advertise support for
+        # refresh tokens.
+        #
+        # Note also that this is calculated at login time and refresh time:
+        # changes are not applied to existing sessions until they are refreshed.
+        #
+        # By default, this is 5 minutes.
+        #
+        #refreshable_access_token_lifetime: 5m
+
+        # Time that a refresh token remains valid for (provided that it is not
+        # exchanged for another one first).
+        # This option can be used to automatically log-out inactive sessions.
+        # Please see the manual for more information.
+        #
+        # Note also that this is calculated at login time and refresh time:
+        # changes are not applied to existing sessions until they are refreshed.
+        #
+        # By default, this is infinite.
+        #
+        #refresh_token_lifetime: 24h
+
+        # Time that an access token remains valid for, if the session is NOT
+        # using refresh tokens.
+        # Please note that not all clients support refresh tokens, so setting
+        # this to a short value may be inconvenient for some users who will
+        # then be logged out frequently.
+        #
+        # Note also that this is calculated at login time: changes are not applied
+        # retrospectively to existing sessions for users that have already logged in.
+        #
+        # By default, this is infinite.
+        #
+        #nonrefreshable_access_token_lifetime: 24h
+
         # The user must provide all of the below types of 3PID when registering.
         #
         #registrations_require_3pid:
@@ -369,7 +451,7 @@ class RegistrationConfig(Config):
         )

     @staticmethod
-    def add_arguments(parser):
+    def add_arguments(parser: argparse.ArgumentParser) -> None:
         reg_group = parser.add_argument_group("registration")
         reg_group.add_argument(
             "--enable-registration",
@@ -378,6 +460,6 @@ class RegistrationConfig(Config):
             help="Enable registration for new users.",
         )

-    def read_arguments(self, args):
+    def read_arguments(self, args: argparse.Namespace) -> None:
         if args.enable_registration is not None:
             self.enable_registration = strtobool(str(args.enable_registration))
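The three new validation blocks enforce a single invariant: no access or refresh token may outlive the overall `session_lifetime`. A compact sketch of that rule as a standalone function (`check_token_lifetime` is invented here for illustration; the config class inlines the checks):

from typing import Optional


def check_token_lifetime(
    session_lifetime: Optional[int],
    token_lifetime: Optional[int],
    option_name: str,
) -> None:
    """Raise if a token lifetime (in ms) exceeds the session lifetime."""
    if session_lifetime is None or token_lifetime is None:
        return
    if session_lifetime < token_lifetime:
        raise ValueError(f"`{option_name}` exceeds `session_lifetime`!")


# A 24h session with 5m refreshable access tokens is accepted...
check_token_lifetime(24 * 3_600_000, 5 * 60_000, "refreshable_access_token_lifetime")

# ...but a 48h refresh token would outlive the session and is rejected.
try:
    check_token_lifetime(24 * 3_600_000, 48 * 3_600_000, "refresh_token_lifetime")
except ValueError as e:
    print(e)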
synapse/config/repository.py

@@ -15,11 +15,12 @@
 import logging
 import os
 from collections import namedtuple
-from typing import Dict, List
+from typing import Dict, List, Tuple
 from urllib.request import getproxies_environment  # type: ignore

 from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST, generate_ip_set
 from synapse.python_dependencies import DependencyException, check_requirements
+from synapse.types import JsonDict
 from synapse.util.module_loader import load_module

 from ._base import Config, ConfigError
@@ -57,7 +58,9 @@ MediaStorageProviderConfig = namedtuple(
 )


-def parse_thumbnail_requirements(thumbnail_sizes):
+def parse_thumbnail_requirements(
+    thumbnail_sizes: List[JsonDict],
+) -> Dict[str, Tuple[ThumbnailRequirement, ...]]:
     """Takes a list of dictionaries with "width", "height", and "method" keys
     and creates a map from image media types to the thumbnail size, thumbnailing
     method, and thumbnail media type to precalculate
@@ -69,7 +72,7 @@ def parse_thumbnail_requirements(thumbnail_sizes):
         Dictionary mapping from media type string to list of
         ThumbnailRequirement tuples.
     """
-    requirements: Dict[str, List] = {}
+    requirements: Dict[str, List[ThumbnailRequirement]] = {}
     for size in thumbnail_sizes:
         width = size["width"]
         height = size["height"]
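`parse_thumbnail_requirements` turns the configured size list into a precalculated per-media-type tuple of requirements. A simplified runnable sketch under that contract (the real function covers more input types, such as GIF and WebP; this version handles just JPEG and PNG):

from typing import Dict, List, NamedTuple, Tuple


class ThumbnailRequirement(NamedTuple):
    width: int
    height: int
    method: str
    media_type: str


def parse_thumbnail_requirements(
    thumbnail_sizes: List[dict],
) -> Dict[str, Tuple[ThumbnailRequirement, ...]]:
    # Precalculate one requirement per configured size and input type.
    requirements: Dict[str, List[ThumbnailRequirement]] = {}
    for size in thumbnail_sizes:
        width, height, method = size["width"], size["height"], size["method"]
        requirements.setdefault("image/jpeg", []).append(
            ThumbnailRequirement(width, height, method, "image/jpeg")
        )
        requirements.setdefault("image/png", []).append(
            ThumbnailRequirement(width, height, method, "image/png")
        )
    return {media_type: tuple(reqs) for media_type, reqs in requirements.items()}


reqs = parse_thumbnail_requirements([{"width": 32, "height": 32, "method": "crop"}])
assert reqs["image/png"][0].method == "crop"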
synapse/config/saml2.py

@@ -1,5 +1,5 @@
 # Copyright 2018 New Vector Ltd
-# Copyright 2019 The Matrix.org Foundation C.I.C.
+# Copyright 2019-2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -14,10 +14,11 @@
 # limitations under the License.

 import logging
-from typing import Any, List
+from typing import Any, List, Set

 from synapse.config.sso import SsoAttributeRequirement
 from synapse.python_dependencies import DependencyException, check_requirements
+from synapse.types import JsonDict
 from synapse.util.module_loader import load_module, load_python_module

 from ._base import Config, ConfigError
@@ -33,7 +34,7 @@ LEGACY_USER_MAPPING_PROVIDER = (
 )


-def _dict_merge(merge_dict, into_dict):
+def _dict_merge(merge_dict: dict, into_dict: dict) -> None:
     """Do a deep merge of two dicts

     Recursively merges `merge_dict` into `into_dict`:
@@ -43,8 +44,8 @@ def _dict_merge(merge_dict, into_dict):
     the value from `merge_dict`.

     Args:
-        merge_dict (dict): dict to merge
-        into_dict (dict): target dict
+        merge_dict: dict to merge
+        into_dict: target dict to be modified
     """
     for k, v in merge_dict.items():
         if k not in into_dict:
@@ -64,7 +65,7 @@ def _dict_merge(merge_dict, into_dict):
 class SAML2Config(Config):
     section = "saml2"

-    def read_config(self, config, **kwargs):
+    def read_config(self, config, **kwargs) -> None:
         self.saml2_enabled = False

         saml2_config = config.get("saml2_config")
@@ -183,8 +184,8 @@ class SAML2Config(Config):
         )

     def _default_saml_config_dict(
-        self, required_attributes: set, optional_attributes: set
-    ):
+        self, required_attributes: Set[str], optional_attributes: Set[str]
+    ) -> JsonDict:
         """Generate a configuration dictionary with required and optional attributes that
         will be needed to process new user registration

@@ -195,7 +196,7 @@ class SAML2Config(Config):
         additional information to Synapse user accounts, but are not required

         Returns:
-            dict: A SAML configuration dictionary
+            A SAML configuration dictionary
         """
         import saml2

@@ -222,7 +223,7 @@ class SAML2Config(Config):
         },
     }

-    def generate_config_section(self, config_dir_path, server_name, **kwargs):
+    def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
         return """\
         ## Single sign-on integration ##

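The `_dict_merge` docstring describes a conventional in-place deep merge. A runnable sketch consistent with that description (not necessarily byte-for-byte the Synapse implementation):

def _dict_merge(merge_dict: dict, into_dict: dict) -> None:
    """Recursively merge `merge_dict` into `into_dict`, modifying it in place."""
    for k, v in merge_dict.items():
        if k not in into_dict:
            into_dict[k] = v
        elif isinstance(v, dict) and isinstance(into_dict[k], dict):
            # Both sides are dicts: recurse instead of overwriting.
            _dict_merge(v, into_dict[k])
        else:
            # Any other clash: the value from merge_dict wins.
            into_dict[k] = v


target = {"sp": {"name": "synapse", "want_response_signed": True}}
_dict_merge({"sp": {"name": "custom"}}, target)
assert target == {"sp": {"name": "custom", "want_response_signed": True}}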
synapse/config/server.py

@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import argparse
 import itertools
 import logging
 import os.path
@@ -27,6 +28,7 @@ from netaddr import AddrFormatError, IPNetwork, IPSet
 from twisted.conch.ssh.keys import Key

 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
+from synapse.types import JsonDict
 from synapse.util.module_loader import load_module
 from synapse.util.stringutils import parse_and_validate_server_name

@@ -1223,7 +1225,7 @@ class ServerConfig(Config):
             % locals()
         )

-    def read_arguments(self, args):
+    def read_arguments(self, args: argparse.Namespace) -> None:
         if args.manhole is not None:
             self.manhole = args.manhole
         if args.daemonize is not None:
@@ -1232,7 +1234,7 @@ class ServerConfig(Config):
         self.print_pidfile = args.print_pidfile

     @staticmethod
-    def add_arguments(parser):
+    def add_arguments(parser: argparse.ArgumentParser) -> None:
         server_group = parser.add_argument_group("server")
         server_group.add_argument(
             "-D",
@@ -1274,14 +1276,16 @@ class ServerConfig(Config):
     )


-def is_threepid_reserved(reserved_threepids, threepid):
+def is_threepid_reserved(
+    reserved_threepids: List[JsonDict], threepid: JsonDict
+) -> bool:
     """Check the threepid against the reserved threepid config
     Args:
-        reserved_threepids([dict]) - list of reserved threepids
-        threepid(dict) - The threepid to test for
+        reserved_threepids: List of reserved threepids
+        threepid: The threepid to test for

     Returns:
-        boolean Is the threepid undertest reserved_user
+        Is the threepid undertest reserved_user
     """

     for tp in reserved_threepids:
@@ -1290,7 +1294,9 @@ def is_threepid_reserved(reserved_threepids, threepid):
         return False


-def read_gc_thresholds(thresholds):
+def read_gc_thresholds(
+    thresholds: Optional[List[Any]],
+) -> Optional[Tuple[int, int, int]]:
     """Reads the three integer thresholds for garbage collection. Ensures that
     the thresholds are integers if thresholds are supplied.
     """
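`read_gc_thresholds` coerces an optional three-element config list into an `(int, int, int)` tuple suitable for `gc.set_threshold`. A sketch matching the documented contract (the real function raises Synapse's `ConfigError` rather than `ValueError`):

from typing import Any, List, Optional, Tuple


def read_gc_thresholds(
    thresholds: Optional[List[Any]],
) -> Optional[Tuple[int, int, int]]:
    """Coerce an optional 3-element config list into integer GC thresholds."""
    if thresholds is None:
        return None
    try:
        assert len(thresholds) == 3
        return int(thresholds[0]), int(thresholds[1]), int(thresholds[2])
    except (AssertionError, TypeError, ValueError):
        raise ValueError("Value of `gc_threshold` must be a list of three integers")


assert read_gc_thresholds([700, 10, 10]) == (700, 10, 10)
assert read_gc_thresholds(None) is None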
synapse/config/sso.py

@@ -1,4 +1,4 @@
-# Copyright 2020 The Matrix.org Foundation C.I.C.
+# Copyright 2020-2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -29,13 +29,13 @@ https://matrix-org.github.io/synapse/latest/templates.html
 ---------------------------------------------------------------------------------------"""


-@attr.s(frozen=True)
+@attr.s(frozen=True, auto_attribs=True)
 class SsoAttributeRequirement:
     """Object describing a single requirement for SSO attributes."""

-    attribute = attr.ib(type=str)
+    attribute: str
     # If a value is not given, than the attribute must simply exist.
-    value = attr.ib(type=Optional[str])
+    value: Optional[str]

     JSON_SCHEMA = {
         "type": "object",
@@ -49,7 +49,7 @@ class SSOConfig(Config):

     section = "sso"

-    def read_config(self, config, **kwargs):
+    def read_config(self, config, **kwargs) -> None:
         sso_config: Dict[str, Any] = config.get("sso") or {}

         # The sso-specific template_dir
@@ -106,7 +106,7 @@ class SSOConfig(Config):
             )
             self.sso_client_whitelist.append(login_fallback_url)

-    def generate_config_section(self, **kwargs):
+    def generate_config_section(self, **kwargs) -> str:
         return """\
         # Additional settings to use with single-sign on systems such as OpenID Connect,
         # SAML2 and CAS.
synapse/config/workers.py

@@ -1,4 +1,5 @@
 # Copyright 2016 OpenMarket Ltd
+# Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -12,6 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import argparse
 from typing import List, Union

 import attr
@@ -343,7 +345,7 @@ class WorkerConfig(Config):
         #worker_replication_secret: ""
         """

-    def read_arguments(self, args):
+    def read_arguments(self, args: argparse.Namespace) -> None:
         # We support a bunch of command line arguments that override options in
         # the config. A lot of these options have a worker_* prefix when running
         # on workers so we also have to override them when command line options
synapse/crypto/keyring.py

@@ -667,21 +667,25 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
             perspective_name,
         )

+        request: JsonDict = {}
+        for queue_value in keys_to_fetch:
+            # there may be multiple requests for each server, so we have to merge
+            # them intelligently.
+            request_for_server = {
+                key_id: {
+                    "minimum_valid_until_ts": queue_value.minimum_valid_until_ts,
+                }
+                for key_id in queue_value.key_ids
+            }
+            request.setdefault(queue_value.server_name, {}).update(request_for_server)
+
+        logger.debug("Request to notary server %s: %s", perspective_name, request)
+
         try:
             query_response = await self.client.post_json(
                 destination=perspective_name,
                 path="/_matrix/key/v2/query",
-                data={
-                    "server_keys": {
-                        queue_value.server_name: {
-                            key_id: {
-                                "minimum_valid_until_ts": queue_value.minimum_valid_until_ts,
-                            }
-                            for key_id in queue_value.key_ids
-                        }
-                        for queue_value in keys_to_fetch
-                    }
-                },
+                data={"server_keys": request},
             )
         except (NotRetryingDestination, RequestSendFailed) as e:
             # these both have str() representations which we can't really improve upon
@@ -689,6 +693,10 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
         except HttpResponseException as e:
             raise KeyLookupError("Remote server returned an error: %s" % (e,))

+        logger.debug(
+            "Response from notary server %s: %s", perspective_name, query_response
+        )
+
         keys: Dict[str, Dict[str, FetchKeyResult]] = {}
         added_keys: List[Tuple[str, str, FetchKeyResult]] = []

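The request builder is hoisted out of the `post_json` call because the old nested dict comprehension overwrote earlier entries whenever two queue values named the same server; `setdefault(...).update(...)` merges them instead. A runnable sketch of the merge, using a namedtuple stand-in for the real queue entries:

from typing import Dict, List, NamedTuple, Tuple


class QueueValue(NamedTuple):
    server_name: str
    key_ids: Tuple[str, ...]
    minimum_valid_until_ts: int


def build_notary_request(keys_to_fetch: List[QueueValue]) -> Dict[str, Dict[str, dict]]:
    request: Dict[str, Dict[str, dict]] = {}
    for qv in keys_to_fetch:
        # Several queue entries may target the same server: merge their
        # key ids instead of letting the last entry win.
        per_server = {
            key_id: {"minimum_valid_until_ts": qv.minimum_valid_until_ts}
            for key_id in qv.key_ids
        }
        request.setdefault(qv.server_name, {}).update(per_server)
    return request


merged = build_notary_request(
    [
        QueueValue("remote.example", ("ed25519:a",), 1000),
        QueueValue("remote.example", ("ed25519:b",), 2000),
    ]
)
assert set(merged["remote.example"]) == {"ed25519:a", "ed25519:b"}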
synapse/events/snapshot.py

@@ -322,6 +322,11 @@ class _AsyncEventContextImpl(EventContext):
         attributes by loading from the database.
         """
         if self.state_group is None:
+            # No state group means the event is an outlier. Usually the state_ids dicts are also
+            # pre-set to empty dicts, but they get reset when the context is serialized, so set
+            # them to empty dicts again here.
+            self._current_state_ids = {}
+            self._prev_state_ids = {}
             return

         current_state_ids = await self._storage.state.get_state_ids_for_group(
synapse/events/utils.py

@@ -306,6 +306,7 @@ def format_event_for_client_v2_without_room_id(d: JsonDict) -> JsonDict:
 def serialize_event(
     e: Union[JsonDict, EventBase],
     time_now_ms: int,
+    *,
     as_client_event: bool = True,
     event_format: Callable[[JsonDict], JsonDict] = format_event_for_client_v1,
     token_id: Optional[str] = None,
@@ -393,7 +394,8 @@ class EventClientSerializer:
         self,
         event: Union[JsonDict, EventBase],
         time_now: int,
-        bundle_relations: bool = True,
+        *,
+        bundle_aggregations: bool = True,
         **kwargs: Any,
     ) -> JsonDict:
         """Serializes a single event.
@@ -401,8 +403,9 @@ class EventClientSerializer:
         Args:
             event: The event being serialized.
             time_now: The current time in milliseconds
-            bundle_relations: Whether to include the bundled relations for this
-                event.
+            bundle_aggregations: Whether to include the bundled aggregations for this
+                event. Only applies to non-state events. (State events never include
+                bundled aggregations.)
             **kwargs: Arguments to pass to `serialize_event`

         Returns:
@@ -414,20 +417,27 @@ class EventClientSerializer:

         serialized_event = serialize_event(event, time_now, **kwargs)

-        # If MSC1849 is enabled then we need to look if there are any relations
-        # we need to bundle in with the event.
-        # Do not bundle relations if the event has been redacted
-        if not event.internal_metadata.is_redacted() and (
-            self._msc1849_enabled and bundle_relations
+        # Check if there are any bundled aggregations to include with the event.
+        #
+        # Do not bundle aggregations if any of the following at true:
+        #
+        # * Support is disabled via the configuration or the caller.
+        # * The event is a state event.
+        # * The event has been redacted.
+        if (
+            self._msc1849_enabled
+            and bundle_aggregations
+            and not event.is_state()
+            and not event.internal_metadata.is_redacted()
         ):
-            await self._injected_bundled_relations(event, time_now, serialized_event)
+            await self._injected_bundled_aggregations(event, time_now, serialized_event)

         return serialized_event

-    async def _injected_bundled_relations(
+    async def _injected_bundled_aggregations(
         self, event: EventBase, time_now: int, serialized_event: JsonDict
     ) -> None:
-        """Potentially injects bundled relations into the unsigned portion of the serialized event.
+        """Potentially injects bundled aggregations into the unsigned portion of the serialized event.

         Args:
             event: The event being serialized.
@@ -435,20 +445,28 @@ class EventClientSerializer:
             serialized_event: The serialized event which may be modified.

         """
+        # Do not bundle aggregations for an event which represents an edit or an
+        # annotation. It does not make sense for them to have related events.
+        relates_to = event.content.get("m.relates_to")
+        if isinstance(relates_to, (dict, frozendict)):
+            relation_type = relates_to.get("rel_type")
+            if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE):
+                return
+
         event_id = event.event_id

-        # The bundled relations to include.
-        relations = {}
+        # The bundled aggregations to include.
+        aggregations = {}

         annotations = await self.store.get_aggregation_groups_for_event(event_id)
         if annotations.chunk:
-            relations[RelationTypes.ANNOTATION] = annotations.to_dict()
+            aggregations[RelationTypes.ANNOTATION] = annotations.to_dict()

         references = await self.store.get_relations_for_event(
             event_id, RelationTypes.REFERENCE, direction="f"
         )
         if references.chunk:
-            relations[RelationTypes.REFERENCE] = references.to_dict()
+            aggregations[RelationTypes.REFERENCE] = references.to_dict()

         edit = None
         if event.type == EventTypes.Message:
@@ -474,7 +492,7 @@ class EventClientSerializer:
             else:
                 serialized_event["content"].pop("m.relates_to", None)

-            relations[RelationTypes.REPLACE] = {
+            aggregations[RelationTypes.REPLACE] = {
                 "event_id": edit.event_id,
                 "origin_server_ts": edit.origin_server_ts,
                 "sender": edit.sender,
@@ -487,17 +505,19 @@ class EventClientSerializer:
                 latest_thread_event,
             ) = await self.store.get_thread_summary(event_id)
             if latest_thread_event:
-                relations[RelationTypes.THREAD] = {
-                    # Don't bundle relations as this could recurse forever.
+                aggregations[RelationTypes.THREAD] = {
+                    # Don't bundle aggregations as this could recurse forever.
                     "latest_event": await self.serialize_event(
-                        latest_thread_event, time_now, bundle_relations=False
+                        latest_thread_event, time_now, bundle_aggregations=False
                     ),
                     "count": thread_count,
                 }

-        # If any bundled relations were found, include them.
-        if relations:
-            serialized_event["unsigned"].setdefault("m.relations", {}).update(relations)
+        # If any bundled aggregations were found, include them.
+        if aggregations:
+            serialized_event["unsigned"].setdefault("m.relations", {}).update(
+                aggregations
+            )

     async def serialize_events(
         self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any
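Once `_injected_bundled_aggregations` runs, the aggregations sit under `unsigned["m.relations"]` of the serialized event, keyed by relation type as per MSC2675. A hedged sketch of the resulting shape (event IDs, timestamp and counts are invented for illustration):

serialized_event = {
    "event_id": "$message",
    "type": "m.room.message",
    "content": {"msgtype": "m.text", "body": "original"},
    "unsigned": {
        "m.relations": {
            # Aggregated annotations (reactions), grouped and counted.
            "m.annotation": {
                "chunk": [{"type": "m.reaction", "key": "👍", "count": 3}]
            },
            # The most recent edit, as injected for m.replace above.
            "m.replace": {
                "event_id": "$edit",
                "origin_server_ts": 1638892800000,
                "sender": "@alice:example.com",
            },
        }
    },
}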
@ -128,7 +128,7 @@ class FederationClient(FederationBase):
|
|||||||
reset_expiry_on_get=False,
|
reset_expiry_on_get=False,
|
||||||
)
|
)
|
||||||
|
|
||||||
def _clear_tried_cache(self):
|
def _clear_tried_cache(self) -> None:
|
||||||
"""Clear pdu_destination_tried cache"""
|
"""Clear pdu_destination_tried cache"""
|
||||||
now = self._clock.time_msec()
|
now = self._clock.time_msec()
|
||||||
|
|
||||||
@ -800,7 +800,7 @@ class FederationClient(FederationBase):
|
|||||||
no servers successfully handle the request.
|
no servers successfully handle the request.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
async def send_request(destination) -> SendJoinResult:
|
async def send_request(destination: str) -> SendJoinResult:
|
||||||
response = await self._do_send_join(room_version, destination, pdu)
|
response = await self._do_send_join(room_version, destination, pdu)
|
||||||
|
|
||||||
# If an event was returned (and expected to be returned):
|
# If an event was returned (and expected to be returned):
|
||||||
@ -1395,11 +1395,28 @@ class FederationClient(FederationBase):
|
|||||||
async def send_request(
|
async def send_request(
|
||||||
destination: str,
|
destination: str,
|
||||||
) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]:
|
) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]:
|
||||||
res = await self.transport_layer.get_room_hierarchy(
|
try:
|
||||||
destination=destination,
|
res = await self.transport_layer.get_room_hierarchy(
|
||||||
room_id=room_id,
|
destination=destination,
|
||||||
suggested_only=suggested_only,
|
room_id=room_id,
|
||||||
)
|
suggested_only=suggested_only,
|
||||||
|
)
|
||||||
|
except HttpResponseException as e:
|
||||||
|
# If an error is received that is due to an unrecognised endpoint,
|
||||||
|
# fallback to the unstable endpoint. Otherwise consider it a
|
||||||
|
# legitmate error and raise.
|
||||||
|
if not self._is_unknown_endpoint(e):
|
||||||
|
raise
|
||||||
|
|
||||||
|
logger.debug(
|
||||||
|
"Couldn't fetch room hierarchy with the v1 API, falling back to the unstable API"
|
||||||
|
)
|
||||||
|
|
||||||
|
res = await self.transport_layer.get_room_hierarchy_unstable(
|
||||||
|
destination=destination,
|
||||||
|
room_id=room_id,
|
||||||
|
suggested_only=suggested_only,
|
||||||
|
)
|
||||||
|
|
||||||
room = res.get("room")
|
room = res.get("room")
|
||||||
if not isinstance(room, dict):
|
if not isinstance(room, dict):
|
||||||
@ -1449,6 +1466,10 @@ class FederationClient(FederationBase):
|
|||||||
if e.code != 502:
|
if e.code != 502:
|
||||||
raise
|
raise
|
||||||
|
|
||||||
|
logger.debug(
|
||||||
|
"Couldn't fetch room hierarchy, falling back to the spaces API"
|
||||||
|
)
|
||||||
|
|
||||||
# Fallback to the old federation API and translate the results if
|
# Fallback to the old federation API and translate the results if
|
||||||
# no servers implement the new API.
|
# no servers implement the new API.
|
||||||
#
|
#
|
||||||
@ -1496,6 +1517,83 @@ class FederationClient(FederationBase):
        self._get_room_hierarchy_cache[(room_id, suggested_only)] = result
        return result

    async def timestamp_to_event(
        self, destination: str, room_id: str, timestamp: int, direction: str
    ) -> "TimestampToEventResponse":
        """
        Calls a remote federating server at `destination` asking for their
        closest event to the given timestamp in the given direction. Also
        validates the response to always return the expected keys or raises an
        error.

        Args:
            destination: Domain name of the remote homeserver
            room_id: Room to fetch the event from
            timestamp: The point in time (inclusive) we should navigate from in
                the given direction to find the closest event.
            direction: ["f"|"b"] to indicate whether we should navigate forward
                or backward from the given timestamp to find the closest event.

        Returns:
            A parsed TimestampToEventResponse including the closest event_id
            and origin_server_ts

        Raises:
            Various exceptions when the request fails
            InvalidResponseError when the response does not have the correct
                keys or wrong types
        """
        remote_response = await self.transport_layer.timestamp_to_event(
            destination, room_id, timestamp, direction
        )

        if not isinstance(remote_response, dict):
            raise InvalidResponseError(
                "Response must be a JSON dictionary but received %r" % remote_response
            )

        try:
            return TimestampToEventResponse.from_json_dict(remote_response)
        except ValueError as e:
            raise InvalidResponseError(str(e))


@attr.s(frozen=True, slots=True, auto_attribs=True)
class TimestampToEventResponse:
    """Typed response dictionary for the federation /timestamp_to_event endpoint"""

    event_id: str
    origin_server_ts: int

    # the raw data, including the above keys
    data: JsonDict

    @classmethod
    def from_json_dict(cls, d: JsonDict) -> "TimestampToEventResponse":
        """Parsed response from the federation /timestamp_to_event endpoint

        Args:
            d: JSON object response to be parsed

        Raises:
            ValueError if d does not have the correct keys or they are the wrong types
        """
        event_id = d.get("event_id")
        if not isinstance(event_id, str):
            raise ValueError(
                "Invalid response: 'event_id' must be a str but received %r" % event_id
            )

        origin_server_ts = d.get("origin_server_ts")
        if not isinstance(origin_server_ts, int):
            raise ValueError(
                "Invalid response: 'origin_server_ts' must be an int but received %r"
                % origin_server_ts
            )

        return cls(event_id, origin_server_ts, d)


@attr.s(frozen=True, slots=True, auto_attribs=True)
class FederationSpaceSummaryEventResult:
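A standalone distillation of the validation `from_json_dict` performs: missing keys or wrong types surface as `ValueError` rather than letting bad data propagate. The helper name is illustrative only:

```python
from typing import Tuple


def parse_timestamp_response(d: dict) -> Tuple[str, int]:
    event_id = d.get("event_id")
    if not isinstance(event_id, str):
        raise ValueError("'event_id' must be a str but received %r" % event_id)

    origin_server_ts = d.get("origin_server_ts")
    if not isinstance(origin_server_ts, int):
        raise ValueError(
            "'origin_server_ts' must be an int but received %r" % origin_server_ts
        )

    return event_id, origin_server_ts


print(parse_timestamp_response({"event_id": "$abc", "origin_server_ts": 1638913993000}))
```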
@ -1,6 +1,6 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019 Matrix.org Federation C.I.C
# Copyright 2019-2021 Matrix.org Federation C.I.C
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -110,6 +110,7 @@ class FederationServer(FederationBase):
        super().__init__(hs)

        self.handler = hs.get_federation_handler()
        self.storage = hs.get_storage()
        self._federation_event_handler = hs.get_federation_event_handler()
        self.state = hs.get_state_handler()
        self._event_auth_handler = hs.get_event_auth_handler()
@ -200,6 +201,48 @@ class FederationServer(FederationBase):

        return 200, res

    async def on_timestamp_to_event_request(
        self, origin: str, room_id: str, timestamp: int, direction: str
    ) -> Tuple[int, Dict[str, Any]]:
        """When we receive a federated `/timestamp_to_event` request,
        handle all of the logic for validating and fetching the event.

        Args:
            origin: The server we received the event from
            room_id: Room to fetch the event from
            timestamp: The point in time (inclusive) we should navigate from in
                the given direction to find the closest event.
            direction: ["f"|"b"] to indicate whether we should navigate forward
                or backward from the given timestamp to find the closest event.

        Returns:
            Tuple indicating the response status code and dictionary response
            body including `event_id`.
        """
        with (await self._server_linearizer.queue((origin, room_id))):
            origin_host, _ = parse_server_name(origin)
            await self.check_server_matches_acl(origin_host, room_id)

            # We only try to fetch data from the local database
            event_id = await self.store.get_event_id_for_timestamp(
                room_id, timestamp, direction
            )
            if event_id:
                event = await self.store.get_event(
                    event_id, allow_none=False, allow_rejected=False
                )

                return 200, {
                    "event_id": event_id,
                    "origin_server_ts": event.origin_server_ts,
                }

        raise SynapseError(
            404,
            "Unable to find event from %s in direction %s" % (timestamp, direction),
            errcode=Codes.NOT_FOUND,
        )

    async def on_incoming_transaction(
        self,
        origin: str,
@ -407,7 +450,7 @@ class FederationServer(FederationBase):
        # require callouts to other servers to fetch missing events), but
        # impose a limit to avoid going too crazy with ram/cpu.

        async def process_pdus_for_room(room_id: str):
        async def process_pdus_for_room(room_id: str) -> None:
            with nested_logging_context(room_id):
                logger.debug("Processing PDUs for %s", room_id)

@ -504,7 +547,7 @@ class FederationServer(FederationBase):

    async def on_state_ids_request(
        self, origin: str, room_id: str, event_id: str
    ) -> Tuple[int, Dict[str, Any]]:
    ) -> Tuple[int, JsonDict]:
        if not event_id:
            raise NotImplementedError("Specify an event")

@ -524,7 +567,9 @@ class FederationServer(FederationBase):

        return 200, resp

    async def _on_state_ids_request_compute(self, room_id, event_id):
    async def _on_state_ids_request_compute(
        self, room_id: str, event_id: str
    ) -> JsonDict:
        state_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
        auth_chain_ids = await self.store.get_auth_chain_ids(room_id, state_ids)
        return {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
@ -613,8 +658,11 @@ class FederationServer(FederationBase):
        state = await self.store.get_events(state_ids)

        time_now = self._clock.time_msec()
        event_json = event.get_pdu_json()
        return {
            "org.matrix.msc3083.v2.event": event.get_pdu_json(),
            # TODO Remove the unstable prefix when servers have updated.
            "org.matrix.msc3083.v2.event": event_json,
            "event": event_json,
            "state": [p.get_pdu_json(time_now) for p in state.values()],
            "auth_chain": [p.get_pdu_json(time_now) for p in auth_chain],
        }
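An illustrative toy of the lookup semantics described in the docstring above: given events sorted by `origin_server_ts`, direction `"f"` returns the first event at or after the timestamp and `"b"` the last event at or before it (`None` when nothing matches). This is a sketch of the contract, not the database query Synapse runs:

```python
from typing import List, Optional, Tuple


def closest_event(
    events: List[Tuple[str, int]], ts: int, direction: str
) -> Optional[str]:
    if direction == "f":
        candidates = [(t, e) for e, t in events if t >= ts]
        return min(candidates)[1] if candidates else None
    candidates = [(t, e) for e, t in events if t <= ts]
    return max(candidates)[1] if candidates else None


events = [("$a", 100), ("$b", 200), ("$c", 300)]
assert closest_event(events, 150, "f") == "$b"
assert closest_event(events, 150, "b") == "$a"
```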
@ -1,4 +1,5 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -23,6 +24,7 @@ from typing import Optional, Tuple

from synapse.federation.units import Transaction
from synapse.logging.utils import log_function
from synapse.storage.databases.main import DataStore
from synapse.types import JsonDict

logger = logging.getLogger(__name__)
@ -31,7 +33,7 @@ logger = logging.getLogger(__name__)
class TransactionActions:
    """Defines persistence actions that relate to handling Transactions."""

    def __init__(self, datastore):
    def __init__(self, datastore: DataStore):
        self.store = datastore

    @log_function
@ -1,4 +1,5 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -350,7 +351,7 @@ class BaseFederationRow:
    TypeId = ""  # Unique string that ids the type. Must be overridden in sub classes.

    @staticmethod
    def from_data(data):
    def from_data(data: JsonDict) -> "BaseFederationRow":
        """Parse the data from the federation stream into a row.

        Args:
@ -359,7 +360,7 @@ class BaseFederationRow:
        """
        raise NotImplementedError()

    def to_data(self):
    def to_data(self) -> JsonDict:
        """Serialize this row to be sent over the federation stream.

        Returns:
@ -368,7 +369,7 @@ class BaseFederationRow:
        """
        raise NotImplementedError()

    def add_to_buffer(self, buff):
    def add_to_buffer(self, buff: "ParsedFederationStreamData") -> None:
        """Add this row to the appropriate field in the buffer ready for this
        to be sent over federation.

@ -391,15 +392,15 @@ class PresenceDestinationsRow(
    TypeId = "pd"

    @staticmethod
    def from_data(data):
    def from_data(data: JsonDict) -> "PresenceDestinationsRow":
        return PresenceDestinationsRow(
            state=UserPresenceState.from_dict(data["state"]), destinations=data["dests"]
        )

    def to_data(self):
    def to_data(self) -> JsonDict:
        return {"state": self.state.as_dict(), "dests": self.destinations}

    def add_to_buffer(self, buff):
    def add_to_buffer(self, buff: "ParsedFederationStreamData") -> None:
        buff.presence_destinations.append((self.state, self.destinations))


@ -417,13 +418,13 @@ class KeyedEduRow(
    TypeId = "k"

    @staticmethod
    def from_data(data):
    def from_data(data: JsonDict) -> "KeyedEduRow":
        return KeyedEduRow(key=tuple(data["key"]), edu=Edu(**data["edu"]))

    def to_data(self):
    def to_data(self) -> JsonDict:
        return {"key": self.key, "edu": self.edu.get_internal_dict()}

    def add_to_buffer(self, buff):
    def add_to_buffer(self, buff: "ParsedFederationStreamData") -> None:
        buff.keyed_edus.setdefault(self.edu.destination, {})[self.key] = self.edu


@ -433,13 +434,13 @@ class EduRow(BaseFederationRow, namedtuple("EduRow", ("edu",))):  # Edu
    TypeId = "e"

    @staticmethod
    def from_data(data):
    def from_data(data: JsonDict) -> "EduRow":
        return EduRow(Edu(**data))

    def to_data(self):
    def to_data(self) -> JsonDict:
        return self.edu.get_internal_dict()

    def add_to_buffer(self, buff):
    def add_to_buffer(self, buff: "ParsedFederationStreamData") -> None:
        buff.edus.setdefault(self.edu.destination, []).append(self.edu)
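A toy illustration of the `from_data`/`to_data` contract the annotations above document: `to_data` emits a JSON-safe dict and `from_data` rebuilds the row. `ToyRow` is hypothetical; the real rows live in `synapse.federation.send_queue`:

```python
import attr


@attr.s(frozen=True, slots=True, auto_attribs=True)
class ToyRow:
    TypeId = "toy"  # unannotated, so attrs leaves it as a plain class attribute

    destination: str
    count: int

    @staticmethod
    def from_data(data: dict) -> "ToyRow":
        return ToyRow(destination=data["destination"], count=data["count"])

    def to_data(self) -> dict:
        return {"destination": self.destination, "count": self.count}


row = ToyRow(destination="remote.example", count=3)
assert ToyRow.from_data(row.to_data()) == row
```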
@ -1,5 +1,6 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2019 New Vector Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -14,7 +15,8 @@
# limitations under the License.
import datetime
import logging
from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple
from types import TracebackType
from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple, Type

import attr
from prometheus_client import Counter
@ -213,7 +215,7 @@ class PerDestinationQueue:
        self._pending_edus_keyed[(edu.edu_type, key)] = edu
        self.attempt_new_transaction()

    def send_edu(self, edu) -> None:
    def send_edu(self, edu: Edu) -> None:
        self._pending_edus.append(edu)
        self.attempt_new_transaction()

@ -701,7 +703,12 @@ class _TransactionQueueManager:

        return self._pdus, pending_edus

    async def __aexit__(self, exc_type, exc, tb):
    async def __aexit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc: Optional[BaseException],
        tb: Optional[TracebackType],
    ) -> None:
        if exc_type is not None:
            # Failed to send transaction, so we bail out.
            return
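A self-contained sketch of the fully-typed `__aexit__` signature added above; `TransactionScope` is a hypothetical example class, not part of Synapse:

```python
import asyncio
from types import TracebackType
from typing import Optional, Type


class TransactionScope:
    async def __aenter__(self) -> "TransactionScope":
        return self

    async def __aexit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc: Optional[BaseException],
        tb: Optional[TracebackType],
    ) -> None:
        if exc_type is not None:
            # The body raised, so bail out without doing success bookkeeping.
            return


async def main() -> None:
    async with TransactionScope():
        pass


asyncio.run(main())
```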
@ -21,6 +21,7 @@ from typing import (
    Callable,
    Collection,
    Dict,
    Generator,
    Iterable,
    List,
    Mapping,
@ -148,6 +149,42 @@ class TransportLayerClient:
            destination, path=path, args=args, try_trailing_slash_on_400=True
        )

    @log_function
    async def timestamp_to_event(
        self, destination: str, room_id: str, timestamp: int, direction: str
    ) -> Union[JsonDict, List]:
        """
        Calls a remote federating server at `destination` asking for their
        closest event to the given timestamp in the given direction.

        Args:
            destination: Domain name of the remote homeserver
            room_id: Room to fetch the event from
            timestamp: The point in time (inclusive) we should navigate from in
                the given direction to find the closest event.
            direction: ["f"|"b"] to indicate whether we should navigate forward
                or backward from the given timestamp to find the closest event.

        Returns:
            Response dict received from the remote homeserver.

        Raises:
            Various exceptions when the request fails
        """
        path = _create_path(
            FEDERATION_UNSTABLE_PREFIX,
            "/org.matrix.msc3030/timestamp_to_event/%s",
            room_id,
        )

        args = {"ts": [str(timestamp)], "dir": [direction]}

        remote_response = await self.client.get_json(
            destination, path=path, args=args, try_trailing_slash_on_400=True
        )

        return remote_response

    @log_function
    async def send_transaction(
        self,
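Illustrative only: this builds the request URL that `timestamp_to_event` targets, using the unstable MSC3030 prefix and the query parameters from the code above (in Synapse, `FEDERATION_UNSTABLE_PREFIX` is `/_matrix/federation/unstable`):

```python
from urllib.parse import quote, urlencode


def timestamp_to_event_url(destination: str, room_id: str, ts: int, direction: str) -> str:
    path = "/_matrix/federation/unstable/org.matrix.msc3030/timestamp_to_event/%s" % quote(
        room_id, safe=""
    )
    return "https://%s%s?%s" % (destination, path, urlencode({"ts": ts, "dir": direction}))


print(timestamp_to_event_url("remote.example", "!room:example.org", 1638913993000, "f"))
```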
@ -199,11 +236,16 @@ class TransportLayerClient:

    @log_function
    async def make_query(
        self, destination, query_type, args, retry_on_dns_fail, ignore_backoff=False
    ):
    async def make_query(
        self,
        destination: str,
        query_type: str,
        args: dict,
        retry_on_dns_fail: bool,
        ignore_backoff: bool = False,
    ) -> JsonDict:
        path = _create_v1_path("/query/%s", query_type)

        content = await self.client.get_json(
        return await self.client.get_json(
            destination=destination,
            path=path,
            args=args,
@ -212,8 +254,6 @@ class TransportLayerClient:
            ignore_backoff=ignore_backoff,
        )

        return content

    @log_function
    async def make_membership_event(
        self,
@ -1192,10 +1232,24 @@ class TransportLayerClient:
        )

    async def get_room_hierarchy(
        self,
        destination: str,
        room_id: str,
        suggested_only: bool,
    ) -> JsonDict:
    async def get_room_hierarchy(
        self, destination: str, room_id: str, suggested_only: bool
    ) -> JsonDict:
        """
        Args:
            destination: The remote server
            room_id: The room ID to ask about.
            suggested_only: if True, only suggested rooms will be returned
        """
        path = _create_v1_path("/hierarchy/%s", room_id)

        return await self.client.get_json(
            destination=destination,
            path=path,
            args={"suggested_only": "true" if suggested_only else "false"},
        )

    async def get_room_hierarchy_unstable(
        self, destination: str, room_id: str, suggested_only: bool
    ) -> JsonDict:
        """
        Args:
@ -1267,7 +1321,7 @@ class SendJoinResponse:


@ijson.coroutine
def _event_parser(event_dict: JsonDict):
def _event_parser(event_dict: JsonDict) -> Generator[None, Tuple[str, Any], None]:
    """Helper function for use with `ijson.kvitems_coro` to parse key-value pairs
    to add them to a given dictionary.
    """
@ -1278,7 +1332,9 @@ def _event_parser(event_dict: JsonDict):


@ijson.coroutine
def _event_list_parser(room_version: RoomVersion, events: List[EventBase]):
def _event_list_parser(
    room_version: RoomVersion, events: List[EventBase]
) -> Generator[None, JsonDict, None]:
    """Helper function for use with `ijson.items_coro` to parse an array of
    events and add them to the given list.
    """
@ -1317,15 +1373,26 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
            prefix + "auth_chain.item",
            use_float=True,
        )
        self._coro_event = ijson.kvitems_coro(
        # TODO Remove the unstable prefix when servers have updated.
        #
        # By re-using the same event dictionary this will cause the parsing of
        # org.matrix.msc3083.v2.event and event to stomp over each other.
        # Generally this should be fine.
        self._coro_unstable_event = ijson.kvitems_coro(
            _event_parser(self._response.event_dict),
            prefix + "org.matrix.msc3083.v2.event",
            use_float=True,
        )
        self._coro_event = ijson.kvitems_coro(
            _event_parser(self._response.event_dict),
            prefix + "event",
            use_float=True,
        )

    def write(self, data: bytes) -> int:
        self._coro_state.send(data)
        self._coro_auth.send(data)
        self._coro_unstable_event.send(data)
        self._coro_event.send(data)

        return len(data)
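A minimal sketch of the streaming-parse approach `SendJoinParser` uses: a coroutine target is fed byte chunks and collects the key/value pairs found under a given prefix. This assumes ijson >= 3; the collector name is illustrative:

```python
import ijson


@ijson.coroutine
def collect_into(target: dict):
    # Receives (key, value) pairs from ijson.kvitems_coro and stores them.
    while True:
        key, value = yield
        target[key] = value


event_dict: dict = {}
coro = ijson.kvitems_coro(collect_into(event_dict), "event", use_float=True)
coro.send(b'{"event": {"type": "m.room.member", "depth": 12}}')
print(event_dict)  # {'type': 'm.room.member', 'depth': 12}
```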
@ -22,7 +22,10 @@ from synapse.federation.transport.server._base import (
    Authenticator,
    BaseFederationServlet,
)
from synapse.federation.transport.server.federation import FEDERATION_SERVLET_CLASSES
from synapse.federation.transport.server.federation import (
    FEDERATION_SERVLET_CLASSES,
    FederationTimestampLookupServlet,
)
from synapse.federation.transport.server.groups_local import GROUP_LOCAL_SERVLET_CLASSES
from synapse.federation.transport.server.groups_server import (
    GROUP_SERVER_SERVLET_CLASSES,
@ -299,7 +302,7 @@ def register_servlets(
    authenticator: Authenticator,
    ratelimiter: FederationRateLimiter,
    servlet_groups: Optional[Iterable[str]] = None,
):
) -> None:
    """Initialize and register servlet classes.

    Will by default register all servlets. For custom behaviour, pass in
@ -324,6 +327,13 @@ def register_servlets(
            )

        for servletclass in DEFAULT_SERVLET_GROUPS[servlet_group]:
            # Only allow the `/timestamp_to_event` servlet if msc3030 is enabled
            if (
                servletclass == FederationTimestampLookupServlet
                and not hs.config.experimental.msc3030_enabled
            ):
                continue

            servletclass(
                hs=hs,
                authenticator=authenticator,
@ -15,10 +15,13 @@
import functools
import logging
import re
from typing import Any, Awaitable, Callable, Optional, Tuple, cast

from synapse.api.errors import Codes, FederationDeniedError, SynapseError
from synapse.api.urls import FEDERATION_V1_PREFIX
from synapse.http.server import HttpServer, ServletCallback
from synapse.http.servlet import parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.logging import opentracing
from synapse.logging.context import run_in_background
from synapse.logging.opentracing import (
@ -29,6 +32,7 @@ from synapse.logging.opentracing import (
    whitelisted_homeserver,
)
from synapse.server import HomeServer
from synapse.types import JsonDict
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.stringutils import parse_and_validate_server_name

@ -59,9 +63,11 @@ class Authenticator:
        self.replication_client = hs.get_tcp_replication()

    # A method just so we can pass 'self' as the authenticator to the Servlets
    async def authenticate_request(self, request, content):
    async def authenticate_request(
        self, request: SynapseRequest, content: Optional[JsonDict]
    ) -> str:
        now = self._clock.time_msec()
        json_request = {
        json_request: JsonDict = {
            "method": request.method.decode("ascii"),
            "uri": request.uri.decode("ascii"),
            "destination": self.server_name,
@ -114,7 +120,7 @@ class Authenticator:

        return origin

    async def _reset_retry_timings(self, origin):
    async def _reset_retry_timings(self, origin: str) -> None:
        try:
            logger.info("Marking origin %r as up", origin)
            await self.store.set_destination_retry_timings(origin, None, 0, 0)
@ -133,14 +139,14 @@ class Authenticator:
            logger.exception("Error resetting retry timings on %s", origin)


def _parse_auth_header(header_bytes):
def _parse_auth_header(header_bytes: bytes) -> Tuple[str, str, str]:
    """Parse an X-Matrix auth header

    Args:
        header_bytes (bytes): header value
        header_bytes: header value

    Returns:
        Tuple[str, str, str]: origin, key id, signature.
        origin, key id, signature.

    Raises:
        AuthenticationError if the header could not be parsed
@ -148,9 +154,9 @@ def _parse_auth_header(header_bytes):
    try:
        header_str = header_bytes.decode("utf-8")
        params = header_str.split(" ")[1].split(",")
        param_dict = dict(kv.split("=") for kv in params)
        param_dict = {k: v for k, v in (kv.split("=", maxsplit=1) for kv in params)}

        def strip_quotes(value):
        def strip_quotes(value: str) -> str:
            if value.startswith('"'):
                return value[1:-1]
            else:
@ -233,23 +239,25 @@ class BaseFederationServlet:
        self.ratelimiter = ratelimiter
        self.server_name = server_name

    def _wrap(self, func):
    def _wrap(self, func: Callable[..., Awaitable[Tuple[int, Any]]]) -> ServletCallback:
        authenticator = self.authenticator
        ratelimiter = self.ratelimiter

        @functools.wraps(func)
        async def new_func(request, *args, **kwargs):
        async def new_func(
            request: SynapseRequest, *args: Any, **kwargs: str
        ) -> Optional[Tuple[int, Any]]:
            """A callback which can be passed to HttpServer.RegisterPaths

            Args:
                request (twisted.web.http.Request):
                request:
                *args: unused?
                **kwargs (dict[unicode, unicode]): the dict mapping keys to path
                    components as specified in the path match regexp.
                **kwargs: the dict mapping keys to path components as specified
                    in the path match regexp.

            Returns:
                Tuple[int, object]|None: (response code, response object) as returned by
                    the callback method. None if the request has already been handled.
                (response code, response object) as returned by the callback method.
                None if the request has already been handled.
            """
            content = None
            if request.method in [b"PUT", b"POST"]:
@ -257,7 +265,9 @@ class BaseFederationServlet:
                content = parse_json_object_from_request(request)

            try:
                origin = await authenticator.authenticate_request(request, content)
                origin: Optional[str] = await authenticator.authenticate_request(
                    request, content
                )
            except NoAuthenticationError:
                origin = None
                if self.REQUIRE_AUTH:
@ -301,7 +311,7 @@ class BaseFederationServlet:
                        "client disconnected before we started processing "
                        "request"
                    )
                    return -1, None
                    return None
                response = await func(
                    origin, content, request.args, *args, **kwargs
                )
@ -312,9 +322,9 @@ class BaseFederationServlet:

            return response

        return new_func
        return cast(ServletCallback, new_func)

    def register(self, server):
    def register(self, server: HttpServer) -> None:
        pattern = re.compile("^" + self.PREFIX + self.PATH + "$")

        for method in ("GET", "PUT", "POST"):
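A standalone sketch of the X-Matrix header parsing fixed above: splitting with `maxsplit=1` keeps `=` characters inside values intact, which matters for base64 signatures that commonly end in `=`:

```python
from typing import Tuple


def parse_auth_header(header: str) -> Tuple[str, str, str]:
    params = header.split(" ")[1].split(",")
    # maxsplit=1 splits only on the first '=', preserving any '=' in the value.
    param_dict = {k: v for k, v in (kv.split("=", maxsplit=1) for kv in params)}

    def strip_quotes(value: str) -> str:
        return value[1:-1] if value.startswith('"') else value

    return (
        strip_quotes(param_dict["origin"]),
        strip_quotes(param_dict["key"]),
        strip_quotes(param_dict["sig"]),
    )


print(parse_auth_header('X-Matrix origin=remote.example,key="ed25519:1",sig="dGVzdA=="'))
```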
@ -174,6 +174,46 @@ class FederationBackfillServlet(BaseFederationServerServlet):
        return await self.handler.on_backfill_request(origin, room_id, versions, limit)


class FederationTimestampLookupServlet(BaseFederationServerServlet):
    """
    API endpoint to fetch the `event_id` of the closest event to the given
    timestamp (`ts` query parameter) in the given direction (`dir` query
    parameter).

    Useful for other homeservers when they're unable to find an event locally.

    `ts` is a timestamp in milliseconds where we will find the closest event in
    the given direction.

    `dir` can be `f` or `b` to indicate forwards and backwards in time from the
    given timestamp.

    GET /_matrix/federation/unstable/org.matrix.msc3030/timestamp_to_event/<roomID>?ts=<timestamp>&dir=<direction>
    {
        "event_id": ...
    }
    """

    PATH = "/timestamp_to_event/(?P<room_id>[^/]*)/?"
    PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc3030"

    async def on_GET(
        self,
        origin: str,
        content: Literal[None],
        query: Dict[bytes, List[bytes]],
        room_id: str,
    ) -> Tuple[int, JsonDict]:
        timestamp = parse_integer_from_args(query, "ts", required=True)
        direction = parse_string_from_args(
            query, "dir", default="f", allowed_values=["f", "b"], required=True
        )

        return await self.handler.on_timestamp_to_event_request(
            origin, room_id, timestamp, direction
        )


class FederationQueryServlet(BaseFederationServerServlet):
    PATH = "/query/(?P<query_type>[^/]*)"

@ -611,7 +651,6 @@ class FederationSpaceSummaryServlet(BaseFederationServlet):


class FederationRoomHierarchyServlet(BaseFederationServlet):
    PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946"
    PATH = "/hierarchy/(?P<room_id>[^/]*)"

    def __init__(
@ -637,6 +676,10 @@ class FederationRoomHierarchyServlet(BaseFederationServlet):
        )


class FederationRoomHierarchyUnstableServlet(FederationRoomHierarchyServlet):
    PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946"


class RoomComplexityServlet(BaseFederationServlet):
    """
    Indicates to other servers how complex (and therefore likely
@ -680,6 +723,7 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
    FederationStateV1Servlet,
    FederationStateIdsServlet,
    FederationBackfillServlet,
    FederationTimestampLookupServlet,
    FederationQueryServlet,
    FederationMakeJoinServlet,
    FederationMakeLeaveServlet,
@ -701,6 +745,7 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
    RoomComplexityServlet,
    FederationSpaceSummaryServlet,
    FederationRoomHierarchyServlet,
    FederationRoomHierarchyUnstableServlet,
    FederationV1SendKnockServlet,
    FederationMakeKnockServlet,
)
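A standalone sketch of the query-parameter validation the servlet performs: `dir` defaults to `"f"` and anything outside the allowed values is rejected (Synapse's `parse_string_from_args` raises a 400 `SynapseError`; a plain `ValueError` stands in for it here):

```python
from typing import Dict, List


def parse_direction(query: Dict[bytes, List[bytes]]) -> str:
    values = query.get(b"dir")
    direction = values[0].decode("ascii") if values else "f"
    if direction not in ("f", "b"):
        raise ValueError("Query parameter 'dir' must be one of ['f', 'b']")
    return direction


assert parse_direction({b"dir": [b"b"]}) == "b"
assert parse_direction({}) == "f"
```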
@ -18,6 +18,7 @@ import time
import unicodedata
import urllib.parse
from binascii import crc32
from http import HTTPStatus
from typing import (
    TYPE_CHECKING,
    Any,
@ -38,6 +39,7 @@ import attr
import bcrypt
import pymacaroons
import unpaddedbase64
from pymacaroons.exceptions import MacaroonVerificationFailedException

from twisted.web.server import Request

@ -181,8 +183,11 @@ class LoginTokenAttributes:

    user_id = attr.ib(type=str)

    # the SSO Identity Provider that the user authenticated with, to get this token
    auth_provider_id = attr.ib(type=str)
    """The SSO Identity Provider that the user authenticated with, to get this token."""

    auth_provider_session_id = attr.ib(type=Optional[str])
    """The session ID advertised by the SSO Identity Provider."""


class AuthHandler:
@ -756,53 +761,109 @@ class AuthHandler:
    async def refresh_token(
        self,
        refresh_token: str,
        valid_until_ms: Optional[int],
    ) -> Tuple[str, str]:
        access_token_valid_until_ms: Optional[int],
        refresh_token_valid_until_ms: Optional[int],
    ) -> Tuple[str, str, Optional[int]]:
        """
        Consumes a refresh token and generates both a new access token and a new refresh token from it.

        The consumed refresh token is considered invalid after the first use of the new access token or the new refresh token.

        The lifetime of both the access token and refresh token will be capped so that they
        do not exceed the session's ultimate expiry time, if applicable.

        Args:
            refresh_token: The token to consume.
            valid_until_ms: The expiration timestamp of the new access token.
            access_token_valid_until_ms: The expiration timestamp of the new access token.
                None if the access token does not expire.
            refresh_token_valid_until_ms: The expiration timestamp of the new refresh token.
                None if the refresh token does not expire.
        Returns:
            A tuple containing the new access token and refresh token
            A tuple containing:
              - the new access token
              - the new refresh token
              - the actual expiry time of the access token, which may be earlier than
                `access_token_valid_until_ms`.
        """

        # Verify the token signature first before looking up the token
        if not self._verify_refresh_token(refresh_token):
            raise SynapseError(401, "invalid refresh token", Codes.UNKNOWN_TOKEN)
            raise SynapseError(
                HTTPStatus.UNAUTHORIZED, "invalid refresh token", Codes.UNKNOWN_TOKEN
            )

        existing_token = await self.store.lookup_refresh_token(refresh_token)
        if existing_token is None:
            raise SynapseError(401, "refresh token does not exist", Codes.UNKNOWN_TOKEN)
            raise SynapseError(
                HTTPStatus.UNAUTHORIZED,
                "refresh token does not exist",
                Codes.UNKNOWN_TOKEN,
            )

        if (
            existing_token.has_next_access_token_been_used
            or existing_token.has_next_refresh_token_been_refreshed
        ):
            raise SynapseError(
                403, "refresh token isn't valid anymore", Codes.FORBIDDEN
                HTTPStatus.FORBIDDEN,
                "refresh token isn't valid anymore",
                Codes.FORBIDDEN,
            )

        now_ms = self._clock.time_msec()

        if existing_token.expiry_ts is not None and existing_token.expiry_ts < now_ms:
            raise SynapseError(
                HTTPStatus.FORBIDDEN,
                "The supplied refresh token has expired",
                Codes.FORBIDDEN,
            )

        if existing_token.ultimate_session_expiry_ts is not None:
            # This session has a bounded lifetime, even across refreshes.

            if access_token_valid_until_ms is not None:
                access_token_valid_until_ms = min(
                    access_token_valid_until_ms,
                    existing_token.ultimate_session_expiry_ts,
                )
            else:
                access_token_valid_until_ms = existing_token.ultimate_session_expiry_ts

            if refresh_token_valid_until_ms is not None:
                refresh_token_valid_until_ms = min(
                    refresh_token_valid_until_ms,
                    existing_token.ultimate_session_expiry_ts,
                )
            else:
                refresh_token_valid_until_ms = existing_token.ultimate_session_expiry_ts

            if existing_token.ultimate_session_expiry_ts < now_ms:
                raise SynapseError(
                    HTTPStatus.FORBIDDEN,
                    "The session has expired and can no longer be refreshed",
                    Codes.FORBIDDEN,
                )

        (
            new_refresh_token,
            new_refresh_token_id,
        ) = await self.create_refresh_token_for_user_id(
            user_id=existing_token.user_id, device_id=existing_token.device_id
            user_id=existing_token.user_id,
            device_id=existing_token.device_id,
            expiry_ts=refresh_token_valid_until_ms,
            ultimate_session_expiry_ts=existing_token.ultimate_session_expiry_ts,
        )
        access_token = await self.create_access_token_for_user_id(
            user_id=existing_token.user_id,
            device_id=existing_token.device_id,
            valid_until_ms=valid_until_ms,
            valid_until_ms=access_token_valid_until_ms,
            refresh_token_id=new_refresh_token_id,
        )
        await self.store.replace_refresh_token(
            existing_token.token_id, new_refresh_token_id
        )
        return access_token, new_refresh_token
        return access_token, new_refresh_token, access_token_valid_until_ms

    def _verify_refresh_token(self, token: str) -> bool:
        """
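A distilled sketch of the capping rule above: each requested lifetime is clamped so that neither token can outlive the session's ultimate expiry. The helper name is illustrative:

```python
from typing import Optional


def cap_lifetime(
    valid_until_ms: Optional[int], session_expiry_ms: Optional[int]
) -> Optional[int]:
    if session_expiry_ms is None:
        return valid_until_ms  # unbounded session: keep the requested lifetime
    if valid_until_ms is None:
        return session_expiry_ms  # a non-expiring token still ends with the session
    return min(valid_until_ms, session_expiry_ms)


assert cap_lifetime(2_000, 1_500) == 1_500  # clamped to the session expiry
assert cap_lifetime(None, 1_500) == 1_500
assert cap_lifetime(2_000, None) == 2_000
```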
@ -836,6 +897,8 @@ class AuthHandler:
        self,
        user_id: str,
        device_id: str,
        expiry_ts: Optional[int],
        ultimate_session_expiry_ts: Optional[int],
    ) -> Tuple[str, int]:
        """
        Creates a new refresh token for the user with the given user ID.
@ -843,6 +906,13 @@ class AuthHandler:
        Args:
            user_id: canonical user ID
            device_id: the device ID to associate with the token.
            expiry_ts (milliseconds since the epoch): Time after which the
                refresh token cannot be used.
                If None, the refresh token never expires until it has been used.
            ultimate_session_expiry_ts (milliseconds since the epoch):
                Time at which the session will end and cannot be extended any
                further.
                If None, the session can be refreshed indefinitely.

        Returns:
            The newly created refresh token and its ID in the database
@ -852,6 +922,8 @@ class AuthHandler:
            user_id=user_id,
            token=refresh_token,
            device_id=device_id,
            expiry_ts=expiry_ts,
            ultimate_session_expiry_ts=ultimate_session_expiry_ts,
        )
        return refresh_token, refresh_token_id

@ -1582,6 +1654,7 @@ class AuthHandler:
        client_redirect_url: str,
        extra_attributes: Optional[JsonDict] = None,
        new_user: bool = False,
        auth_provider_session_id: Optional[str] = None,
    ) -> None:
        """Having figured out a mxid for this user, complete the HTTP request

@ -1597,6 +1670,7 @@ class AuthHandler:
                during successful login. Must be JSON serializable.
            new_user: True if we should use wording appropriate to a user who has just
                registered.
            auth_provider_session_id: The session ID from the SSO IdP received during login.
        """
        # If the account has been deactivated, do not proceed with the login
        # flow.
@ -1617,6 +1691,7 @@ class AuthHandler:
            extra_attributes,
            new_user=new_user,
            user_profile_data=profile,
            auth_provider_session_id=auth_provider_session_id,
        )

    def _complete_sso_login(
@ -1628,6 +1703,7 @@ class AuthHandler:
        extra_attributes: Optional[JsonDict] = None,
        new_user: bool = False,
        user_profile_data: Optional[ProfileInfo] = None,
        auth_provider_session_id: Optional[str] = None,
    ) -> None:
        """
        The synchronous portion of complete_sso_login.
@ -1649,7 +1725,9 @@ class AuthHandler:

        # Create a login token
        login_token = self.macaroon_gen.generate_short_term_login_token(
            registered_user_id, auth_provider_id=auth_provider_id
            registered_user_id,
            auth_provider_id=auth_provider_id,
            auth_provider_session_id=auth_provider_session_id,
        )

        # Append the login token to the original redirect URL (i.e. with its query
@ -1754,6 +1832,7 @@ class MacaroonGenerator:
        self,
        user_id: str,
        auth_provider_id: str,
        auth_provider_session_id: Optional[str] = None,
        duration_in_ms: int = (2 * 60 * 1000),
    ) -> str:
        macaroon = self._generate_base_macaroon(user_id)
@ -1762,6 +1841,10 @@ class MacaroonGenerator:
        expiry = now + duration_in_ms
        macaroon.add_first_party_caveat("time < %d" % (expiry,))
        macaroon.add_first_party_caveat("auth_provider_id = %s" % (auth_provider_id,))
        if auth_provider_session_id is not None:
            macaroon.add_first_party_caveat(
                "auth_provider_session_id = %s" % (auth_provider_session_id,)
            )
        return macaroon.serialize()

    def verify_short_term_login_token(self, token: str) -> LoginTokenAttributes:
@ -1783,15 +1866,28 @@ class MacaroonGenerator:
        user_id = get_value_from_macaroon(macaroon, "user_id")
        auth_provider_id = get_value_from_macaroon(macaroon, "auth_provider_id")

        auth_provider_session_id: Optional[str] = None
        try:
            auth_provider_session_id = get_value_from_macaroon(
                macaroon, "auth_provider_session_id"
            )
        except MacaroonVerificationFailedException:
            pass

        v = pymacaroons.Verifier()
        v.satisfy_exact("gen = 1")
        v.satisfy_exact("type = login")
        v.satisfy_general(lambda c: c.startswith("user_id = "))
        v.satisfy_general(lambda c: c.startswith("auth_provider_id = "))
        v.satisfy_general(lambda c: c.startswith("auth_provider_session_id = "))
        satisfy_expiry(v, self.hs.get_clock().time_msec)
        v.verify(macaroon, self.hs.config.key.macaroon_secret_key)

        return LoginTokenAttributes(user_id=user_id, auth_provider_id=auth_provider_id)
        return LoginTokenAttributes(
            user_id=user_id,
            auth_provider_id=auth_provider_id,
            auth_provider_session_id=auth_provider_session_id,
        )

    def generate_delete_pusher_token(self, user_id: str) -> str:
        macaroon = self._generate_base_macaroon(user_id)
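A minimal pymacaroons sketch of the optional-caveat pattern above: the session-id caveat is only added when a value is present, so verification must satisfy it with a general (prefix) matcher rather than an exact one. The location, identifier, key, and session id here are placeholders:

```python
import pymacaroons

m = pymacaroons.Macaroon(location="example.org", identifier="key1", key="secret")
m.add_first_party_caveat("gen = 1")
m.add_first_party_caveat("auth_provider_session_id = abc123")

v = pymacaroons.Verifier()
v.satisfy_exact("gen = 1")
# A prefix matcher accepts any session id, and also passes when the caveat
# was never added to the macaroon in the first place.
v.satisfy_general(lambda c: c.startswith("auth_provider_session_id = "))
assert v.verify(m, "secret")
```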
@ -301,6 +301,8 @@ class DeviceHandler(DeviceWorkerHandler):
        user_id: str,
        device_id: Optional[str],
        initial_device_display_name: Optional[str] = None,
        auth_provider_id: Optional[str] = None,
        auth_provider_session_id: Optional[str] = None,
    ) -> str:
        """
        If the given device has not been registered, register it with the
@ -312,6 +314,8 @@ class DeviceHandler(DeviceWorkerHandler):
            user_id: @user:id
            device_id: device id supplied by client
            initial_device_display_name: device display name from client
            auth_provider_id: The SSO IdP the user used, if any.
            auth_provider_session_id: The session ID (sid) received from the SSO IdP.
        Returns:
            device id (generated if none was supplied)
        """
@ -323,6 +327,8 @@ class DeviceHandler(DeviceWorkerHandler):
            user_id=user_id,
            device_id=device_id,
            initial_device_display_name=initial_device_display_name,
            auth_provider_id=auth_provider_id,
            auth_provider_session_id=auth_provider_session_id,
        )
        if new_device:
            await self.notify_device_update(user_id, [device_id])
@ -337,6 +343,8 @@ class DeviceHandler(DeviceWorkerHandler):
            user_id=user_id,
            device_id=new_device_id,
            initial_device_display_name=initial_device_display_name,
            auth_provider_id=auth_provider_id,
            auth_provider_session_id=auth_provider_session_id,
        )
        if new_device:
            await self.notify_device_update(user_id, [new_device_id])
@ -122,9 +122,8 @@ class EventStreamHandler:
                events,
                time_now,
                as_client_event=as_client_event,
                # We don't bundle "live" events, as otherwise clients
                # will end up double counting annotations.
                bundle_relations=False,
                # Don't bundle aggregations as this is a deprecated API.
                bundle_aggregations=False,
            )

            chunk = {
@ -68,6 +68,37 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)


def get_domains_from_state(state: StateMap[EventBase]) -> List[Tuple[str, int]]:
    """Get joined domains from state

    Args:
        state: State map from type/state key to event.

    Returns:
        Returns a list of servers with the lowest depth of their joins.
        Sorted by lowest depth first.
    """
    joined_users = [
        (state_key, int(event.depth))
        for (e_type, state_key), event in state.items()
        if e_type == EventTypes.Member and event.membership == Membership.JOIN
    ]

    joined_domains: Dict[str, int] = {}
    for u, d in joined_users:
        try:
            dom = get_domain_from_id(u)
            old_d = joined_domains.get(dom)
            if old_d:
                joined_domains[dom] = min(d, old_d)
            else:
                joined_domains[dom] = d
        except Exception:
            pass

    return sorted(joined_domains.items(), key=lambda d: d[1])


class FederationHandler:
    """Handles general incoming federation requests

@ -268,36 +299,6 @@ class FederationHandler:

        curr_state = await self.state_handler.get_current_state(room_id)

        def get_domains_from_state(state: StateMap[EventBase]) -> List[Tuple[str, int]]:
            """Get joined domains from state

            Args:
                state: State map from type/state key to event.

            Returns:
                Returns a list of servers with the lowest depth of their joins.
                Sorted by lowest depth first.
            """
            joined_users = [
                (state_key, int(event.depth))
                for (e_type, state_key), event in state.items()
                if e_type == EventTypes.Member and event.membership == Membership.JOIN
            ]

            joined_domains: Dict[str, int] = {}
            for u, d in joined_users:
                try:
                    dom = get_domain_from_id(u)
                    old_d = joined_domains.get(dom)
                    if old_d:
                        joined_domains[dom] = min(d, old_d)
                    else:
                        joined_domains[dom] = d
                except Exception:
                    pass

            return sorted(joined_domains.items(), key=lambda d: d[1])

        curr_domains = get_domains_from_state(curr_state)

        likely_domains = [
@@ -165,7 +165,11 @@ class InitialSyncHandler:

                 invite_event = await self.store.get_event(event.event_id)
                 d["invite"] = await self._event_serializer.serialize_event(
-                    invite_event, time_now, as_client_event
+                    invite_event,
+                    time_now,
+                    # Don't bundle aggregations as this is a deprecated API.
+                    bundle_aggregations=False,
+                    as_client_event=as_client_event,
                 )

             rooms_ret.append(d)
@@ -216,7 +220,11 @@ class InitialSyncHandler:
                 d["messages"] = {
                     "chunk": (
                         await self._event_serializer.serialize_events(
-                            messages, time_now=time_now, as_client_event=as_client_event
+                            messages,
+                            time_now=time_now,
+                            # Don't bundle aggregations as this is a deprecated API.
+                            bundle_aggregations=False,
+                            as_client_event=as_client_event,
                         )
                     ),
                     "start": await start_token.to_string(self.store),
@@ -226,6 +234,8 @@ class InitialSyncHandler:
                 d["state"] = await self._event_serializer.serialize_events(
                     current_state.values(),
                     time_now=time_now,
+                    # Don't bundle aggregations as this is a deprecated API.
+                    bundle_aggregations=False,
                     as_client_event=as_client_event,
                 )

@@ -366,14 +376,18 @@ class InitialSyncHandler:
             "room_id": room_id,
             "messages": {
                 "chunk": (
-                    await self._event_serializer.serialize_events(messages, time_now)
+                    # Don't bundle aggregations as this is a deprecated API.
+                    await self._event_serializer.serialize_events(
+                        messages, time_now, bundle_aggregations=False
+                    )
                 ),
                 "start": await start_token.to_string(self.store),
                 "end": await end_token.to_string(self.store),
             },
             "state": (
+                # Don't bundle aggregations as this is a deprecated API.
                 await self._event_serializer.serialize_events(
-                    room_state.values(), time_now
+                    room_state.values(), time_now, bundle_aggregations=False
                 )
             ),
             "presence": [],
@@ -392,8 +406,9 @@ class InitialSyncHandler:

         # TODO: These concurrently
         time_now = self.clock.time_msec()
+        # Don't bundle aggregations as this is a deprecated API.
         state = await self._event_serializer.serialize_events(
-            current_state.values(), time_now
+            current_state.values(), time_now, bundle_aggregations=False
         )

         now_token = self.hs.get_event_sources().get_current_token()
@@ -467,7 +482,10 @@ class InitialSyncHandler:
             "room_id": room_id,
             "messages": {
                 "chunk": (
-                    await self._event_serializer.serialize_events(messages, time_now)
+                    # Don't bundle aggregations as this is a deprecated API.
+                    await self._event_serializer.serialize_events(
+                        messages, time_now, bundle_aggregations=False
+                    )
                 ),
                 "start": await start_token.to_string(self.store),
                 "end": await end_token.to_string(self.store),
@@ -247,13 +247,7 @@ class MessageHandler:
             room_state = room_state_events[membership_event_id]

         now = self.clock.time_msec()
-        events = await self._event_serializer.serialize_events(
-            room_state.values(),
-            now,
-            # We don't bother bundling aggregations in when asked for state
-            # events, as clients won't use them.
-            bundle_relations=False,
-        )
+        events = await self._event_serializer.serialize_events(room_state.values(), now)
         return events

     async def get_joined_members(self, requester: Requester, room_id: str) -> dict:
@@ -23,7 +23,7 @@ from authlib.common.security import generate_token
 from authlib.jose import JsonWebToken, jwt
 from authlib.oauth2.auth import ClientAuth
 from authlib.oauth2.rfc6749.parameters import prepare_grant_uri
-from authlib.oidc.core import CodeIDToken, ImplicitIDToken, UserInfo
+from authlib.oidc.core import CodeIDToken, UserInfo
 from authlib.oidc.discovery import OpenIDProviderMetadata, get_well_known_url
 from jinja2 import Environment, Template
 from pymacaroons.exceptions import (
@@ -117,7 +117,8 @@ class OidcHandler:
         for idp_id, p in self._providers.items():
             try:
                 await p.load_metadata()
-                await p.load_jwks()
+                if not p._uses_userinfo:
+                    await p.load_jwks()
             except Exception as e:
                 raise Exception(
                     "Error while initialising OIDC provider %r" % (idp_id,)
@@ -498,10 +499,6 @@ class OidcProvider:
         return await self._jwks.get()

     async def _load_jwks(self) -> JWKS:
-        if self._uses_userinfo:
-            # We're not using jwt signing, return an empty jwk set
-            return {"keys": []}
-
         metadata = await self.load_metadata()

         # Load the JWKS using the `jwks_uri` metadata.
@@ -663,7 +660,7 @@ class OidcProvider:

         return UserInfo(resp)

-    async def _parse_id_token(self, token: Token, nonce: str) -> UserInfo:
+    async def _parse_id_token(self, token: Token, nonce: str) -> CodeIDToken:
         """Return an instance of UserInfo from token's ``id_token``.

         Args:
@@ -673,7 +670,7 @@ class OidcProvider:
                 request. This value should match the one inside the token.

         Returns:
-            An object representing the user.
+            The decoded claims in the ID token.
         """
         metadata = await self.load_metadata()
         claims_params = {
@@ -684,9 +681,6 @@ class OidcProvider:
             # If we got an `access_token`, there should be an `at_hash` claim
             # in the `id_token` that we can check against.
             claims_params["access_token"] = token["access_token"]
-            claims_cls = CodeIDToken
-        else:
-            claims_cls = ImplicitIDToken

         alg_values = metadata.get("id_token_signing_alg_values_supported", ["RS256"])
         jwt = JsonWebToken(alg_values)
@@ -703,7 +697,7 @@ class OidcProvider:
             claims = jwt.decode(
                 id_token,
                 key=jwk_set,
-                claims_cls=claims_cls,
+                claims_cls=CodeIDToken,
                 claims_options=claim_options,
                 claims_params=claims_params,
             )
@@ -713,7 +707,7 @@ class OidcProvider:
             claims = jwt.decode(
                 id_token,
                 key=jwk_set,
-                claims_cls=claims_cls,
+                claims_cls=CodeIDToken,
                 claims_options=claim_options,
                 claims_params=claims_params,
             )
@@ -721,7 +715,8 @@ class OidcProvider:
         logger.debug("Decoded id_token JWT %r; validating", claims)

         claims.validate(leeway=120)  # allows 2 min of clock skew
-        return UserInfo(claims)
+
+        return claims

     async def handle_redirect_request(
         self,
@@ -837,8 +832,22 @@ class OidcProvider:

         logger.debug("Successfully obtained OAuth2 token data: %r", token)

-        # Now that we have a token, get the userinfo, either by decoding the
-        # `id_token` or by fetching the `userinfo_endpoint`.
+        # If there is an id_token, it should be validated, regardless of the
+        # userinfo endpoint is used or not.
+        if token.get("id_token") is not None:
+            try:
+                id_token = await self._parse_id_token(token, nonce=session_data.nonce)
+                sid = id_token.get("sid")
+            except Exception as e:
+                logger.exception("Invalid id_token")
+                self._sso_handler.render_error(request, "invalid_token", str(e))
+                return
+        else:
+            id_token = None
+            sid = None
+
+        # Now that we have a token, get the userinfo either from the `id_token`
+        # claims or by fetching the `userinfo_endpoint`.
         if self._uses_userinfo:
             try:
                 userinfo = await self._fetch_userinfo(token)
@@ -846,13 +855,14 @@ class OidcProvider:
                 logger.exception("Could not fetch userinfo")
                 self._sso_handler.render_error(request, "fetch_error", str(e))
                 return
+        elif id_token is not None:
+            userinfo = UserInfo(id_token)
         else:
-            try:
-                userinfo = await self._parse_id_token(token, nonce=session_data.nonce)
-            except Exception as e:
-                logger.exception("Invalid id_token")
-                self._sso_handler.render_error(request, "invalid_token", str(e))
-                return
+            logger.error("Missing id_token in token response")
+            self._sso_handler.render_error(
+                request, "invalid_token", "Missing id_token in token response"
+            )
+            return

         # first check if we're doing a UIA
         if session_data.ui_auth_session_id:
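Taken together, the two hunks above rework the callback so that any `id_token` is validated up front, and the user attributes are then sourced by a simple three-way branch. A condensed sketch of that branching (not the actual handler; the `id_token`, when present, has already been validated by `_parse_id_token` before this decision is made):

    from typing import Optional

    def choose_userinfo_source(has_id_token: bool, uses_userinfo: bool) -> Optional[str]:
        # Mirrors the new branching in handle_oidc_callback.
        if uses_userinfo:
            return "userinfo_endpoint"
        if has_id_token:
            return "id_token_claims"
        return None  # rendered as a "Missing id_token in token response" error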
@@ -884,7 +894,7 @@ class OidcProvider:
         # Call the mapper to register/login the user
         try:
             await self._complete_oidc_login(
-                userinfo, token, request, session_data.client_redirect_url
+                userinfo, token, request, session_data.client_redirect_url, sid
             )
         except MappingException as e:
             logger.exception("Could not map user")
@@ -896,6 +906,7 @@ class OidcProvider:
         token: Token,
         request: SynapseRequest,
         client_redirect_url: str,
+        sid: Optional[str],
     ) -> None:
         """Given a UserInfo response, complete the login flow

@@ -1008,6 +1019,7 @@ class OidcProvider:
             oidc_response_to_user_attributes,
             grandfather_existing_users,
             extra_attributes,
+            auth_provider_session_id=sid,
         )

     def _remote_id_from_userinfo(self, userinfo: UserInfo) -> str:
@@ -406,9 +406,6 @@ class PaginationHandler:
             force: set true to skip checking for joined users.
         """
         with await self.pagination_lock.write(room_id):
-            # check we know about the room
-            await self.store.get_room_version_id(room_id)
-
             # first check that we have no users in this room
             if not force:
                 joined = await self.store.is_host_joined(room_id, self._server_name)
@@ -421,7 +421,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
             self._on_shutdown,
         )

-    def _on_shutdown(self) -> None:
+    async def _on_shutdown(self) -> None:
         if self._presence_enabled:
             self.hs.get_tcp_replication().send_command(
                 ClearUserSyncsCommand(self.instance_id)
@@ -1,4 +1,5 @@
 # Copyright 2014 - 2016 OpenMarket Ltd
+# Copyright 2021 The Matrix.org Foundation C.I.C.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -116,9 +117,13 @@ class RegistrationHandler:
         self.pusher_pool = hs.get_pusherpool()

         self.session_lifetime = hs.config.registration.session_lifetime
+        self.nonrefreshable_access_token_lifetime = (
+            hs.config.registration.nonrefreshable_access_token_lifetime
+        )
         self.refreshable_access_token_lifetime = (
             hs.config.registration.refreshable_access_token_lifetime
         )
+        self.refresh_token_lifetime = hs.config.registration.refresh_token_lifetime

         init_counters_for_auth_provider("")

@@ -741,6 +746,7 @@ class RegistrationHandler:
         is_appservice_ghost: bool = False,
         auth_provider_id: Optional[str] = None,
         should_issue_refresh_token: bool = False,
+        auth_provider_session_id: Optional[str] = None,
     ) -> Tuple[str, str, Optional[int], Optional[str]]:
         """Register a device for a user and generate an access token.

@@ -751,9 +757,9 @@ class RegistrationHandler:
             device_id: The device ID to check, or None to generate a new one.
             initial_display_name: An optional display name for the device.
             is_guest: Whether this is a guest account
-            auth_provider_id: The SSO IdP the user used, if any (just used for the
-                prometheus metrics).
+            auth_provider_id: The SSO IdP the user used, if any.
             should_issue_refresh_token: Whether it should also issue a refresh token
+            auth_provider_session_id: The session ID received during login from the SSO IdP.
         Returns:
             Tuple of device ID, access token, access token expiration time and refresh token
         """
@@ -764,6 +770,8 @@ class RegistrationHandler:
             is_guest=is_guest,
             is_appservice_ghost=is_appservice_ghost,
             should_issue_refresh_token=should_issue_refresh_token,
+            auth_provider_id=auth_provider_id,
+            auth_provider_session_id=auth_provider_session_id,
         )

         login_counter.labels(
@@ -786,6 +794,8 @@ class RegistrationHandler:
         is_guest: bool = False,
         is_appservice_ghost: bool = False,
         should_issue_refresh_token: bool = False,
+        auth_provider_id: Optional[str] = None,
+        auth_provider_session_id: Optional[str] = None,
     ) -> LoginDict:
         """Helper for register_device

@@ -793,40 +803,86 @@ class RegistrationHandler:
         class and RegisterDeviceReplicationServlet.
         """
         assert not self.hs.config.worker.worker_app
-        valid_until_ms = None
+        now_ms = self.clock.time_msec()
+        access_token_expiry = None
         if self.session_lifetime is not None:
             if is_guest:
                 raise Exception(
                     "session_lifetime is not currently implemented for guest access"
                 )
-            valid_until_ms = self.clock.time_msec() + self.session_lifetime
+            access_token_expiry = now_ms + self.session_lifetime
+
+        if self.nonrefreshable_access_token_lifetime is not None:
+            if access_token_expiry is not None:
+                # Don't allow the non-refreshable access token to outlive the
+                # session.
+                access_token_expiry = min(
+                    now_ms + self.nonrefreshable_access_token_lifetime,
+                    access_token_expiry,
+                )
+            else:
+                access_token_expiry = now_ms + self.nonrefreshable_access_token_lifetime

         refresh_token = None
         refresh_token_id = None

         registered_device_id = await self.device_handler.check_device_registered(
-            user_id, device_id, initial_display_name
+            user_id,
+            device_id,
+            initial_display_name,
+            auth_provider_id=auth_provider_id,
+            auth_provider_session_id=auth_provider_session_id,
         )
         if is_guest:
-            assert valid_until_ms is None
+            assert access_token_expiry is None
             access_token = self.macaroon_gen.generate_guest_access_token(user_id)
         else:
             if should_issue_refresh_token:
+                # A refreshable access token lifetime must be configured
+                # since we're told to issue a refresh token (the caller checks
+                # that this value is set before setting this flag).
+                assert self.refreshable_access_token_lifetime is not None
+
+                # Set the expiry time of the refreshable access token
+                access_token_expiry = now_ms + self.refreshable_access_token_lifetime
+
+                # Set the refresh token expiry time (if configured)
+                refresh_token_expiry = None
+                if self.refresh_token_lifetime is not None:
+                    refresh_token_expiry = now_ms + self.refresh_token_lifetime
+
+                # Set an ultimate session expiry time (if configured)
+                ultimate_session_expiry_ts = None
+                if self.session_lifetime is not None:
+                    ultimate_session_expiry_ts = now_ms + self.session_lifetime
+
+                    # Also ensure that the issued tokens don't outlive the
+                    # session.
+                    # (It would be weird to configure a homeserver with a shorter
+                    # session lifetime than token lifetime, but may as well handle
+                    # it.)
+                    access_token_expiry = min(
+                        access_token_expiry, ultimate_session_expiry_ts
+                    )
+                    if refresh_token_expiry is not None:
+                        refresh_token_expiry = min(
+                            refresh_token_expiry, ultimate_session_expiry_ts
+                        )
+
                 (
                     refresh_token,
                     refresh_token_id,
                 ) = await self._auth_handler.create_refresh_token_for_user_id(
                     user_id,
                     device_id=registered_device_id,
+                    expiry_ts=refresh_token_expiry,
+                    ultimate_session_expiry_ts=ultimate_session_expiry_ts,
                 )
-                valid_until_ms = (
-                    self.clock.time_msec() + self.refreshable_access_token_lifetime
-                )

             access_token = await self._auth_handler.create_access_token_for_user_id(
                 user_id,
                 device_id=registered_device_id,
-                valid_until_ms=valid_until_ms,
+                valid_until_ms=access_token_expiry,
                 is_appservice_ghost=is_appservice_ghost,
                 refresh_token_id=refresh_token_id,
             )
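A worked sketch of the new expiry arithmetic in the refresh-token branch, using hypothetical lifetimes (the real values come from the registration config options added above):

    from typing import Optional, Tuple

    def compute_expiries_sketch(
        now_ms: int,
        session_lifetime: Optional[int],
        refreshable_access_token_lifetime: int,
        refresh_token_lifetime: Optional[int],
    ) -> Tuple[int, Optional[int]]:
        # Returns (access_token_expiry, refresh_token_expiry), mirroring the
        # should_issue_refresh_token branch above.
        access_token_expiry = now_ms + refreshable_access_token_lifetime

        refresh_token_expiry = None
        if refresh_token_lifetime is not None:
            refresh_token_expiry = now_ms + refresh_token_lifetime

        if session_lifetime is not None:
            ultimate_session_expiry_ts = now_ms + session_lifetime
            # Neither token may outlive the overall session.
            access_token_expiry = min(access_token_expiry, ultimate_session_expiry_ts)
            if refresh_token_expiry is not None:
                refresh_token_expiry = min(refresh_token_expiry, ultimate_session_expiry_ts)
        return access_token_expiry, refresh_token_expiry

    # e.g. 5 minute access tokens, 1 day refresh tokens, 12 hour session cap:
    print(compute_expiries_sketch(0, 12 * 3_600_000, 300_000, 86_400_000))
    # -> (300000, 43200000): the refresh token is clamped to the session end.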
@@ -834,7 +890,7 @@ class RegistrationHandler:
         return {
             "device_id": registered_device_id,
             "access_token": access_token,
-            "valid_until_ms": valid_until_ms,
+            "valid_until_ms": access_token_expiry,
             "refresh_token": refresh_token,
         }

@@ -46,6 +46,7 @@ from synapse.api.constants import (
 from synapse.api.errors import (
     AuthError,
     Codes,
+    HttpResponseException,
     LimitExceededError,
     NotFoundError,
     StoreError,
@@ -56,6 +57,8 @@ from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
 from synapse.event_auth import validate_event_for_room_version
 from synapse.events import EventBase
 from synapse.events.utils import copy_power_levels_contents
+from synapse.federation.federation_client import InvalidResponseError
+from synapse.handlers.federation import get_domains_from_state
 from synapse.rest.admin._base import assert_user_is_admin
 from synapse.storage.state import StateFilter
 from synapse.streams import EventSource
@@ -1220,6 +1223,147 @@ class RoomContextHandler:
         return results


+class TimestampLookupHandler:
+    def __init__(self, hs: "HomeServer"):
+        self.server_name = hs.hostname
+        self.store = hs.get_datastore()
+        self.state_handler = hs.get_state_handler()
+        self.federation_client = hs.get_federation_client()
+
+    async def get_event_for_timestamp(
+        self,
+        requester: Requester,
+        room_id: str,
+        timestamp: int,
+        direction: str,
+    ) -> Tuple[str, int]:
+        """Find the closest event to the given timestamp in the given direction.
+        If we can't find an event locally or the event we have locally is next to a gap,
+        it will ask other federated homeservers for an event.
+
+        Args:
+            requester: The user making the request according to the access token
+            room_id: Room to fetch the event from
+            timestamp: The point in time (inclusive) we should navigate from in
+                the given direction to find the closest event.
+            direction: ["f"|"b"] to indicate whether we should navigate forward
+                or backward from the given timestamp to find the closest event.
+
+        Returns:
+            A tuple containing the `event_id` closest to the given timestamp in
+            the given direction and the `origin_server_ts`.
+
+        Raises:
+            SynapseError if unable to find any event locally in the given direction
+        """
+
+        local_event_id = await self.store.get_event_id_for_timestamp(
+            room_id, timestamp, direction
+        )
+        logger.debug(
+            "get_event_for_timestamp: locally, we found event_id=%s closest to timestamp=%s",
+            local_event_id,
+            timestamp,
+        )
+
+        # Check for gaps in the history where events could be hiding in between
+        # the timestamp given and the event we were able to find locally
+        is_event_next_to_backward_gap = False
+        is_event_next_to_forward_gap = False
+        if local_event_id:
+            local_event = await self.store.get_event(
+                local_event_id, allow_none=False, allow_rejected=False
+            )
+
+            if direction == "f":
+                # We only need to check for a backward gap if we're looking forwards
+                # to ensure there is nothing in between.
+                is_event_next_to_backward_gap = (
+                    await self.store.is_event_next_to_backward_gap(local_event)
+                )
+            elif direction == "b":
+                # We only need to check for a forward gap if we're looking backwards
+                # to ensure there is nothing in between
+                is_event_next_to_forward_gap = (
+                    await self.store.is_event_next_to_forward_gap(local_event)
+                )
+
+        # If we found a gap, we should probably ask another homeserver first
+        # about more history in between
+        if (
+            not local_event_id
+            or is_event_next_to_backward_gap
+            or is_event_next_to_forward_gap
+        ):
+            logger.debug(
+                "get_event_for_timestamp: locally, we found event_id=%s closest to timestamp=%s which is next to a gap in event history so we're asking other homeservers first",
+                local_event_id,
+                timestamp,
+            )
+
+            # Find other homeservers from the given state in the room
+            curr_state = await self.state_handler.get_current_state(room_id)
+            curr_domains = get_domains_from_state(curr_state)
+            likely_domains = [
+                domain for domain, depth in curr_domains if domain != self.server_name
+            ]
+
+            # Loop through each homeserver candidate until we get a succesful response
+            for domain in likely_domains:
+                try:
+                    remote_response = await self.federation_client.timestamp_to_event(
+                        domain, room_id, timestamp, direction
+                    )
+                    logger.debug(
+                        "get_event_for_timestamp: response from domain(%s)=%s",
+                        domain,
+                        remote_response,
+                    )
+
+                    # TODO: Do we want to persist this as an extremity?
+                    # TODO: I think ideally, we would try to backfill from
+                    # this event and run this whole
+                    # `get_event_for_timestamp` function again to make sure
+                    # they didn't give us an event from their gappy history.
+                    remote_event_id = remote_response.event_id
+                    origin_server_ts = remote_response.origin_server_ts
+
+                    # Only return the remote event if it's closer than the local event
+                    if not local_event or (
+                        abs(origin_server_ts - timestamp)
+                        < abs(local_event.origin_server_ts - timestamp)
+                    ):
+                        return remote_event_id, origin_server_ts
+                except (HttpResponseException, InvalidResponseError) as ex:
+                    # Let's not put a high priority on some other homeserver
+                    # failing to respond or giving a random response
+                    logger.debug(
+                        "Failed to fetch /timestamp_to_event from %s because of exception(%s) %s args=%s",
+                        domain,
+                        type(ex).__name__,
+                        ex,
+                        ex.args,
+                    )
+                except Exception as ex:
+                    # But we do want to see some exceptions in our code
+                    logger.warning(
+                        "Failed to fetch /timestamp_to_event from %s because of exception(%s) %s args=%s",
+                        domain,
+                        type(ex).__name__,
+                        ex,
+                        ex.args,
+                    )
+
+        if not local_event_id:
+            raise SynapseError(
+                404,
+                "Unable to find event from %s in direction %s" % (timestamp, direction),
+                errcode=Codes.NOT_FOUND,
+            )
+
+        return local_event_id, local_event.origin_server_ts
+
+
 class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
     def __init__(self, hs: "HomeServer"):
         self.store = hs.get_datastore()
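The heart of the new handler is deciding when to fall back to federation. Reduced to a pure function (a sketch of the logic above, not part of the patch): ask other servers when there is no local candidate, or when the candidate borders a gap on the side being navigated towards.

    def should_ask_remote_sketch(
        have_local_event: bool,
        direction: str,  # "f" or "b", as in get_event_for_timestamp above
        next_to_backward_gap: bool,
        next_to_forward_gap: bool,
    ) -> bool:
        if not have_local_event:
            return True
        if direction == "f" and next_to_backward_gap:
            return True
        if direction == "b" and next_to_forward_gap:
            return True
        return False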
@@ -1391,20 +1535,13 @@ class RoomShutdownHandler:
             await self.store.block_room(room_id, requester_user_id)

         if not await self.store.get_room(room_id):
-            if block:
-                # We allow you to block an unknown room.
-                return {
-                    "kicked_users": [],
-                    "failed_to_kick_users": [],
-                    "local_aliases": [],
-                    "new_room_id": None,
-                }
-            else:
-                # But if you don't want to preventatively block another room,
-                # this function can't do anything useful.
-                raise NotFoundError(
-                    "Cannot shut down room: unknown room id %s" % (room_id,)
-                )
+            # if we don't know about the room, there is nothing left to do.
+            return {
+                "kicked_users": [],
+                "failed_to_kick_users": [],
+                "local_aliases": [],
+                "new_room_id": None,
+            }

         if new_room_user_id is not None:
             if not self.hs.is_mine_id(new_room_user_id):
@@ -36,8 +36,9 @@ from synapse.api.errors import (
     SynapseError,
     UnsupportedRoomVersionError,
 )
+from synapse.api.ratelimiting import Ratelimiter
 from synapse.events import EventBase
-from synapse.types import JsonDict
+from synapse.types import JsonDict, Requester
 from synapse.util.caches.response_cache import ResponseCache

 if TYPE_CHECKING:
@@ -93,6 +94,9 @@ class RoomSummaryHandler:
         self._event_serializer = hs.get_event_client_serializer()
         self._server_name = hs.hostname
         self._federation_client = hs.get_federation_client()
+        self._ratelimiter = Ratelimiter(
+            store=self._store, clock=hs.get_clock(), rate_hz=5, burst_count=10
+        )

         # If a user tries to fetch the same page multiple times in quick succession,
         # only process the first attempt and return its result to subsequent requests.
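For intuition, `rate_hz=5, burst_count=10` behaves roughly like a token bucket refilled at 5 tokens per second and capped at 10. The sketch below is illustrative only and is not Synapse's `Ratelimiter` implementation:

    import time

    class TokenBucketSketch:
        def __init__(self, rate_hz: float = 5.0, burst_count: int = 10):
            self.rate_hz = rate_hz
            self.burst_count = burst_count
            self.tokens = float(burst_count)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens for the elapsed time, up to the burst cap.
            self.tokens = min(
                self.burst_count, self.tokens + (now - self.last) * self.rate_hz
            )
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False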
@@ -249,7 +253,7 @@ class RoomSummaryHandler:

     async def get_room_hierarchy(
         self,
-        requester: str,
+        requester: Requester,
         requested_room_id: str,
         suggested_only: bool = False,
         max_depth: Optional[int] = None,
@@ -276,6 +280,8 @@ class RoomSummaryHandler:
         Returns:
             The JSON hierarchy dictionary.
         """
+        await self._ratelimiter.ratelimit(requester)
+
         # If a user tries to fetch the same page multiple times in quick succession,
         # only process the first attempt and return its result to subsequent requests.
         #
@@ -283,7 +289,7 @@ class RoomSummaryHandler:
         # to process multiple requests for the same page will result in errors.
         return await self._pagination_response_cache.wrap(
             (
-                requester,
+                requester.user.to_string(),
                 requested_room_id,
                 suggested_only,
                 max_depth,
@@ -291,7 +297,7 @@ class RoomSummaryHandler:
                 from_token,
             ),
             self._get_room_hierarchy,
-            requester,
+            requester.user.to_string(),
             requested_room_id,
             suggested_only,
             max_depth,
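The handler now receives the full `Requester` (the rate limiter added above needs it), while the response cache continues to key on the plain user ID string. Illustratively, with hypothetical values, the cache key tuple starts like this, followed by the remaining paging arguments shown above:

    cache_key = (
        "@alice:example.com",  # requester.user.to_string(), not the Requester itself
        "!room:example.com",   # requested_room_id
        False,                 # suggested_only
        None,                  # max_depth
    )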
@@ -365,6 +365,7 @@ class SsoHandler:
         sso_to_matrix_id_mapper: Callable[[int], Awaitable[UserAttributes]],
         grandfather_existing_users: Callable[[], Awaitable[Optional[str]]],
         extra_login_attributes: Optional[JsonDict] = None,
+        auth_provider_session_id: Optional[str] = None,
     ) -> None:
         """
         Given an SSO ID, retrieve the user ID for it and possibly register the user.
@@ -415,6 +416,8 @@ class SsoHandler:
             extra_login_attributes: An optional dictionary of extra
                 attributes to be provided to the client in the login response.

+            auth_provider_session_id: An optional session ID from the IdP.
+
         Raises:
             MappingException if there was a problem mapping the response to a user.
             RedirectException: if the mapping provider needs to redirect the user
@@ -490,6 +493,7 @@ class SsoHandler:
             client_redirect_url,
             extra_login_attributes,
             new_user=new_user,
+            auth_provider_session_id=auth_provider_session_id,
         )

     async def _call_attribute_mapper(
@@ -334,6 +334,19 @@ class SyncHandler:
         full_state: bool,
         cache_context: ResponseCacheContext[SyncRequestKey],
     ) -> SyncResult:
+        """The start of the machinery that produces a /sync response.
+
+        See https://spec.matrix.org/v1.1/client-server-api/#syncing for full details.
+
+        This method does high-level bookkeeping:
+        - tracking the kind of sync in the logging context
+        - deleting any to_device messages whose delivery has been acknowledged.
+        - deciding if we should dispatch an instant or delayed response
+        - marking the sync as being lazily loaded, if appropriate
+
+        Computing the body of the response begins in the next method,
+        `current_sync_for_user`.
+        """
         if since_token is None:
             sync_type = "initial_sync"
         elif full_state:
@@ -363,7 +376,7 @@ class SyncHandler:
                 sync_config, since_token, full_state=full_state
             )
         else:
-
+            # Otherwise, we wait for something to happen and report it to the user.
             async def current_sync_callback(
                 before_token: StreamToken, after_token: StreamToken
             ) -> SyncResult:
@@ -402,7 +415,12 @@ class SyncHandler:
         since_token: Optional[StreamToken] = None,
         full_state: bool = False,
     ) -> SyncResult:
-        """Get the sync for client needed to match what the server has now."""
+        """Generates the response body of a sync result, represented as a SyncResult.
+
+        This is a wrapper around `generate_sync_result` which starts an open tracing
+        span to track the sync. See `generate_sync_result` for the next part of your
+        indoctrination.
+        """
         with start_active_span("current_sync_for_user"):
             log_kv({"since_token": since_token})
             sync_result = await self.generate_sync_result(
@@ -560,7 +578,7 @@ class SyncHandler:
             # that have happened since `since_key` up to `end_key`, so we
             # can just use `get_room_events_stream_for_room`.
             # Otherwise, we want to return the last N events in the room
-            # in toplogical ordering.
+            # in topological ordering.
             if since_key:
                 events, end_key = await self.store.get_room_events_stream_for_room(
                     room_id,
@@ -1042,7 +1060,18 @@ class SyncHandler:
         since_token: Optional[StreamToken] = None,
         full_state: bool = False,
     ) -> SyncResult:
-        """Generates a sync result."""
+        """Generates the response body of a sync result.
+
+        This is represented by a `SyncResult` struct, which is built from small pieces
+        using a `SyncResultBuilder`. See also
+        https://spec.matrix.org/v1.1/client-server-api/#get_matrixclientv3sync
+        the `sync_result_builder` is passed as a mutable ("inout") parameter to various
+        helper functions. These retrieve and process the data which forms the sync body,
+        often writing to the `sync_result_builder` to store their output.
+
+        At the end, we transfer data from the `sync_result_builder` to a new `SyncResult`
+        instance to signify that the sync calculation is complete.
+        """
         # NB: The now_token gets changed by some of the generate_sync_* methods,
         # this is due to some of the underlying streams not supporting the ability
         # to query up to a given point.
@@ -1344,14 +1373,22 @@ class SyncHandler:
     async def _generate_sync_entry_for_account_data(
         self, sync_result_builder: "SyncResultBuilder"
     ) -> Dict[str, Dict[str, JsonDict]]:
-        """Generates the account data portion of the sync response. Populates
-        `sync_result_builder` with the result.
+        """Generates the account data portion of the sync response.
+
+        Account data (called "Client Config" in the spec) can be set either globally
+        or for a specific room. Account data consists of a list of events which
+        accumulate state, much like a room.
+
+        This function retrieves global and per-room account data. The former is written
+        to the given `sync_result_builder`. The latter is returned directly, to be
+        later written to the `sync_result_builder` on a room-by-room basis.

         Args:
             sync_result_builder

         Returns:
-            A dictionary containing the per room account data.
+            A dictionary whose keys (room ids) map to the per room account data for that
+            room.
         """
         sync_config = sync_result_builder.sync_config
         user_id = sync_result_builder.sync_config.user.to_string()
@@ -1359,7 +1396,7 @@ class SyncHandler:

         if since_token and not sync_result_builder.full_state:
             (
-                account_data,
+                global_account_data,
                 account_data_by_room,
             ) = await self.store.get_updated_account_data_for_user(
                 user_id, since_token.account_data_key
@@ -1370,23 +1407,23 @@ class SyncHandler:
             )

             if push_rules_changed:
-                account_data["m.push_rules"] = await self.push_rules_for_user(
+                global_account_data["m.push_rules"] = await self.push_rules_for_user(
                     sync_config.user
                 )
         else:
             (
-                account_data,
+                global_account_data,
                 account_data_by_room,
             ) = await self.store.get_account_data_for_user(sync_config.user.to_string())

-            account_data["m.push_rules"] = await self.push_rules_for_user(
+            global_account_data["m.push_rules"] = await self.push_rules_for_user(
                 sync_config.user
             )

         account_data_for_user = await sync_config.filter_collection.filter_account_data(
             [
                 {"type": account_data_type, "content": content}
-                for account_data_type, content in account_data.items()
+                for account_data_type, content in global_account_data.items()
             ]
         )

@@ -1460,18 +1497,31 @@ class SyncHandler:
         """Generates the rooms portion of the sync response. Populates the
         `sync_result_builder` with the result.

+        In the response that reaches the client, rooms are divided into four categories:
+        `invite`, `join`, `knock`, `leave`. These aren't the same as the four sets of
+        room ids returned by this function.
+
         Args:
             sync_result_builder
             account_data_by_room: Dictionary of per room account data

         Returns:
-            Returns a 4-tuple of
-            `(newly_joined_rooms, newly_joined_or_invited_users,
-            newly_left_rooms, newly_left_users)`
+            Returns a 4-tuple describing rooms the user has joined or left, and users who've
+            joined or left rooms any rooms the user is in. This gets used later in
+            `_generate_sync_entry_for_device_list`.
+
+            Its entries are:
+            - newly_joined_rooms
+            - newly_joined_or_invited_or_knocked_users
+            - newly_left_rooms
+            - newly_left_users
         """
+        since_token = sync_result_builder.since_token
+
+        # 1. Start by fetching all ephemeral events in rooms we've joined (if required).
         user_id = sync_result_builder.sync_config.user.to_string()
         block_all_room_ephemeral = (
-            sync_result_builder.since_token is None
+            since_token is None
             and sync_result_builder.sync_config.filter_collection.blocks_all_room_ephemeral()
         )

@@ -1485,9 +1535,8 @@ class SyncHandler:
         )
         sync_result_builder.now_token = now_token

-        # We check up front if anything has changed, if it hasn't then there is
+        # 2. We check up front if anything has changed, if it hasn't then there is
         # no point in going further.
-        since_token = sync_result_builder.since_token
         if not sync_result_builder.full_state:
             if since_token and not ephemeral_by_room and not account_data_by_room:
                 have_changed = await self._have_rooms_changed(sync_result_builder)
@@ -1500,20 +1549,8 @@ class SyncHandler:
                 logger.debug("no-oping sync")
                 return set(), set(), set(), set()

-        ignored_account_data = (
-            await self.store.get_global_account_data_by_type_for_user(
-                AccountDataTypes.IGNORED_USER_LIST, user_id=user_id
-            )
-        )
-
-        # If there is ignored users account data and it matches the proper type,
-        # then use it.
-        ignored_users: FrozenSet[str] = frozenset()
-        if ignored_account_data:
-            ignored_users_data = ignored_account_data.get("ignored_users", {})
-            if isinstance(ignored_users_data, dict):
-                ignored_users = frozenset(ignored_users_data.keys())
+        # 3. Work out which rooms need reporting in the sync response.
+        ignored_users = await self._get_ignored_users(user_id)

         if since_token:
             room_changes = await self._get_rooms_changed(
                 sync_result_builder, ignored_users
@@ -1523,7 +1560,6 @@ class SyncHandler:
             )
         else:
             room_changes = await self._get_all_rooms(sync_result_builder, ignored_users)

         tags_by_room = await self.store.get_tags_for_user(user_id)
-
         log_kv({"rooms_changed": len(room_changes.room_entries)})
@@ -1534,6 +1570,8 @@ class SyncHandler:
         newly_joined_rooms = room_changes.newly_joined_rooms
         newly_left_rooms = room_changes.newly_left_rooms

+        # 4. We need to apply further processing to `room_entries` (rooms considered
+        # joined or archived).
         async def handle_room_entries(room_entry: "RoomSyncResultBuilder") -> None:
             logger.debug("Generating room entry for %s", room_entry.room_id)
             await self._generate_room_entry(
@@ -1552,31 +1590,13 @@ class SyncHandler:
         sync_result_builder.invited.extend(invited)
         sync_result_builder.knocked.extend(knocked)

-        # Now we want to get any newly joined, invited or knocking users
-        newly_joined_or_invited_or_knocked_users = set()
-        newly_left_users = set()
-        if since_token:
-            for joined_sync in sync_result_builder.joined:
-                it = itertools.chain(
-                    joined_sync.timeline.events, joined_sync.state.values()
-                )
-                for event in it:
-                    if event.type == EventTypes.Member:
-                        if (
-                            event.membership == Membership.JOIN
-                            or event.membership == Membership.INVITE
-                            or event.membership == Membership.KNOCK
-                        ):
-                            newly_joined_or_invited_or_knocked_users.add(
-                                event.state_key
-                            )
-                        else:
-                            prev_content = event.unsigned.get("prev_content", {})
-                            prev_membership = prev_content.get("membership", None)
-                            if prev_membership == Membership.JOIN:
-                                newly_left_users.add(event.state_key)
-
-        newly_left_users -= newly_joined_or_invited_or_knocked_users
+        # 5. Work out which users have joined or left rooms we're in. We use this
+        # to build the device_list part of the sync response in
+        # `_generate_sync_entry_for_device_list`.
+        (
+            newly_joined_or_invited_or_knocked_users,
+            newly_left_users,
+        ) = sync_result_builder.calculate_user_changes()

         return (
             set(newly_joined_rooms),
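The membership scan moves into `SyncResultBuilder.calculate_user_changes()`, whose body is not shown in this diff. Based on the removed inline code, the helper presumably amounts to the following sketch:

    import itertools
    from typing import Any, Iterable, Set, Tuple

    def calculate_user_changes_sketch(
        joined_syncs: Iterable[Any],  # objects with .timeline.events and .state
    ) -> Tuple[Set[str], Set[str]]:
        # Presumed equivalent of the removed inline loop above.
        newly_joined_or_invited_or_knocked_users: Set[str] = set()
        newly_left_users: Set[str] = set()
        for joined_sync in joined_syncs:
            for event in itertools.chain(
                joined_sync.timeline.events, joined_sync.state.values()
            ):
                if event.type != "m.room.member":
                    continue
                if event.membership in ("join", "invite", "knock"):
                    newly_joined_or_invited_or_knocked_users.add(event.state_key)
                else:
                    prev_content = event.unsigned.get("prev_content", {})
                    if prev_content.get("membership") == "join":
                        newly_left_users.add(event.state_key)
        newly_left_users -= newly_joined_or_invited_or_knocked_users
        return newly_joined_or_invited_or_knocked_users, newly_left_users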
@@ -1585,11 +1605,36 @@ class SyncHandler:
             newly_left_users,
         )

+    async def _get_ignored_users(self, user_id: str) -> FrozenSet[str]:
+        """Retrieve the users ignored by the given user from their global account_data.
+
+        Returns an empty set if
+        - there is no global account_data entry for ignored_users
+        - there is such an entry, but it's not a JSON object.
+        """
+        # TODO: Can we `SELECT ignored_user_id FROM ignored_users WHERE ignorer_user_id=?;` instead?
+        ignored_account_data = (
+            await self.store.get_global_account_data_by_type_for_user(
+                AccountDataTypes.IGNORED_USER_LIST, user_id=user_id
+            )
+        )
+
+        # If there is ignored users account data and it matches the proper type,
+        # then use it.
+        ignored_users: FrozenSet[str] = frozenset()
+        if ignored_account_data:
+            ignored_users_data = ignored_account_data.get("ignored_users", {})
+            if isinstance(ignored_users_data, dict):
+                ignored_users = frozenset(ignored_users_data.keys())
+        return ignored_users
+
     async def _have_rooms_changed(
         self, sync_result_builder: "SyncResultBuilder"
     ) -> bool:
         """Returns whether there may be any new events that should be sent down
         the sync. Returns True if there are.
+
+        Does not modify the `sync_result_builder`.
         """
         user_id = sync_result_builder.sync_config.user.to_string()
         since_token = sync_result_builder.since_token
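For reference, the account data this helper reads is the `m.ignored_user_list` event type from the Matrix spec, whose content maps ignored user IDs to empty objects:

    ignored_account_data_example = {
        "ignored_users": {
            "@spammer:example.com": {},  # hypothetical user IDs
            "@troll:example.com": {},
        }
    }
    # The helper returns the frozenset of keys:
    ignored = frozenset(ignored_account_data_example["ignored_users"].keys())
    assert ignored == frozenset({"@spammer:example.com", "@troll:example.com"})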
@@ -1597,12 +1642,13 @@ class SyncHandler:

         assert since_token

-        # Get a list of membership change events that have happened.
-        rooms_changed = await self.store.get_membership_changes_for_user(
+        # Get a list of membership change events that have happened to the user
+        # requesting the sync.
+        membership_changes = await self.store.get_membership_changes_for_user(
             user_id, since_token.room_key, now_token.room_key
         )

-        if rooms_changed:
+        if membership_changes:
             return True

         stream_id = since_token.room_key.stream
@@ -1614,7 +1660,25 @@ class SyncHandler:
     async def _get_rooms_changed(
         self, sync_result_builder: "SyncResultBuilder", ignored_users: FrozenSet[str]
     ) -> _RoomChanges:
-        """Gets the the changes that have happened since the last sync."""
+        """Determine the changes in rooms to report to the user.
+
+        Ideally, we want to report all events whose stream ordering `s` lies in the
+        range `since_token < s <= now_token`, where the two tokens are read from the
+        sync_result_builder.
+
+        If there are too many events in that range to report, things get complicated.
+        In this situation we return a truncated list of the most recent events, and
+        indicate in the response that there is a "gap" of omitted events. Additionally:
+
+        - we include a "state_delta", to describe the changes in state over the gap,
+        - we include all membership events applying to the user making the request,
+          even those in the gap.
+
+        See the spec for the rationale:
+        https://spec.matrix.org/v1.1/client-server-api/#syncing
+
+        The sync_result_builder is not modified by this function.
+        """
         user_id = sync_result_builder.sync_config.user.to_string()
         since_token = sync_result_builder.since_token
         now_token = sync_result_builder.now_token
@ -1622,21 +1686,36 @@ class SyncHandler:
|
|||||||
|
|
||||||
assert since_token
|
assert since_token
|
||||||
|
|
||||||
# Get a list of membership change events that have happened.
|
# The spec
|
||||||
rooms_changed = await self.store.get_membership_changes_for_user(
|
# https://spec.matrix.org/v1.1/client-server-api/#get_matrixclientv3sync
|
||||||
|
# notes that membership events need special consideration:
|
||||||
|
#
|
||||||
|
# > When a sync is limited, the server MUST return membership events for events
|
||||||
|
# > in the gap (between since and the start of the returned timeline), regardless
|
||||||
|
# > as to whether or not they are redundant.
|
||||||
|
#
|
||||||
|
# We fetch such events here, but we only seem to use them for categorising rooms
|
||||||
|
# as newly joined, newly left, invited or knocked.
|
||||||
|
# TODO: we've already called this function and ran this query in
|
||||||
|
# _have_rooms_changed. We could keep the results in memory to avoid a
|
||||||
|
# second query, at the cost of more complicated source code.
|
||||||
|
membership_change_events = await self.store.get_membership_changes_for_user(
|
||||||
user_id, since_token.room_key, now_token.room_key
|
user_id, since_token.room_key, now_token.room_key
|
||||||
)
|
)
|
||||||
|
|
||||||
mem_change_events_by_room_id: Dict[str, List[EventBase]] = {}
|
mem_change_events_by_room_id: Dict[str, List[EventBase]] = {}
|
||||||
for event in rooms_changed:
|
for event in membership_change_events:
|
||||||
mem_change_events_by_room_id.setdefault(event.room_id, []).append(event)
|
mem_change_events_by_room_id.setdefault(event.room_id, []).append(event)
|
||||||
|
|
||||||
newly_joined_rooms = []
|
newly_joined_rooms: List[str] = []
|
||||||
newly_left_rooms = []
|
newly_left_rooms: List[str] = []
|
||||||
room_entries = []
|
room_entries: List[RoomSyncResultBuilder] = []
|
||||||
invited = []
|
invited: List[InvitedSyncResult] = []
|
||||||
knocked = []
|
knocked: List[KnockedSyncResult] = []
|
||||||
for room_id, events in mem_change_events_by_room_id.items():
|
for room_id, events in mem_change_events_by_room_id.items():
|
||||||
|
# The body of this loop will add this room to at least one of the five lists
|
||||||
|
# above. Things get messy if you've e.g. joined, left, joined then left the
|
||||||
|
# room all in the same sync period.
|
||||||
logger.debug(
|
logger.debug(
|
||||||
"Membership changes in %s: [%s]",
|
"Membership changes in %s: [%s]",
|
||||||
room_id,
|
room_id,
|
||||||
@ -1691,6 +1770,7 @@ class SyncHandler:
|
|||||||
|
|
||||||
if not non_joins:
|
if not non_joins:
|
||||||
continue
|
continue
|
||||||
|
last_non_join = non_joins[-1]
|
||||||
|
|
||||||
# Check if we have left the room. This can either be because we were
|
# Check if we have left the room. This can either be because we were
|
||||||
# joined before *or* that we since joined and then left.
|
# joined before *or* that we since joined and then left.
|
||||||
@ -1712,18 +1792,18 @@ class SyncHandler:
|
|||||||
newly_left_rooms.append(room_id)
|
newly_left_rooms.append(room_id)
|
||||||
|
|
||||||
# Only bother if we're still currently invited
|
# Only bother if we're still currently invited
|
||||||
should_invite = non_joins[-1].membership == Membership.INVITE
|
should_invite = last_non_join.membership == Membership.INVITE
|
||||||
if should_invite:
|
if should_invite:
|
||||||
if event.sender not in ignored_users:
|
if last_non_join.sender not in ignored_users:
|
||||||
invite_room_sync = InvitedSyncResult(room_id, invite=non_joins[-1])
|
invite_room_sync = InvitedSyncResult(room_id, invite=last_non_join)
|
||||||
if invite_room_sync:
|
if invite_room_sync:
|
||||||
invited.append(invite_room_sync)
|
invited.append(invite_room_sync)
|
||||||
|
|
||||||
# Only bother if our latest membership in the room is knock (and we haven't
|
# Only bother if our latest membership in the room is knock (and we haven't
|
||||||
# been accepted/rejected in the meantime).
|
# been accepted/rejected in the meantime).
|
||||||
should_knock = non_joins[-1].membership == Membership.KNOCK
|
should_knock = last_non_join.membership == Membership.KNOCK
|
||||||
if should_knock:
|
if should_knock:
|
||||||
knock_room_sync = KnockedSyncResult(room_id, knock=non_joins[-1])
|
knock_room_sync = KnockedSyncResult(room_id, knock=last_non_join)
|
||||||
if knock_room_sync:
|
if knock_room_sync:
|
||||||
knocked.append(knock_room_sync)
|
knocked.append(knock_room_sync)
|
||||||
|
|
||||||
@ -1781,7 +1861,9 @@ class SyncHandler:
|
|||||||
|
|
||||||
timeline_limit = sync_config.filter_collection.timeline_limit()
|
timeline_limit = sync_config.filter_collection.timeline_limit()
|
||||||
|
|
||||||
# Get all events for rooms we're currently joined to.
|
# Get all events since the `from_key` in rooms we're currently joined to.
|
||||||
|
# If there are too many, we get the most recent events only. This leaves
|
||||||
|
# a "gap" in the timeline, as described by the spec for /sync.
|
||||||
room_to_events = await self.store.get_room_events_stream_for_rooms(
|
room_to_events = await self.store.get_room_events_stream_for_rooms(
|
||||||
room_ids=sync_result_builder.joined_room_ids,
|
room_ids=sync_result_builder.joined_room_ids,
|
||||||
from_key=since_token.room_key,
|
from_key=since_token.room_key,
|
||||||
@ -1842,6 +1924,10 @@ class SyncHandler:
|
|||||||
) -> _RoomChanges:
|
) -> _RoomChanges:
|
||||||
"""Returns entries for all rooms for the user.
|
"""Returns entries for all rooms for the user.
|
||||||
|
|
||||||
|
Like `_get_rooms_changed`, but assumes the `since_token` is `None`.
|
||||||
|
|
||||||
|
This function does not modify the sync_result_builder.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
sync_result_builder
|
sync_result_builder
|
||||||
ignored_users: Set of users ignored by user.
|
ignored_users: Set of users ignored by user.
|
||||||
@ -1853,16 +1939,9 @@ class SyncHandler:
|
|||||||
now_token = sync_result_builder.now_token
|
now_token = sync_result_builder.now_token
|
||||||
sync_config = sync_result_builder.sync_config
|
sync_config = sync_result_builder.sync_config
|
||||||
|
|
||||||
membership_list = (
|
|
||||||
Membership.INVITE,
|
|
||||||
Membership.KNOCK,
|
|
||||||
Membership.JOIN,
|
|
||||||
Membership.LEAVE,
|
|
||||||
Membership.BAN,
|
|
||||||
)
|
|
||||||
|
|
||||||
room_list = await self.store.get_rooms_for_local_user_where_membership_is(
|
room_list = await self.store.get_rooms_for_local_user_where_membership_is(
|
||||||
user_id=user_id, membership_list=membership_list
|
user_id=user_id,
|
||||||
|
membership_list=Membership.LIST,
|
||||||
)
|
)
|
||||||
|
|
||||||
room_entries = []
|
room_entries = []
|
||||||
@ -2212,8 +2291,7 @@ def _calculate_state(
|
|||||||
# to only include membership events for the senders in the timeline.
|
# to only include membership events for the senders in the timeline.
|
||||||
# In practice, we can do this by removing them from the p_ids list,
|
# In practice, we can do this by removing them from the p_ids list,
|
||||||
# which is the list of relevant state we know we have already sent to the client.
|
# which is the list of relevant state we know we have already sent to the client.
|
||||||
# see https://github.com/matrix-org/synapse/pull/2970
|
# see https://github.com/matrix-org/synapse/pull/2970/files/efcdacad7d1b7f52f879179701c7e0d9b763511f#r204732809
|
||||||
# /files/efcdacad7d1b7f52f879179701c7e0d9b763511f#r204732809
|
|
||||||
|
|
||||||
if lazy_load_members:
|
if lazy_load_members:
|
||||||
p_ids.difference_update(
|
p_ids.difference_update(
|
||||||
@ -2262,6 +2340,39 @@ class SyncResultBuilder:
|
|||||||
groups: Optional[GroupsSyncResult] = None
|
groups: Optional[GroupsSyncResult] = None
|
||||||
to_device: List[JsonDict] = attr.Factory(list)
|
to_device: List[JsonDict] = attr.Factory(list)
|
||||||
|
|
||||||
|
def calculate_user_changes(self) -> Tuple[Set[str], Set[str]]:
|
||||||
|
"""Work out which other users have joined or left rooms we are joined to.
|
||||||
|
|
||||||
|
This data only is only useful for an incremental sync.
|
||||||
|
|
||||||
|
The SyncResultBuilder is not modified by this function.
|
||||||
|
"""
|
||||||
|
newly_joined_or_invited_or_knocked_users = set()
|
||||||
|
newly_left_users = set()
|
||||||
|
if self.since_token:
|
||||||
|
for joined_sync in self.joined:
|
||||||
|
it = itertools.chain(
|
||||||
|
joined_sync.timeline.events, joined_sync.state.values()
|
||||||
|
)
|
||||||
|
for event in it:
|
||||||
|
if event.type == EventTypes.Member:
|
||||||
|
if (
|
||||||
|
event.membership == Membership.JOIN
|
||||||
|
or event.membership == Membership.INVITE
|
||||||
|
or event.membership == Membership.KNOCK
|
||||||
|
):
|
||||||
|
newly_joined_or_invited_or_knocked_users.add(
|
||||||
|
event.state_key
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
prev_content = event.unsigned.get("prev_content", {})
|
||||||
|
prev_membership = prev_content.get("membership", None)
|
||||||
|
if prev_membership == Membership.JOIN:
|
||||||
|
newly_left_users.add(event.state_key)
|
||||||
|
|
||||||
|
newly_left_users -= newly_joined_or_invited_or_knocked_users
|
||||||
|
return newly_joined_or_invited_or_knocked_users, newly_left_users
|
||||||
|
|
||||||
|
|
||||||
@attr.s(slots=True, auto_attribs=True)
|
@attr.s(slots=True, auto_attribs=True)
|
||||||
class RoomSyncResultBuilder:
|
class RoomSyncResultBuilder:
|
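The classification that `calculate_user_changes` performs can be shown in isolation. A minimal sketch, with plain tuples standing in for Matrix member events (`classify` and its inputs are invented for illustration only):

from typing import Iterable, Set, Tuple

JOINISH = {"join", "invite", "knock"}

def classify(
    member_events: Iterable[Tuple[str, str, str]]
) -> Tuple[Set[str], Set[str]]:
    # member_events: (state_key, membership, prev_membership) triples.
    newly_joined_or_invited_or_knocked: Set[str] = set()
    newly_left: Set[str] = set()
    for state_key, membership, prev_membership in member_events:
        if membership in JOINISH:
            newly_joined_or_invited_or_knocked.add(state_key)
        elif prev_membership == "join":
            newly_left.add(state_key)
    # A user who left and then rejoined in the window counts as joined, not left.
    newly_left -= newly_joined_or_invited_or_knocked
    return newly_joined_or_invited_or_knocked, newly_left

assert classify(
    [("@a:hs", "leave", "join"), ("@a:hs", "join", "leave")]
) == ({"@a:hs"}, set())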
@@ -79,6 +79,35 @@ def parse_integer(
     return parse_integer_from_args(args, name, default, required)
 
 
+@overload
+def parse_integer_from_args(
+    args: Mapping[bytes, Sequence[bytes]],
+    name: str,
+    default: Optional[int] = None,
+) -> Optional[int]:
+    ...
+
+
+@overload
+def parse_integer_from_args(
+    args: Mapping[bytes, Sequence[bytes]],
+    name: str,
+    *,
+    required: Literal[True],
+) -> int:
+    ...
+
+
+@overload
+def parse_integer_from_args(
+    args: Mapping[bytes, Sequence[bytes]],
+    name: str,
+    default: Optional[int] = None,
+    required: bool = False,
+) -> Optional[int]:
+    ...
+
+
 def parse_integer_from_args(
     args: Mapping[bytes, Sequence[bytes]],
     name: str,
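These overloads only change what mypy infers at call sites; runtime behaviour is untouched. A caller-side sketch (the `args` value is invented):

from typing import Mapping, Sequence

from synapse.http.servlet import parse_integer_from_args

args: Mapping[bytes, Sequence[bytes]] = {b"limit": [b"10"]}

# With the Literal[True] overload, mypy narrows the result to `int` ...
limit = parse_integer_from_args(args, "limit", required=True)
# ... while the default/optional forms stay `Optional[int]`.
maybe_limit = parse_integer_from_args(args, "limit", default=5)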
@@ -24,6 +24,7 @@ from typing import (
     List,
     Optional,
     Tuple,
+    TypeVar,
     Union,
 )
 
@@ -81,10 +82,19 @@ from synapse.http.server import (
 )
 from synapse.http.servlet import parse_json_object_from_request
 from synapse.http.site import SynapseRequest
-from synapse.logging.context import make_deferred_yieldable, run_in_background
+from synapse.logging.context import (
+    defer_to_thread,
+    make_deferred_yieldable,
+    run_in_background,
+)
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.rest.client.login import LoginResponse
 from synapse.storage import DataStore
+from synapse.storage.background_updates import (
+    DEFAULT_BATCH_SIZE_CALLBACK,
+    MIN_BATCH_SIZE_CALLBACK,
+    ON_UPDATE_CALLBACK,
+)
 from synapse.storage.database import DatabasePool, LoggingTransaction
 from synapse.storage.databases.main.roommember import ProfileInfo
 from synapse.storage.state import StateFilter
@@ -98,12 +108,16 @@ from synapse.types import (
     create_requester,
 )
 from synapse.util import Clock
+from synapse.util.async_helpers import maybe_awaitable
 from synapse.util.caches.descriptors import cached
 
 if TYPE_CHECKING:
     from synapse.app.generic_worker import GenericWorkerSlavedStore
     from synapse.server import HomeServer
 
+
+T = TypeVar("T")
+
 """
 This package defines the 'stable' API which can be used by extension modules which
 are loaded into Synapse.
@@ -307,7 +321,25 @@ class ModuleApi:
             auth_checkers=auth_checkers,
         )
 
-    def register_web_resource(self, path: str, resource: Resource):
+    def register_background_update_controller_callbacks(
+        self,
+        on_update: ON_UPDATE_CALLBACK,
+        default_batch_size: Optional[DEFAULT_BATCH_SIZE_CALLBACK] = None,
+        min_batch_size: Optional[MIN_BATCH_SIZE_CALLBACK] = None,
+    ) -> None:
+        """Registers background update controller callbacks.
+
+        Added in Synapse v1.49.0.
+        """
+
+        for db in self._hs.get_datastores().databases:
+            db.updates.register_update_controller_callbacks(
+                on_update=on_update,
+                default_batch_size=default_batch_size,
+                min_batch_size=min_batch_size,
+            )
+
+    def register_web_resource(self, path: str, resource: Resource) -> None:
         """Registers a web resource to be served at the given path.
 
         This function should be called during initialisation of the module.
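A sketch of a module wiring into this new hook. This is hypothetical module code: `MyBackgroundUpdateController` is invented, and it assumes — going by the callback aliases imported above — that `on_update` must return an async context manager, with the yielded integer steering how long one iteration of the update should run for:

import contextlib
from typing import AsyncIterator

class MyBackgroundUpdateController:
    def __init__(self, config, api):
        self._api = api
        # Take over scheduling of background updates on every database.
        api.register_background_update_controller_callbacks(
            on_update=self._on_update,
            default_batch_size=self._default_batch_size,
        )

    @contextlib.asynccontextmanager
    async def _on_update(
        self, update_name: str, database_name: str, one_shot: bool
    ) -> AsyncIterator[int]:
        # Entered around each iteration of the named background update.
        yield 500  # hypothetical per-iteration target duration, in ms

    async def _default_batch_size(self, update_name: str, database_name: str) -> int:
        return 100  # hypothetical batch size for the first iteration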
@@ -432,7 +464,7 @@ class ModuleApi:
             username: provided user id
 
         Returns:
-            str: qualified @user:id
+            qualified @user:id
         """
         if username.startswith("@"):
             return username
@@ -468,7 +500,7 @@ class ModuleApi:
         """
         return await self._store.user_get_threepids(user_id)
 
-    def check_user_exists(self, user_id: str):
+    def check_user_exists(self, user_id: str) -> "defer.Deferred[Optional[str]]":
         """Check if user exists.
 
         Added in Synapse v0.25.0.
@@ -477,13 +509,18 @@ class ModuleApi:
             user_id: Complete @user:id
 
         Returns:
-            Deferred[str|None]: Canonical (case-corrected) user_id, or None
+            Canonical (case-corrected) user_id, or None
                 if the user is not registered.
         """
         return defer.ensureDeferred(self._auth_handler.check_user_exists(user_id))
 
     @defer.inlineCallbacks
-    def register(self, localpart, displayname=None, emails: Optional[List[str]] = None):
+    def register(
+        self,
+        localpart: str,
+        displayname: Optional[str] = None,
+        emails: Optional[List[str]] = None,
+    ) -> Generator["defer.Deferred[Any]", Any, Tuple[str, str]]:
         """Registers a new user with given localpart and optional displayname, emails.
 
         Also returns an access token for the new user.
@@ -495,12 +532,12 @@ class ModuleApi:
         Added in Synapse v0.25.0.
 
         Args:
-            localpart (str): The localpart of the new user.
-            displayname (str|None): The displayname of the new user.
-            emails (List[str]): Emails to bind to the new user.
+            localpart: The localpart of the new user.
+            displayname: The displayname of the new user.
+            emails: Emails to bind to the new user.
 
         Returns:
-            Deferred[tuple[str, str]]: a 2-tuple of (user_id, access_token)
+            a 2-tuple of (user_id, access_token)
         """
         logger.warning(
             "Using deprecated ModuleApi.register which creates a dummy user device."
@@ -510,23 +547,26 @@ class ModuleApi:
         return user_id, access_token
 
     def register_user(
-        self, localpart, displayname=None, emails: Optional[List[str]] = None
-    ):
+        self,
+        localpart: str,
+        displayname: Optional[str] = None,
+        emails: Optional[List[str]] = None,
+    ) -> "defer.Deferred[str]":
         """Registers a new user with given localpart and optional displayname, emails.
 
         Added in Synapse v1.2.0.
 
         Args:
-            localpart (str): The localpart of the new user.
-            displayname (str|None): The displayname of the new user.
-            emails (List[str]): Emails to bind to the new user.
+            localpart: The localpart of the new user.
+            displayname: The displayname of the new user.
+            emails: Emails to bind to the new user.
 
         Raises:
             SynapseError if there is an error performing the registration. Check the
                 'errcode' property for more information on the reason for failure
 
         Returns:
-            defer.Deferred[str]: user_id
+            user_id
         """
         return defer.ensureDeferred(
             self._hs.get_registration_handler().register_user(
@@ -536,20 +576,25 @@ class ModuleApi:
             )
         )
 
-    def register_device(self, user_id, device_id=None, initial_display_name=None):
+    def register_device(
+        self,
+        user_id: str,
+        device_id: Optional[str] = None,
+        initial_display_name: Optional[str] = None,
+    ) -> "defer.Deferred[Tuple[str, str, Optional[int], Optional[str]]]":
         """Register a device for a user and generate an access token.
 
         Added in Synapse v1.2.0.
 
         Args:
-            user_id (str): full canonical @user:id
-            device_id (str|None): The device ID to check, or None to generate
+            user_id: full canonical @user:id
+            device_id: The device ID to check, or None to generate
                 a new one.
-            initial_display_name (str|None): An optional display name for the
+            initial_display_name: An optional display name for the
                 device.
 
         Returns:
-            defer.Deferred[tuple[str, str]]: Tuple of device ID and access token
+            Tuple of device ID, access token, access token expiration time and refresh token
         """
         return defer.ensureDeferred(
             self._hs.get_registration_handler().register_device(
@@ -582,6 +627,7 @@ class ModuleApi:
         user_id: str,
         duration_in_ms: int = (2 * 60 * 1000),
         auth_provider_id: str = "",
+        auth_provider_session_id: Optional[str] = None,
     ) -> str:
         """Generate a login token suitable for m.login.token authentication
 
@@ -599,11 +645,14 @@ class ModuleApi:
         return self._hs.get_macaroon_generator().generate_short_term_login_token(
             user_id,
             auth_provider_id,
+            auth_provider_session_id,
             duration_in_ms,
         )
 
     @defer.inlineCallbacks
-    def invalidate_access_token(self, access_token):
+    def invalidate_access_token(
+        self, access_token: str
+    ) -> Generator["defer.Deferred[Any]", Any, None]:
         """Invalidate an access token for a user
 
         Added in Synapse v0.25.0.
@@ -635,14 +684,20 @@ class ModuleApi:
                 self._auth_handler.delete_access_token(access_token)
             )
 
-    def run_db_interaction(self, desc, func, *args, **kwargs):
+    def run_db_interaction(
+        self,
+        desc: str,
+        func: Callable[..., T],
+        *args: Any,
+        **kwargs: Any,
+    ) -> "defer.Deferred[T]":
         """Run a function with a database connection
 
         Added in Synapse v0.25.0.
 
         Args:
-            desc (str): description for the transaction, for metrics etc
-            func (func): function to be run. Passed a database cursor object
+            desc: description for the transaction, for metrics etc
+            func: function to be run. Passed a database cursor object
                 as well as *args and **kwargs
             *args: positional args to be passed to func
             **kwargs: named args to be passed to func
@@ -656,7 +711,7 @@ class ModuleApi:
 
     def complete_sso_login(
         self, registered_user_id: str, request: SynapseRequest, client_redirect_url: str
-    ):
+    ) -> None:
         """Complete a SSO login by redirecting the user to a page to confirm whether they
         want their access token sent to `client_redirect_url`, or redirect them to that
         URL with a token directly if the URL matches with one of the whitelisted clients.
@@ -686,7 +741,7 @@ class ModuleApi:
         client_redirect_url: str,
         new_user: bool = False,
         auth_provider_id: str = "<unknown>",
-    ):
+    ) -> None:
         """Complete a SSO login by redirecting the user to a page to confirm whether they
         want their access token sent to `client_redirect_url`, or redirect them to that
         URL with a token directly if the URL matches with one of the whitelisted clients.
@@ -925,11 +980,11 @@ class ModuleApi:
         self,
         f: Callable,
         msec: float,
-        *args,
+        *args: object,
         desc: Optional[str] = None,
         run_on_all_instances: bool = False,
-        **kwargs,
-    ):
+        **kwargs: object,
+    ) -> None:
         """Wraps a function as a background process and calls it repeatedly.
 
         NOTE: Will only run on the instance that is configured to run
@@ -960,9 +1015,7 @@ class ModuleApi:
                 run_as_background_process,
                 msec,
                 desc,
-                f,
-                *args,
-                **kwargs,
+                lambda: maybe_awaitable(f(*args, **kwargs)),
             )
         else:
             logger.warning(
@@ -970,13 +1023,18 @@ class ModuleApi:
                 f,
             )
 
+    async def sleep(self, seconds: float) -> None:
+        """Sleeps for the given number of seconds."""
+
+        await self._clock.sleep(seconds)
+
     async def send_mail(
         self,
         recipient: str,
         subject: str,
         html: str,
         text: str,
-    ):
+    ) -> None:
         """Send an email on behalf of the homeserver.
 
         Added in Synapse v1.39.0.
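Wrapping the call site as `lambda: maybe_awaitable(f(*args, **kwargs))` is what lets the looping background call helper shown above accept both sync and async functions. A standalone sketch of `maybe_awaitable`'s pass-through behaviour, assuming only that it returns an awaitable either way:

import asyncio

from synapse.util.async_helpers import maybe_awaitable

def sync_fn() -> int:
    return 1

async def async_fn() -> int:
    return 2

async def main() -> None:
    assert await maybe_awaitable(sync_fn()) == 1   # plain value passes through
    assert await maybe_awaitable(async_fn()) == 2  # coroutine gets awaited

asyncio.run(main())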
@@ -1124,6 +1182,26 @@ class ModuleApi:
 
         return {key: state_events[event_id] for key, event_id in state_ids.items()}
 
+    async def defer_to_thread(
+        self,
+        f: Callable[..., T],
+        *args: Any,
+        **kwargs: Any,
+    ) -> T:
+        """Runs the given function in a separate thread from Synapse's thread pool.
+
+        Added in Synapse v1.49.0.
+
+        Args:
+            f: The function to run.
+            args: The function's arguments.
+            kwargs: The function's keyword arguments.
+
+        Returns:
+            The return value of the function once ran in a thread.
+        """
+        return await defer_to_thread(self._hs.get_reactor(), f, *args, **kwargs)
+
 
 class PublicRoomListManager:
     """Contains methods for adding to, removing from and querying whether a room
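A usage sketch for the new `ModuleApi.defer_to_thread`. This is hypothetical module code: `module_api` stands for the `ModuleApi` instance handed to a module, and `slow_hash` is an invented example of blocking work that should stay off the reactor thread:

import hashlib

def slow_hash(data: bytes) -> str:
    # CPU-bound work that would otherwise block the reactor.
    return hashlib.pbkdf2_hmac("sha256", data, b"salt", 500_000).hex()

async def handle_request(module_api, data: bytes) -> str:
    # Runs slow_hash in Synapse's thread pool and awaits the result.
    return await module_api.defer_to_thread(slow_hash, data)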
@@ -21,6 +21,8 @@ from twisted.internet.interfaces import IDelayedCall
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.push import Pusher, PusherConfig, PusherConfigException, ThrottleParams
 from synapse.push.mailer import Mailer
+from synapse.push.push_types import EmailReason
+from synapse.storage.databases.main.event_push_actions import EmailPushAction
 from synapse.util.threepids import validate_email
 
 if TYPE_CHECKING:
@@ -190,7 +192,7 @@ class EmailPusher(Pusher):
                 # we then consider all previously outstanding notifications
                 # to be delivered.
 
-                reason = {
+                reason: EmailReason = {
                     "room_id": push_action["room_id"],
                     "now": self.clock.time_msec(),
                     "received_at": received_at,
@@ -275,7 +277,7 @@ class EmailPusher(Pusher):
         return may_send_at
 
     async def sent_notif_update_throttle(
-        self, room_id: str, notified_push_action: dict
+        self, room_id: str, notified_push_action: EmailPushAction
     ) -> None:
         # We have sent a notification, so update the throttle accordingly.
         # If the event that triggered the notif happened more than
@@ -315,7 +317,9 @@ class EmailPusher(Pusher):
             self.pusher_id, room_id, self.throttle_params[room_id]
         )
 
-    async def send_notification(self, push_actions: List[dict], reason: dict) -> None:
+    async def send_notification(
+        self, push_actions: List[EmailPushAction], reason: EmailReason
+    ) -> None:
         logger.info("Sending notif email for user %r", self.user_id)
 
         await self.mailer.send_notification_mail(
@@ -26,6 +26,7 @@ from synapse.events import EventBase
 from synapse.logging import opentracing
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.push import Pusher, PusherConfig, PusherConfigException
+from synapse.storage.databases.main.event_push_actions import HttpPushAction
 
 from . import push_rule_evaluator, push_tools
 
@@ -273,7 +274,7 @@ class HttpPusher(Pusher):
             )
             break
 
-    async def _process_one(self, push_action: dict) -> bool:
+    async def _process_one(self, push_action: HttpPushAction) -> bool:
         if "notify" not in push_action["actions"]:
             return True
 
@@ -14,7 +14,7 @@
 
 import logging
 import urllib.parse
-from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, TypeVar
+from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, TypeVar
 
 import bleach
 import jinja2
@@ -28,6 +28,14 @@ from synapse.push.presentable_names import (
     descriptor_from_member_events,
     name_from_member_event,
 )
+from synapse.push.push_types import (
+    EmailReason,
+    MessageVars,
+    NotifVars,
+    RoomVars,
+    TemplateVars,
+)
+from synapse.storage.databases.main.event_push_actions import EmailPushAction
 from synapse.storage.state import StateFilter
 from synapse.types import StateMap, UserID
 from synapse.util.async_helpers import concurrently_execute
@@ -135,7 +143,7 @@ class Mailer:
             % urllib.parse.urlencode(params)
         )
 
-        template_vars = {"link": link}
+        template_vars: TemplateVars = {"link": link}
 
         await self.send_email(
             email_address,
@@ -165,7 +173,7 @@ class Mailer:
             % urllib.parse.urlencode(params)
         )
 
-        template_vars = {"link": link}
+        template_vars: TemplateVars = {"link": link}
 
         await self.send_email(
             email_address,
@@ -196,7 +204,7 @@ class Mailer:
             % urllib.parse.urlencode(params)
         )
 
-        template_vars = {"link": link}
+        template_vars: TemplateVars = {"link": link}
 
         await self.send_email(
             email_address,
@@ -210,8 +218,8 @@ class Mailer:
         app_id: str,
         user_id: str,
         email_address: str,
-        push_actions: Iterable[Dict[str, Any]],
-        reason: Dict[str, Any],
+        push_actions: Iterable[EmailPushAction],
+        reason: EmailReason,
     ) -> None:
         """
         Send email regarding a user's room notifications
@@ -230,7 +238,7 @@ class Mailer:
             [pa["event_id"] for pa in push_actions]
         )
 
-        notifs_by_room: Dict[str, List[Dict[str, Any]]] = {}
+        notifs_by_room: Dict[str, List[EmailPushAction]] = {}
         for pa in push_actions:
             notifs_by_room.setdefault(pa["room_id"], []).append(pa)
 
@@ -258,7 +266,7 @@ class Mailer:
         # actually sort our so-called rooms_in_order list, most recent room first
         rooms_in_order.sort(key=lambda r: -(notifs_by_room[r][-1]["received_ts"] or 0))
 
-        rooms: List[Dict[str, Any]] = []
+        rooms: List[RoomVars] = []
 
         for r in rooms_in_order:
             roomvars = await self._get_room_vars(
@@ -289,7 +297,7 @@ class Mailer:
             notifs_by_room, state_by_room, notif_events, reason
         )
 
-        template_vars = {
+        template_vars: TemplateVars = {
             "user_display_name": user_display_name,
             "unsubscribe_link": self._make_unsubscribe_link(
                 user_id, app_id, email_address
@@ -302,10 +310,10 @@ class Mailer:
         await self.send_email(email_address, summary_text, template_vars)
 
     async def send_email(
-        self, email_address: str, subject: str, extra_template_vars: Dict[str, Any]
+        self, email_address: str, subject: str, extra_template_vars: TemplateVars
     ) -> None:
         """Send an email with the given information and template text"""
-        template_vars = {
+        template_vars: TemplateVars = {
             "app_name": self.app_name,
             "server_name": self.hs.config.server.server_name,
         }
@@ -327,10 +335,10 @@ class Mailer:
         self,
         room_id: str,
         user_id: str,
-        notifs: Iterable[Dict[str, Any]],
+        notifs: Iterable[EmailPushAction],
         notif_events: Dict[str, EventBase],
         room_state_ids: StateMap[str],
-    ) -> Dict[str, Any]:
+    ) -> RoomVars:
         """
         Generate the variables for notifications on a per-room basis.
 
@@ -356,7 +364,7 @@ class Mailer:
 
         room_name = await calculate_room_name(self.store, room_state_ids, user_id)
 
-        room_vars: Dict[str, Any] = {
+        room_vars: RoomVars = {
             "title": room_name,
             "hash": string_ordinal_total(room_id),  # See sender avatar hash
             "notifs": [],
@@ -417,11 +425,11 @@ class Mailer:
 
     async def _get_notif_vars(
         self,
-        notif: Dict[str, Any],
+        notif: EmailPushAction,
         user_id: str,
         notif_event: EventBase,
         room_state_ids: StateMap[str],
-    ) -> Dict[str, Any]:
+    ) -> NotifVars:
         """
         Generate the variables for a single notification.
 
@@ -442,7 +450,7 @@ class Mailer:
             after_limit=CONTEXT_AFTER,
         )
 
-        ret = {
+        ret: NotifVars = {
             "link": self._make_notif_link(notif),
             "ts": notif["received_ts"],
             "messages": [],
@@ -461,8 +469,8 @@ class Mailer:
         return ret
 
     async def _get_message_vars(
-        self, notif: Dict[str, Any], event: EventBase, room_state_ids: StateMap[str]
-    ) -> Optional[Dict[str, Any]]:
+        self, notif: EmailPushAction, event: EventBase, room_state_ids: StateMap[str]
+    ) -> Optional[MessageVars]:
         """
         Generate the variables for a single event, if possible.
 
@@ -494,7 +502,9 @@ class Mailer:
 
         if sender_state_event:
             sender_name = name_from_member_event(sender_state_event)
-            sender_avatar_url = sender_state_event.content.get("avatar_url")
+            sender_avatar_url: Optional[str] = sender_state_event.content.get(
+                "avatar_url"
+            )
         else:
             # No state could be found, fallback to the MXID.
             sender_name = event.sender
@@ -504,7 +514,7 @@ class Mailer:
         # sender_hash % the number of default images to choose from
         sender_hash = string_ordinal_total(event.sender)
 
-        ret = {
+        ret: MessageVars = {
             "event_type": event.type,
             "is_historical": event.event_id != notif["event_id"],
             "id": event.event_id,
@@ -519,6 +529,8 @@ class Mailer:
             return ret
 
         msgtype = event.content.get("msgtype")
+        if not isinstance(msgtype, str):
+            msgtype = None
 
         ret["msgtype"] = msgtype
 
@@ -533,7 +545,7 @@ class Mailer:
         return ret
 
     def _add_text_message_vars(
-        self, messagevars: Dict[str, Any], event: EventBase
+        self, messagevars: MessageVars, event: EventBase
     ) -> None:
         """
         Potentially add a sanitised message body to the message variables.
@@ -543,8 +555,8 @@ class Mailer:
             event: The event under consideration.
         """
         msgformat = event.content.get("format")
-
-        messagevars["format"] = msgformat
+        if not isinstance(msgformat, str):
+            msgformat = None
 
         formatted_body = event.content.get("formatted_body")
         body = event.content.get("body")
@@ -555,7 +567,7 @@ class Mailer:
             messagevars["body_text_html"] = safe_text(body)
 
     def _add_image_message_vars(
-        self, messagevars: Dict[str, Any], event: EventBase
+        self, messagevars: MessageVars, event: EventBase
     ) -> None:
         """
         Potentially add an image URL to the message variables.
@@ -570,7 +582,7 @@ class Mailer:
     async def _make_summary_text_single_room(
         self,
         room_id: str,
-        notifs: List[Dict[str, Any]],
+        notifs: List[EmailPushAction],
         room_state_ids: StateMap[str],
         notif_events: Dict[str, EventBase],
         user_id: str,
@@ -685,10 +697,10 @@ class Mailer:
 
     async def _make_summary_text(
         self,
-        notifs_by_room: Dict[str, List[Dict[str, Any]]],
+        notifs_by_room: Dict[str, List[EmailPushAction]],
         room_state_ids: Dict[str, StateMap[str]],
         notif_events: Dict[str, EventBase],
-        reason: Dict[str, Any],
+        reason: EmailReason,
     ) -> str:
         """
         Make a summary text for the email when multiple rooms have notifications.
@@ -718,7 +730,7 @@ class Mailer:
     async def _make_summary_text_from_member_events(
         self,
         room_id: str,
-        notifs: List[Dict[str, Any]],
+        notifs: List[EmailPushAction],
         room_state_ids: StateMap[str],
         notif_events: Dict[str, EventBase],
     ) -> str:
@@ -805,7 +817,7 @@ class Mailer:
         base_url = "https://matrix.to/#"
         return "%s/%s" % (base_url, room_id)
 
-    def _make_notif_link(self, notif: Dict[str, str]) -> str:
+    def _make_notif_link(self, notif: EmailPushAction) -> str:
         """
         Generate a link to open an event in the web client.
 
synapse/push/push_types.py (new file, 136 lines)
@@ -0,0 +1,136 @@
+# Copyright 2021 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from typing import List, Optional
+
+from typing_extensions import TypedDict
+
+
+class EmailReason(TypedDict, total=False):
+    """
+    Information on the event that triggered the email to be sent
+
+    room_id: the ID of the room the event was sent in
+    now: timestamp in ms when the email is being sent out
+    room_name: a human-readable name for the room the event was sent in
+    received_at: the time in milliseconds at which the event was received
+    delay_before_mail_ms: the amount of time in milliseconds Synapse always waits
+        before ever emailing about a notification (to give the user a chance to respond
+        to other push or notice the window)
+    last_sent_ts: the time in milliseconds at which a notification was last sent
+        for an event in this room
+    throttle_ms: the minimum amount of time in milliseconds between two
+        notifications can be sent for this room
+    """
+
+    room_id: str
+    now: int
+    room_name: Optional[str]
+    received_at: int
+    delay_before_mail_ms: int
+    last_sent_ts: int
+    throttle_ms: int
+
+
+class MessageVars(TypedDict, total=False):
+    """
+    Details about a specific message to include in a notification
+
+    event_type: the type of the event
+    is_historical: a boolean, which is `False` if the message is the one
+        that triggered the notification, `True` otherwise
+    id: the ID of the event
+    ts: the time in milliseconds at which the event was sent
+    sender_name: the display name for the event's sender
+    sender_avatar_url: the avatar URL (as a `mxc://` URL) for the event's
+        sender
+    sender_hash: a hash of the user ID of the sender
+    msgtype: the type of the message
+    body_text_html: html representation of the message
+    body_text_plain: plaintext representation of the message
+    image_url: mxc url of an image, when "msgtype" is "m.image"
+    """
+
+    event_type: str
+    is_historical: bool
+    id: str
+    ts: int
+    sender_name: str
+    sender_avatar_url: Optional[str]
+    sender_hash: int
+    msgtype: Optional[str]
+    body_text_html: str
+    body_text_plain: str
+    image_url: str
+
+
+class NotifVars(TypedDict):
+    """
+    Details about an event we are about to include in a notification
+
+    link: a `matrix.to` link to the event
+    ts: the time in milliseconds at which the event was received
+    messages: a list of messages containing one message before the event, the
+        message in the event, and one message after the event.
+    """
+
+    link: str
+    ts: Optional[int]
+    messages: List[MessageVars]
+
+
+class RoomVars(TypedDict):
+    """
+    Represents a room containing events to include in the email.
+
+    title: a human-readable name for the room
+    hash: a hash of the ID of the room
+    invite: a boolean, which is `True` if the room is an invite the user hasn't
+        accepted yet, `False` otherwise
+    notifs: a list of events, or an empty list if `invite` is `True`.
+    link: a `matrix.to` link to the room
+    avatar_url: url to the room's avatar
+    """
+
+    title: Optional[str]
+    hash: int
+    invite: bool
+    notifs: List[NotifVars]
+    link: str
+    avatar_url: Optional[str]
+
+
+class TemplateVars(TypedDict, total=False):
+    """
+    Generic structure for passing to the email sender, can hold all the fields used in email templates.
+
+    app_name: name of the app/service this homeserver is associated with
+    server_name: name of our own homeserver
+    link: a link to include into the email to be sent
+    user_display_name: the display name for the user receiving the notification
+    unsubscribe_link: the link users can click to unsubscribe from email notifications
+    summary_text: a summary of the notification(s). The text used can be customised
+        by configuring the various settings in the `email.subjects` section of the
+        configuration file.
+    rooms: a list of rooms containing events to include in the email
+    reason: information on the event that triggered the email to be sent
+    """
+
+    app_name: str
+    server_name: str
+    link: str
+    user_display_name: str
+    unsubscribe_link: str
+    summary_text: str
+    rooms: List[RoomVars]
+    reason: EmailReason
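At runtime these `TypedDict`s are ordinary dicts; the gain is that mypy can now check the keys and value types the mailer assembles. A small sketch with invented values (both classes are `total=False`, so partial dicts are allowed):

from synapse.push.push_types import EmailReason, TemplateVars

reason: EmailReason = {
    "room_id": "!abc:example.com",
    "now": 1_638_900_000_000,
    "received_at": 1_638_899_990_000,
}
template_vars: TemplateVars = {
    "app_name": "Matrix",
    "summary_text": "You have 1 new message",
    "reason": reason,
}
# template_vars["summary_txt"] = "typo"  # mypy would flag this unknown key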
@ -86,7 +86,7 @@ REQUIREMENTS = [
|
|||||||
# We enforce that we have a `cryptography` version that bundles an `openssl`
|
# We enforce that we have a `cryptography` version that bundles an `openssl`
|
||||||
# with the latest security patches.
|
# with the latest security patches.
|
||||||
"cryptography>=3.4.7",
|
"cryptography>=3.4.7",
|
||||||
"ijson>=3.0",
|
"ijson>=3.1",
|
||||||
]
|
]
|
||||||
|
|
||||||
CONDITIONAL_REQUIREMENTS = {
|
CONDITIONAL_REQUIREMENTS = {
|
||||||
|
@ -46,6 +46,8 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
|
|||||||
is_guest,
|
is_guest,
|
||||||
is_appservice_ghost,
|
is_appservice_ghost,
|
||||||
should_issue_refresh_token,
|
should_issue_refresh_token,
|
||||||
|
auth_provider_id,
|
||||||
|
auth_provider_session_id,
|
||||||
):
|
):
|
||||||
"""
|
"""
|
||||||
Args:
|
Args:
|
||||||
@ -63,6 +65,8 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
|
|||||||
"is_guest": is_guest,
|
"is_guest": is_guest,
|
||||||
"is_appservice_ghost": is_appservice_ghost,
|
"is_appservice_ghost": is_appservice_ghost,
|
||||||
"should_issue_refresh_token": should_issue_refresh_token,
|
"should_issue_refresh_token": should_issue_refresh_token,
|
||||||
|
"auth_provider_id": auth_provider_id,
|
||||||
|
"auth_provider_session_id": auth_provider_session_id,
|
||||||
}
|
}
|
||||||
|
|
||||||
async def _handle_request(self, request, user_id):
|
async def _handle_request(self, request, user_id):
|
||||||
@ -73,6 +77,8 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
|
|||||||
is_guest = content["is_guest"]
|
is_guest = content["is_guest"]
|
||||||
is_appservice_ghost = content["is_appservice_ghost"]
|
is_appservice_ghost = content["is_appservice_ghost"]
|
||||||
should_issue_refresh_token = content["should_issue_refresh_token"]
|
should_issue_refresh_token = content["should_issue_refresh_token"]
|
||||||
|
auth_provider_id = content["auth_provider_id"]
|
||||||
|
auth_provider_session_id = content["auth_provider_session_id"]
|
||||||
|
|
||||||
res = await self.registration_handler.register_device_inner(
|
res = await self.registration_handler.register_device_inner(
|
||||||
user_id,
|
user_id,
|
||||||
@ -81,6 +87,8 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
|
|||||||
is_guest,
|
is_guest,
|
||||||
is_appservice_ghost=is_appservice_ghost,
|
is_appservice_ghost=is_appservice_ghost,
|
||||||
should_issue_refresh_token=should_issue_refresh_token,
|
should_issue_refresh_token=should_issue_refresh_token,
|
||||||
|
auth_provider_id=auth_provider_id,
|
||||||
|
auth_provider_session_id=auth_provider_session_id,
|
||||||
)
|
)
|
||||||
|
|
||||||
return 200, res
|
return 200, res
|
||||||
|
@ -14,10 +14,18 @@
|
|||||||
from typing import List, Optional, Tuple
|
from typing import List, Optional, Tuple
|
||||||
|
|
||||||
from synapse.storage.database import LoggingDatabaseConnection
|
from synapse.storage.database import LoggingDatabaseConnection
|
||||||
from synapse.storage.util.id_generators import _load_current_id
|
from synapse.storage.util.id_generators import AbstractStreamIdTracker, _load_current_id
|
||||||
|
|
||||||
|
|
||||||
class SlavedIdTracker:
|
class SlavedIdTracker(AbstractStreamIdTracker):
|
||||||
|
"""Tracks the "current" stream ID of a stream with a single writer.
|
||||||
|
|
||||||
|
See `AbstractStreamIdTracker` for more details.
|
||||||
|
|
||||||
|
Note that this class does not work correctly when there are multiple
|
||||||
|
writers.
|
||||||
|
"""
|
||||||
|
|
||||||
def __init__(
|
def __init__(
|
||||||
self,
|
self,
|
||||||
db_conn: LoggingDatabaseConnection,
|
db_conn: LoggingDatabaseConnection,
|
||||||
@ -36,17 +44,7 @@ class SlavedIdTracker:
|
|||||||
self._current = (max if self.step > 0 else min)(self._current, new_id)
|
self._current = (max if self.step > 0 else min)(self._current, new_id)
|
||||||
|
|
||||||
def get_current_token(self) -> int:
|
def get_current_token(self) -> int:
|
||||||
"""
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
int
|
|
||||||
"""
|
|
||||||
return self._current
|
return self._current
|
||||||
|
|
||||||
def get_current_token_for_writer(self, instance_name: str) -> int:
|
def get_current_token_for_writer(self, instance_name: str) -> int:
|
||||||
"""Returns the position of the given writer.
|
|
||||||
|
|
||||||
For streams with single writers this is equivalent to
|
|
||||||
`get_current_token`.
|
|
||||||
"""
|
|
||||||
return self.get_current_token()
|
return self.get_current_token()
|
||||||
|
--- a/synapse/replication/slave/storage/push_rule.py
+++ b/synapse/replication/slave/storage/push_rule.py
@@ -13,7 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
 from synapse.replication.tcp.streams import PushRulesStream
 from synapse.storage.databases.main.push_rule import PushRulesWorkerStore

@@ -25,9 +24,6 @@ class SlavedPushRuleStore(SlavedEventStore, PushRulesWorkerStore):
         return self._push_rules_stream_id_gen.get_current_token()

     def process_replication_rows(self, stream_name, instance_name, token, rows):
-        # We assert this for the benefit of mypy
-        assert isinstance(self._push_rules_stream_id_gen, SlavedIdTracker)
-
         if stream_name == PushRulesStream.NAME:
             self._push_rules_stream_id_gen.advance(instance_name, token)
             for row in rows:
--- a/synapse/replication/tcp/streams/events.py
+++ b/synapse/replication/tcp/streams/events.py
@@ -14,7 +14,7 @@
 # limitations under the License.
 import heapq
 from collections.abc import Iterable
-from typing import TYPE_CHECKING, List, Optional, Tuple, Type
+from typing import TYPE_CHECKING, Optional, Tuple, Type

 import attr

@@ -157,7 +157,7 @@ class EventsStream(Stream):

         # now we fetch up to that many rows from the events table

-        event_rows: List[Tuple] = await self._store.get_all_new_forward_event_rows(
+        event_rows = await self._store.get_all_new_forward_event_rows(
             instance_name, from_token, current_token, target_row_count
         )

@@ -191,7 +191,7 @@ class EventsStream(Stream):
         # finally, fetch the ex-outliers rows. We assume there are few enough of these
         # not to bother with the limit.

-        ex_outliers_rows: List[Tuple] = await self._store.get_ex_outlier_stream_rows(
+        ex_outliers_rows = await self._store.get_ex_outlier_stream_rows(
             instance_name, from_token, upper_limit
         )

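The dropped `List[Tuple]` annotations above become redundant once the called store methods declare their own return types; mypy then infers the variable type at the assignment site. A small hypothetical example of the pattern (names invented for illustration):

    import asyncio
    from typing import List, Tuple

    async def get_rows() -> List[Tuple[int, str]]:
        # Stand-in for an annotated store method.
        return [(1, "$event_a"), (2, "$event_b")]

    async def caller() -> None:
        rows = await get_rows()  # mypy infers List[Tuple[int, str]] here
        assert rows[0][0] == 1

    asyncio.run(caller())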
--- a/synapse/rest/admin/__init__.py
+++ b/synapse/rest/admin/__init__.py
@@ -17,6 +17,7 @@

 import logging
 import platform
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Optional, Tuple

 import synapse
@@ -39,6 +40,10 @@ from synapse.rest.admin.event_reports import (
     EventReportDetailRestServlet,
     EventReportsRestServlet,
 )
+from synapse.rest.admin.federation import (
+    DestinationsRestServlet,
+    ListDestinationsRestServlet,
+)
 from synapse.rest.admin.groups import DeleteGroupAdminRestServlet
 from synapse.rest.admin.media import ListMediaInRoom, register_servlets_for_media_repo
 from synapse.rest.admin.registration_tokens import (
@@ -98,7 +103,7 @@ class VersionServlet(RestServlet):
         }

     def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        return 200, self.res
+        return HTTPStatus.OK, self.res


 class PurgeHistoryRestServlet(RestServlet):
@@ -130,7 +135,7 @@ class PurgeHistoryRestServlet(RestServlet):
             event = await self.store.get_event(event_id)

             if event.room_id != room_id:
-                raise SynapseError(400, "Event is for wrong room.")
+                raise SynapseError(HTTPStatus.BAD_REQUEST, "Event is for wrong room.")

             # RoomStreamToken expects [int] not Optional[int]
             assert event.internal_metadata.stream_ordering is not None
@@ -144,7 +149,9 @@ class PurgeHistoryRestServlet(RestServlet):
             ts = body["purge_up_to_ts"]
             if not isinstance(ts, int):
                 raise SynapseError(
-                    400, "purge_up_to_ts must be an int", errcode=Codes.BAD_JSON
+                    HTTPStatus.BAD_REQUEST,
+                    "purge_up_to_ts must be an int",
+                    errcode=Codes.BAD_JSON,
                 )

             stream_ordering = await self.store.find_first_stream_ordering_after_ts(ts)
@@ -160,7 +167,9 @@ class PurgeHistoryRestServlet(RestServlet):
                     stream_ordering,
                 )
                 raise SynapseError(
-                    404, "there is no event to be purged", errcode=Codes.NOT_FOUND
+                    HTTPStatus.NOT_FOUND,
+                    "there is no event to be purged",
+                    errcode=Codes.NOT_FOUND,
                 )
             (stream, topo, _event_id) = r
             token = "t%d-%d" % (topo, stream)
@@ -173,7 +182,7 @@ class PurgeHistoryRestServlet(RestServlet):
             )
         else:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "must specify purge_up_to_event_id or purge_up_to_ts",
                 errcode=Codes.BAD_JSON,
             )
@@ -182,7 +191,7 @@ class PurgeHistoryRestServlet(RestServlet):
             room_id, token, delete_local_events=delete_local_events
         )

-        return 200, {"purge_id": purge_id}
+        return HTTPStatus.OK, {"purge_id": purge_id}


 class PurgeHistoryStatusRestServlet(RestServlet):
@@ -201,7 +210,7 @@ class PurgeHistoryStatusRestServlet(RestServlet):
         if purge_status is None:
             raise NotFoundError("purge id '%s' not found" % purge_id)

-        return 200, purge_status.asdict()
+        return HTTPStatus.OK, purge_status.asdict()


 ########################################################################################
@@ -256,6 +265,8 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ListRegistrationTokensRestServlet(hs).register(http_server)
     NewRegistrationTokenRestServlet(hs).register(http_server)
     RegistrationTokenRestServlet(hs).register(http_server)
+    DestinationsRestServlet(hs).register(http_server)
+    ListDestinationsRestServlet(hs).register(http_server)

     # Some servlets only get registered for the main process.
     if hs.config.worker.worker_app is None:
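The recurring theme in the admin servlet hunks is replacing bare status-code integers with `http.HTTPStatus` constants. Since `HTTPStatus` is a standard-library `IntEnum`, the rewritten return values compare and serialize exactly like the integers they replace, so this is a readability change with no behavioural effect. A quick demonstration:

    from http import HTTPStatus

    # HTTPStatus members are ints, so equality with the old literals holds.
    assert HTTPStatus.OK == 200
    assert HTTPStatus.BAD_REQUEST == 400
    assert HTTPStatus.FORBIDDEN == 403
    assert HTTPStatus.NOT_FOUND == 404
    print(int(HTTPStatus.OK), HTTPStatus.OK.phrase)  # 200 OK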
--- a/synapse/rest/admin/_base.py
+++ b/synapse/rest/admin/_base.py
@@ -13,6 +13,7 @@
 # limitations under the License.

 import re
+from http import HTTPStatus
 from typing import Iterable, Pattern

 from synapse.api.auth import Auth
@@ -62,4 +63,4 @@ async def assert_user_is_admin(auth: Auth, user_id: UserID) -> None:
     """
     is_admin = await auth.is_server_admin(user_id)
     if not is_admin:
-        raise AuthError(403, "You are not a server admin")
+        raise AuthError(HTTPStatus.FORBIDDEN, "You are not a server admin")
--- a/synapse/rest/admin/devices.py
+++ b/synapse/rest/admin/devices.py
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Tuple

 from synapse.api.errors import NotFoundError, SynapseError
@@ -53,7 +54,7 @@ class DeviceRestServlet(RestServlet):

         target_user = UserID.from_string(user_id)
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only lookup local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only lookup local users")

         u = await self.store.get_user_by_id(target_user.to_string())
         if u is None:
@@ -62,7 +63,7 @@ class DeviceRestServlet(RestServlet):
         device = await self.device_handler.get_device(
             target_user.to_string(), device_id
         )
-        return 200, device
+        return HTTPStatus.OK, device

     async def on_DELETE(
         self, request: SynapseRequest, user_id: str, device_id: str
@@ -71,14 +72,14 @@ class DeviceRestServlet(RestServlet):

         target_user = UserID.from_string(user_id)
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only lookup local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only lookup local users")

         u = await self.store.get_user_by_id(target_user.to_string())
         if u is None:
             raise NotFoundError("Unknown user")

         await self.device_handler.delete_device(target_user.to_string(), device_id)
-        return 200, {}
+        return HTTPStatus.OK, {}

     async def on_PUT(
         self, request: SynapseRequest, user_id: str, device_id: str
@@ -87,7 +88,7 @@ class DeviceRestServlet(RestServlet):

         target_user = UserID.from_string(user_id)
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only lookup local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only lookup local users")

         u = await self.store.get_user_by_id(target_user.to_string())
         if u is None:
@@ -97,7 +98,7 @@ class DeviceRestServlet(RestServlet):
         await self.device_handler.update_device(
             target_user.to_string(), device_id, body
         )
-        return 200, {}
+        return HTTPStatus.OK, {}


 class DevicesRestServlet(RestServlet):
@@ -124,14 +125,14 @@ class DevicesRestServlet(RestServlet):

         target_user = UserID.from_string(user_id)
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only lookup local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only lookup local users")

         u = await self.store.get_user_by_id(target_user.to_string())
         if u is None:
             raise NotFoundError("Unknown user")

         devices = await self.device_handler.get_devices_by_user(target_user.to_string())
-        return 200, {"devices": devices, "total": len(devices)}
+        return HTTPStatus.OK, {"devices": devices, "total": len(devices)}


 class DeleteDevicesRestServlet(RestServlet):
@@ -155,7 +156,7 @@ class DeleteDevicesRestServlet(RestServlet):

         target_user = UserID.from_string(user_id)
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only lookup local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only lookup local users")

         u = await self.store.get_user_by_id(target_user.to_string())
         if u is None:
@@ -167,4 +168,4 @@ class DeleteDevicesRestServlet(RestServlet):
         await self.device_handler.delete_devices(
             target_user.to_string(), body["devices"]
         )
-        return 200, {}
+        return HTTPStatus.OK, {}
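The same "Can only lookup local users" guard is repeated in every device servlet above. A hypothetical refactor, not part of this commit, could factor it into one helper; a sketch with a stand-in error class:

    # Hypothetical helper (not in Synapse); SynapseError here is a stand-in
    # for synapse.api.errors.SynapseError.
    from http import HTTPStatus

    class SynapseError(Exception):
        def __init__(self, code: int, msg: str) -> None:
            super().__init__(msg)
            self.code = code

    def assert_local_user(is_mine: bool) -> None:
        if not is_mine:
            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only lookup local users")

    try:
        assert_local_user(False)
    except SynapseError as e:
        assert e.code == HTTPStatus.BAD_REQUEST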
--- a/synapse/rest/admin/event_reports.py
+++ b/synapse/rest/admin/event_reports.py
@@ -13,6 +13,7 @@
 # limitations under the License.

 import logging
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Tuple

 from synapse.api.errors import Codes, NotFoundError, SynapseError
@@ -66,21 +67,23 @@ class EventReportsRestServlet(RestServlet):

         if start < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "The start parameter must be a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )

         if limit < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "The limit parameter must be a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )

         if direction not in ("f", "b"):
             raise SynapseError(
-                400, "Unknown direction: %s" % (direction,), errcode=Codes.INVALID_PARAM
+                HTTPStatus.BAD_REQUEST,
+                "Unknown direction: %s" % (direction,),
+                errcode=Codes.INVALID_PARAM,
             )

         event_reports, total = await self.store.get_event_reports_paginate(
@@ -90,7 +93,7 @@ class EventReportsRestServlet(RestServlet):
         if (start + limit) < total:
             ret["next_token"] = start + len(event_reports)

-        return 200, ret
+        return HTTPStatus.OK, ret


 class EventReportDetailRestServlet(RestServlet):
@@ -127,13 +130,17 @@ class EventReportDetailRestServlet(RestServlet):
         try:
             resolved_report_id = int(report_id)
         except ValueError:
-            raise SynapseError(400, message, errcode=Codes.INVALID_PARAM)
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, message, errcode=Codes.INVALID_PARAM
+            )

         if resolved_report_id < 0:
-            raise SynapseError(400, message, errcode=Codes.INVALID_PARAM)
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, message, errcode=Codes.INVALID_PARAM
+            )

         ret = await self.store.get_event_report(resolved_report_id)
         if not ret:
             raise NotFoundError("Event report not found")

-        return 200, ret
+        return HTTPStatus.OK, ret
diff --git a/synapse/rest/admin/federation.py b/synapse/rest/admin/federation.py
new file mode 100644
--- /dev/null
+++ b/synapse/rest/admin/federation.py
@@ -0,0 +1,135 @@
+# Copyright 2021 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import logging
+from http import HTTPStatus
+from typing import TYPE_CHECKING, Tuple
+
+from synapse.api.errors import Codes, NotFoundError, SynapseError
+from synapse.http.servlet import RestServlet, parse_integer, parse_string
+from synapse.http.site import SynapseRequest
+from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
+from synapse.storage.databases.main.transactions import DestinationSortOrder
+from synapse.types import JsonDict
+
+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
+logger = logging.getLogger(__name__)
+
+
+class ListDestinationsRestServlet(RestServlet):
+    """Get request to list all destinations.
+    This needs user to have administrator access in Synapse.
+
+    GET /_synapse/admin/v1/federation/destinations?from=0&limit=10
+
+    returns:
+        200 OK with list of destinations if success otherwise an error.
+
+    The parameters `from` and `limit` are required only for pagination.
+    By default, a `limit` of 100 is used.
+    The parameter `destination` can be used to filter by destination.
+    The parameter `order_by` can be used to order the result.
+    """
+
+    PATTERNS = admin_patterns("/federation/destinations$")
+
+    def __init__(self, hs: "HomeServer"):
+        self._auth = hs.get_auth()
+        self._store = hs.get_datastore()
+
+    async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        await assert_requester_is_admin(self._auth, request)
+
+        start = parse_integer(request, "from", default=0)
+        limit = parse_integer(request, "limit", default=100)
+
+        if start < 0:
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Query parameter from must be a string representing a positive integer.",
+                errcode=Codes.INVALID_PARAM,
+            )
+
+        if limit < 0:
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Query parameter limit must be a string representing a positive integer.",
+                errcode=Codes.INVALID_PARAM,
+            )
+
+        destination = parse_string(request, "destination")
+
+        order_by = parse_string(
+            request,
+            "order_by",
+            default=DestinationSortOrder.DESTINATION.value,
+            allowed_values=[dest.value for dest in DestinationSortOrder],
+        )
+
+        direction = parse_string(request, "dir", default="f", allowed_values=("f", "b"))
+
+        destinations, total = await self._store.get_destinations_paginate(
+            start, limit, destination, order_by, direction
+        )
+        response = {"destinations": destinations, "total": total}
+        if (start + limit) < total:
+            response["next_token"] = str(start + len(destinations))
+
+        return HTTPStatus.OK, response
+
+
+class DestinationsRestServlet(RestServlet):
+    """Get details of a destination.
+    This needs user to have administrator access in Synapse.
+
+    GET /_synapse/admin/v1/federation/destinations/<destination>
+
+    returns:
+        200 OK with details of a destination if success otherwise an error.
+    """
+
+    PATTERNS = admin_patterns("/federation/destinations/(?P<destination>[^/]+)$")
+
+    def __init__(self, hs: "HomeServer"):
+        self._auth = hs.get_auth()
+        self._store = hs.get_datastore()
+
+    async def on_GET(
+        self, request: SynapseRequest, destination: str
+    ) -> Tuple[int, JsonDict]:
+        await assert_requester_is_admin(self._auth, request)
+
+        destination_retry_timings = await self._store.get_destination_retry_timings(
+            destination
+        )
+
+        if not destination_retry_timings:
+            raise NotFoundError("Unknown destination")
+
+        last_successful_stream_ordering = (
+            await self._store.get_destination_last_successful_stream_ordering(
+                destination
+            )
+        )
+
+        response = {
+            "destination": destination,
+            "failure_ts": destination_retry_timings.failure_ts,
+            "retry_last_ts": destination_retry_timings.retry_last_ts,
+            "retry_interval": destination_retry_timings.retry_interval,
+            "last_successful_stream_ordering": last_successful_stream_ordering,
+        }
+
+        return HTTPStatus.OK, response
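A hedged usage sketch for the two new admin endpoints above. The homeserver URL and admin access token are placeholders, and the per-destination field names are taken from the response dict built in the new file:

    import requests

    BASE = "https://homeserver.example.com"  # placeholder homeserver
    HEADERS = {"Authorization": "Bearer <admin_access_token>"}  # placeholder token

    # List destinations, paginated, as documented in the docstring above.
    page = requests.get(
        f"{BASE}/_synapse/admin/v1/federation/destinations",
        params={"from": 0, "limit": 10},
        headers=HEADERS,
    ).json()

    for dest in page.get("destinations", []):
        # Drill into a single destination's retry timings.
        detail = requests.get(
            f"{BASE}/_synapse/admin/v1/federation/destinations/{dest['destination']}",
            headers=HEADERS,
        ).json()
        print(detail["destination"], detail["retry_interval"])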
--- a/synapse/rest/admin/groups.py
+++ b/synapse/rest/admin/groups.py
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Tuple

 from synapse.api.errors import SynapseError
@@ -43,7 +44,7 @@ class DeleteGroupAdminRestServlet(RestServlet):
         await assert_user_is_admin(self.auth, requester.user)

         if not self.is_mine_id(group_id):
-            raise SynapseError(400, "Can only delete local groups")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only delete local groups")

         await self.group_server.delete_group(group_id, requester.user.to_string())
-        return 200, {}
+        return HTTPStatus.OK, {}
--- a/synapse/rest/admin/media.py
+++ b/synapse/rest/admin/media.py
@@ -14,6 +14,7 @@
 # limitations under the License.

 import logging
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Tuple

 from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
@@ -62,7 +63,7 @@ class QuarantineMediaInRoom(RestServlet):
             room_id, requester.user.to_string()
         )

-        return 200, {"num_quarantined": num_quarantined}
+        return HTTPStatus.OK, {"num_quarantined": num_quarantined}


 class QuarantineMediaByUser(RestServlet):
@@ -89,7 +90,7 @@ class QuarantineMediaByUser(RestServlet):
             user_id, requester.user.to_string()
         )

-        return 200, {"num_quarantined": num_quarantined}
+        return HTTPStatus.OK, {"num_quarantined": num_quarantined}


 class QuarantineMediaByID(RestServlet):
@@ -118,7 +119,7 @@ class QuarantineMediaByID(RestServlet):
             server_name, media_id, requester.user.to_string()
         )

-        return 200, {}
+        return HTTPStatus.OK, {}


 class UnquarantineMediaByID(RestServlet):
@@ -147,7 +148,7 @@ class UnquarantineMediaByID(RestServlet):
         # Remove from quarantine this media id
         await self.store.quarantine_media_by_id(server_name, media_id, None)

-        return 200, {}
+        return HTTPStatus.OK, {}


 class ProtectMediaByID(RestServlet):
@@ -170,7 +171,7 @@ class ProtectMediaByID(RestServlet):
         # Protect this media id
         await self.store.mark_local_media_as_safe(media_id, safe=True)

-        return 200, {}
+        return HTTPStatus.OK, {}


 class UnprotectMediaByID(RestServlet):
@@ -193,7 +194,7 @@ class UnprotectMediaByID(RestServlet):
         # Unprotect this media id
         await self.store.mark_local_media_as_safe(media_id, safe=False)

-        return 200, {}
+        return HTTPStatus.OK, {}


 class ListMediaInRoom(RestServlet):
@@ -211,11 +212,11 @@ class ListMediaInRoom(RestServlet):
         requester = await self.auth.get_user_by_req(request)
         is_admin = await self.auth.is_server_admin(requester.user)
         if not is_admin:
-            raise AuthError(403, "You are not a server admin")
+            raise AuthError(HTTPStatus.FORBIDDEN, "You are not a server admin")

         local_mxcs, remote_mxcs = await self.store.get_media_mxcs_in_room(room_id)

-        return 200, {"local": local_mxcs, "remote": remote_mxcs}
+        return HTTPStatus.OK, {"local": local_mxcs, "remote": remote_mxcs}


 class PurgeMediaCacheRestServlet(RestServlet):
@@ -233,13 +234,13 @@ class PurgeMediaCacheRestServlet(RestServlet):

         if before_ts < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter before_ts must be a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )
         elif before_ts < 30000000000:  # Dec 1970 in milliseconds, Aug 2920 in seconds
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter before_ts you provided is from the year 1970. "
                 + "Double check that you are providing a timestamp in milliseconds.",
                 errcode=Codes.INVALID_PARAM,
@@ -247,7 +248,7 @@ class PurgeMediaCacheRestServlet(RestServlet):

         ret = await self.media_repository.delete_old_remote_media(before_ts)

-        return 200, ret
+        return HTTPStatus.OK, ret


 class DeleteMediaByID(RestServlet):
@@ -267,7 +268,7 @@ class DeleteMediaByID(RestServlet):
         await assert_requester_is_admin(self.auth, request)

         if self.server_name != server_name:
-            raise SynapseError(400, "Can only delete local media")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only delete local media")

         if await self.store.get_local_media(media_id) is None:
             raise NotFoundError("Unknown media")
@@ -277,7 +278,7 @@ class DeleteMediaByID(RestServlet):
         deleted_media, total = await self.media_repository.delete_local_media_ids(
             [media_id]
         )
-        return 200, {"deleted_media": deleted_media, "total": total}
+        return HTTPStatus.OK, {"deleted_media": deleted_media, "total": total}


 class DeleteMediaByDateSize(RestServlet):
@@ -304,26 +305,26 @@ class DeleteMediaByDateSize(RestServlet):

         if before_ts < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter before_ts must be a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )
         elif before_ts < 30000000000:  # Dec 1970 in milliseconds, Aug 2920 in seconds
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter before_ts you provided is from the year 1970. "
                 + "Double check that you are providing a timestamp in milliseconds.",
                 errcode=Codes.INVALID_PARAM,
             )
         if size_gt < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter size_gt must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )

         if self.server_name != server_name:
-            raise SynapseError(400, "Can only delete local media")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only delete local media")

         logging.info(
             "Deleting local media by timestamp: %s, size larger than: %s, keep profile media: %s"
@@ -333,7 +334,7 @@ class DeleteMediaByDateSize(RestServlet):
         deleted_media, total = await self.media_repository.delete_old_local_media(
             before_ts, size_gt, keep_profiles
         )
-        return 200, {"deleted_media": deleted_media, "total": total}
+        return HTTPStatus.OK, {"deleted_media": deleted_media, "total": total}


 class UserMediaRestServlet(RestServlet):
@@ -369,7 +370,7 @@ class UserMediaRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)

         if not self.is_mine(UserID.from_string(user_id)):
-            raise SynapseError(400, "Can only look up local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only look up local users")

         user = await self.store.get_user_by_id(user_id)
         if user is None:
@@ -380,14 +381,14 @@ class UserMediaRestServlet(RestServlet):

         if start < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter from must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )

         if limit < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter limit must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )
@@ -425,7 +426,7 @@ class UserMediaRestServlet(RestServlet):
         if (start + limit) < total:
             ret["next_token"] = start + len(media)

-        return 200, ret
+        return HTTPStatus.OK, ret

     async def on_DELETE(
         self, request: SynapseRequest, user_id: str
@@ -436,7 +437,7 @@ class UserMediaRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)

         if not self.is_mine(UserID.from_string(user_id)):
-            raise SynapseError(400, "Can only look up local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only look up local users")

         user = await self.store.get_user_by_id(user_id)
         if user is None:
@@ -447,14 +448,14 @@ class UserMediaRestServlet(RestServlet):

         if start < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter from must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )

         if limit < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter limit must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )
@@ -492,7 +493,7 @@ class UserMediaRestServlet(RestServlet):
             ([row["media_id"] for row in media])
         )

-        return 200, {"deleted_media": deleted_media, "total": total}
+        return HTTPStatus.OK, {"deleted_media": deleted_media, "total": total}


 def register_servlets_for_media_repo(hs: "HomeServer", http_server: HttpServer) -> None:
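The `before_ts < 30000000000` guard above is a seconds-versus-milliseconds sanity check: interpreted as milliseconds that value is still December 1970, so any genuine millisecond timestamp exceeds it, while an accidental seconds timestamp stays below it until the year 2920. A quick check:

    import time

    THRESHOLD = 30_000_000_000  # the constant used in the guard above

    now_ms = int(time.time() * 1000)
    now_s = int(time.time())
    assert now_ms > THRESHOLD  # a genuine millisecond timestamp passes
    assert now_s < THRESHOLD   # an accidental seconds timestamp is caught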
--- a/synapse/rest/admin/registration_tokens.py
+++ b/synapse/rest/admin/registration_tokens.py
@@ -14,6 +14,7 @@

 import logging
 import string
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Tuple

 from synapse.api.errors import Codes, NotFoundError, SynapseError
@@ -77,7 +78,7 @@ class ListRegistrationTokensRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)
         valid = parse_boolean(request, "valid")
         token_list = await self.store.get_registration_tokens(valid)
-        return 200, {"registration_tokens": token_list}
+        return HTTPStatus.OK, {"registration_tokens": token_list}


 class NewRegistrationTokenRestServlet(RestServlet):
@@ -123,16 +124,20 @@ class NewRegistrationTokenRestServlet(RestServlet):
         if "token" in body:
             token = body["token"]
             if not isinstance(token, str):
-                raise SynapseError(400, "token must be a string", Codes.INVALID_PARAM)
+                raise SynapseError(
+                    HTTPStatus.BAD_REQUEST,
+                    "token must be a string",
+                    Codes.INVALID_PARAM,
+                )
             if not (0 < len(token) <= 64):
                 raise SynapseError(
-                    400,
+                    HTTPStatus.BAD_REQUEST,
                     "token must not be empty and must not be longer than 64 characters",
                     Codes.INVALID_PARAM,
                 )
             if not set(token).issubset(self.allowed_chars_set):
                 raise SynapseError(
-                    400,
+                    HTTPStatus.BAD_REQUEST,
                     "token must consist only of characters matched by the regex [A-Za-z0-9-_]",
                     Codes.INVALID_PARAM,
                 )
@@ -142,11 +147,13 @@ class NewRegistrationTokenRestServlet(RestServlet):
             length = body.get("length", 16)
             if not isinstance(length, int):
                 raise SynapseError(
-                    400, "length must be an integer", Codes.INVALID_PARAM
+                    HTTPStatus.BAD_REQUEST,
+                    "length must be an integer",
+                    Codes.INVALID_PARAM,
                 )
             if not (0 < length <= 64):
                 raise SynapseError(
-                    400,
+                    HTTPStatus.BAD_REQUEST,
                     "length must be greater than zero and not greater than 64",
                     Codes.INVALID_PARAM,
                 )
@@ -162,7 +169,7 @@ class NewRegistrationTokenRestServlet(RestServlet):
             or (isinstance(uses_allowed, int) and uses_allowed >= 0)
         ):
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "uses_allowed must be a non-negative integer or null",
                 Codes.INVALID_PARAM,
             )
@@ -170,11 +177,15 @@ class NewRegistrationTokenRestServlet(RestServlet):
         expiry_time = body.get("expiry_time", None)
         if not isinstance(expiry_time, (int, type(None))):
             raise SynapseError(
-                400, "expiry_time must be an integer or null", Codes.INVALID_PARAM
+                HTTPStatus.BAD_REQUEST,
+                "expiry_time must be an integer or null",
+                Codes.INVALID_PARAM,
             )
         if isinstance(expiry_time, int) and expiry_time < self.clock.time_msec():
             raise SynapseError(
-                400, "expiry_time must not be in the past", Codes.INVALID_PARAM
+                HTTPStatus.BAD_REQUEST,
+                "expiry_time must not be in the past",
+                Codes.INVALID_PARAM,
             )

         created = await self.store.create_registration_token(
@@ -182,7 +193,9 @@ class NewRegistrationTokenRestServlet(RestServlet):
         )
         if not created:
             raise SynapseError(
-                400, f"Token already exists: {token}", Codes.INVALID_PARAM
+                HTTPStatus.BAD_REQUEST,
+                f"Token already exists: {token}",
+                Codes.INVALID_PARAM,
             )

         resp = {
@@ -192,7 +205,7 @@ class NewRegistrationTokenRestServlet(RestServlet):
             "completed": 0,
             "expiry_time": expiry_time,
         }
-        return 200, resp
+        return HTTPStatus.OK, resp


 class RegistrationTokenRestServlet(RestServlet):
@@ -261,7 +274,7 @@ class RegistrationTokenRestServlet(RestServlet):
         if token_info is None:
             raise NotFoundError(f"No such registration token: {token}")

-        return 200, token_info
+        return HTTPStatus.OK, token_info

     async def on_PUT(self, request: SynapseRequest, token: str) -> Tuple[int, JsonDict]:
         """Update a registration token."""
@@ -277,7 +290,7 @@ class RegistrationTokenRestServlet(RestServlet):
             or (isinstance(uses_allowed, int) and uses_allowed >= 0)
         ):
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "uses_allowed must be a non-negative integer or null",
                 Codes.INVALID_PARAM,
             )
@@ -287,11 +300,15 @@ class RegistrationTokenRestServlet(RestServlet):
             expiry_time = body["expiry_time"]
             if not isinstance(expiry_time, (int, type(None))):
                 raise SynapseError(
-                    400, "expiry_time must be an integer or null", Codes.INVALID_PARAM
+                    HTTPStatus.BAD_REQUEST,
+                    "expiry_time must be an integer or null",
+                    Codes.INVALID_PARAM,
                 )
             if isinstance(expiry_time, int) and expiry_time < self.clock.time_msec():
                 raise SynapseError(
-                    400, "expiry_time must not be in the past", Codes.INVALID_PARAM
+                    HTTPStatus.BAD_REQUEST,
+                    "expiry_time must not be in the past",
+                    Codes.INVALID_PARAM,
                 )
             new_attributes["expiry_time"] = expiry_time

@@ -307,7 +324,7 @@ class RegistrationTokenRestServlet(RestServlet):
         if token_info is None:
             raise NotFoundError(f"No such registration token: {token}")

-        return 200, token_info
+        return HTTPStatus.OK, token_info

     async def on_DELETE(
         self, request: SynapseRequest, token: str
@@ -316,6 +333,6 @@ class RegistrationTokenRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)

         if await self.store.delete_registration_token(token):
-            return 200, {}
+            return HTTPStatus.OK, {}

         raise NotFoundError(f"No such registration token: {token}")
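The token checks above enforce three rules: a token must be a non-empty string of at most 64 characters, drawn only from `[A-Za-z0-9-_]`. A standalone sketch of equivalent validation (assumed equivalent; `ALLOWED_CHARS` here stands in for the servlet's `self.allowed_chars_set`):

    import string

    ALLOWED_CHARS = set(string.ascii_letters + string.digits + "-_")

    def validate_token(token: str) -> None:
        if not isinstance(token, str):
            raise ValueError("token must be a string")
        if not (0 < len(token) <= 64):
            raise ValueError(
                "token must not be empty and must not be longer than 64 characters"
            )
        if not set(token).issubset(ALLOWED_CHARS):
            raise ValueError(
                "token must consist only of characters matched by the regex [A-Za-z0-9-_]"
            )

    validate_token("holiday-promo_2021")  # passes
    try:
        validate_token("bad token!")  # the space and '!' are rejected
    except ValueError as e:
        print(e)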
--- a/synapse/rest/admin/rooms.py
+++ b/synapse/rest/admin/rooms.py
@@ -102,10 +102,9 @@ class RoomRestV2Servlet(RestServlet):
         )

         if not RoomID.is_valid(room_id):
-            raise SynapseError(400, "%s is not a legal room ID" % (room_id,))
-        if not await self._store.get_room(room_id):
-            raise NotFoundError("Unknown room id %s" % (room_id,))
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "%s is not a legal room ID" % (room_id,)
+            )

         delete_id = self._pagination_handler.start_shutdown_and_purge_room(
             room_id=room_id,
@@ -118,7 +117,7 @@ class RoomRestV2Servlet(RestServlet):
             force_purge=force_purge,
         )

-        return 200, {"delete_id": delete_id}
+        return HTTPStatus.OK, {"delete_id": delete_id}


 class DeleteRoomStatusByRoomIdRestServlet(RestServlet):
@@ -137,7 +136,9 @@ class DeleteRoomStatusByRoomIdRestServlet(RestServlet):
         await assert_requester_is_admin(self._auth, request)

         if not RoomID.is_valid(room_id):
-            raise SynapseError(400, "%s is not a legal room ID" % (room_id,))
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "%s is not a legal room ID" % (room_id,)
+            )

         delete_ids = self._pagination_handler.get_delete_ids_by_room(room_id)
         if delete_ids is None:
@@ -153,7 +154,7 @@ class DeleteRoomStatusByRoomIdRestServlet(RestServlet):
                     **delete.asdict(),
                 }
             ]
-        return 200, {"results": cast(JsonDict, response)}
+        return HTTPStatus.OK, {"results": cast(JsonDict, response)}


 class DeleteRoomStatusByDeleteIdRestServlet(RestServlet):
@@ -175,7 +176,7 @@ class DeleteRoomStatusByDeleteIdRestServlet(RestServlet):
         if delete_status is None:
             raise NotFoundError("delete id '%s' not found" % delete_id)

-        return 200, cast(JsonDict, delete_status.asdict())
+        return HTTPStatus.OK, cast(JsonDict, delete_status.asdict())


 class ListRoomRestServlet(RestServlet):
@@ -217,7 +218,7 @@ class ListRoomRestServlet(RestServlet):
             RoomSortOrder.STATE_EVENTS.value,
         ):
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Unknown value for order_by: %s" % (order_by,),
                 errcode=Codes.INVALID_PARAM,
             )
@@ -225,7 +226,7 @@ class ListRoomRestServlet(RestServlet):
         search_term = parse_string(request, "search_term", encoding="utf-8")
         if search_term == "":
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "search_term cannot be an empty string",
                 errcode=Codes.INVALID_PARAM,
             )
@@ -233,7 +234,9 @@ class ListRoomRestServlet(RestServlet):
         direction = parse_string(request, "dir", default="f")
         if direction not in ("f", "b"):
             raise SynapseError(
-                400, "Unknown direction: %s" % (direction,), errcode=Codes.INVALID_PARAM
+                HTTPStatus.BAD_REQUEST,
+                "Unknown direction: %s" % (direction,),
+                errcode=Codes.INVALID_PARAM,
             )

         reverse_order = True if direction == "b" else False
@@ -265,7 +268,7 @@ class ListRoomRestServlet(RestServlet):
         else:
             response["prev_batch"] = 0

-        return 200, response
+        return HTTPStatus.OK, response


 class RoomRestServlet(RestServlet):
@@ -310,7 +313,7 @@ class RoomRestServlet(RestServlet):
         members = await self.store.get_users_in_room(room_id)
         ret["joined_local_devices"] = await self.store.count_devices_by_users(members)

-        return 200, ret
+        return HTTPStatus.OK, ret

     async def on_DELETE(
         self, request: SynapseRequest, room_id: str
@@ -386,7 +389,7 @@ class RoomRestServlet(RestServlet):
         # See https://github.com/python/mypy/issues/4976#issuecomment-579883622
         # for some discussion on why this is necessary. Either way,
         # `ret` is an opaque dictionary blob as far as the rest of the app cares.
-        return 200, cast(JsonDict, ret)
+        return HTTPStatus.OK, cast(JsonDict, ret)


 class RoomMembersRestServlet(RestServlet):
@@ -413,7 +416,7 @@ class RoomMembersRestServlet(RestServlet):
         members = await self.store.get_users_in_room(room_id)
         ret = {"members": members, "total": len(members)}

-        return 200, ret
+        return HTTPStatus.OK, ret


 class RoomStateRestServlet(RestServlet):
@@ -443,16 +446,10 @@ class RoomStateRestServlet(RestServlet):
         event_ids = await self.store.get_current_state_ids(room_id)
         events = await self.store.get_events(event_ids.values())
         now = self.clock.time_msec()
-        room_state = await self._event_serializer.serialize_events(
-            events.values(),
-            now,
-            # We don't bother bundling aggregations in when asked for state
-            # events, as clients won't use them.
-            bundle_relations=False,
-        )
+        room_state = await self._event_serializer.serialize_events(events.values(), now)
         ret = {"state": room_state}

-        return 200, ret
+        return HTTPStatus.OK, ret


 class JoinRoomAliasServlet(ResolveRoomIdMixin, RestServlet):
@@ -481,7 +478,10 @@ class JoinRoomAliasServlet(ResolveRoomIdMixin, RestServlet):
             target_user = UserID.from_string(content["user_id"])

             if not self.hs.is_mine(target_user):
-                raise SynapseError(400, "This endpoint can only be used with local users")
+                raise SynapseError(
+                    HTTPStatus.BAD_REQUEST,
+                    "This endpoint can only be used with local users",
+                )

             if not await self.admin_handler.get_user(target_user):
                 raise NotFoundError("User not found")
@@ -527,7 +527,7 @@ class JoinRoomAliasServlet(ResolveRoomIdMixin, RestServlet):
             ratelimit=False,
         )

-        return 200, {"room_id": room_id}
+        return HTTPStatus.OK, {"room_id": room_id}


 class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
@@ -568,7 +568,7 @@ class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
         # Figure out which local users currently have power in the room, if any.
         room_state = await self.state_handler.get_current_state(room_id)
         if not room_state:
-            raise SynapseError(400, "Server not in room")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Server not in room")

         create_event = room_state[(EventTypes.Create, "")]
         power_levels = room_state.get((EventTypes.PowerLevels, ""))
@@ -582,7 +582,9 @@ class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
             admin_users.sort(key=lambda user: user_power[user])

             if not admin_users:
-                raise SynapseError(400, "No local admin user in room")
+                raise SynapseError(
+                    HTTPStatus.BAD_REQUEST, "No local admin user in room"
+                )

             admin_user_id = None

@@ -599,7 +601,7 @@ class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):

             if not admin_user_id:
                 raise SynapseError(
-                    400,
+                    HTTPStatus.BAD_REQUEST,
                     "No local admin user in room",
                 )

@@ -610,7 +612,7 @@ class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
             admin_user_id = create_event.sender
             if not self.is_mine_id(admin_user_id):
                 raise SynapseError(
-                    400,
+                    HTTPStatus.BAD_REQUEST,
                     "No local admin user in room",
                 )

@@ -639,7 +641,8 @@ class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
            except AuthError:
                 # The admin user we found turned out not to have enough power.
                 raise SynapseError(
-                    400, "No local admin user in room with power to update power levels."
+                    HTTPStatus.BAD_REQUEST,
+                    "No local admin user in room with power to update power levels.",
                 )

         # Now we check if the user we're granting admin rights to is already in
@@ -653,7 +656,7 @@ class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
         )

         if is_joined:
-            return 200, {}
+            return HTTPStatus.OK, {}

         join_rules = room_state.get((EventTypes.JoinRules, ""))
         is_public = False
@@ -661,7 +664,7 @@ class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
             is_public = join_rules.content.get("join_rule") == JoinRules.PUBLIC

         if is_public:
-            return 200, {}
+            return HTTPStatus.OK, {}

         await self.room_member_handler.update_membership(
             fake_requester,
@@ -670,7 +673,7 @@ class MakeRoomAdminRestServlet(ResolveRoomIdMixin, RestServlet):
             action=Membership.INVITE,
         )

-        return 200, {}
+        return HTTPStatus.OK, {}


 class ForwardExtremitiesRestServlet(ResolveRoomIdMixin, RestServlet):
@@ -702,7 +705,7 @@ class ForwardExtremitiesRestServlet(ResolveRoomIdMixin, RestServlet):
         room_id, _ = await self.resolve_room_id(room_identifier)

         deleted_count = await self.store.delete_forward_extremities_for_room(room_id)
-        return 200, {"deleted": deleted_count}
+        return HTTPStatus.OK, {"deleted": deleted_count}

     async def on_GET(
         self, request: SynapseRequest, room_identifier: str
@@ -713,7 +716,7 @@ class ForwardExtremitiesRestServlet(ResolveRoomIdMixin, RestServlet):
         room_id, _ = await self.resolve_room_id(room_identifier)

         extremities = await self.store.get_forward_extremities_for_room(room_id)
-        return 200, {"count": len(extremities), "results": extremities}
+        return HTTPStatus.OK, {"count": len(extremities), "results": extremities}


 class RoomEventContextServlet(RestServlet):
@@ -762,7 +765,9 @@ class RoomEventContextServlet(RestServlet):
         )

         if not results:
-            raise SynapseError(404, "Event not found.", errcode=Codes.NOT_FOUND)
+            raise SynapseError(
+                HTTPStatus.NOT_FOUND, "Event not found.", errcode=Codes.NOT_FOUND
+            )

         time_now = self.clock.time_msec()
         results["events_before"] = await self._event_serializer.serialize_events(
@@ -775,13 +780,10 @@ class RoomEventContextServlet(RestServlet):
             results["events_after"], time_now
         )
|
)
|
||||||
results["state"] = await self._event_serializer.serialize_events(
|
results["state"] = await self._event_serializer.serialize_events(
|
||||||
results["state"],
|
results["state"], time_now
|
||||||
time_now,
|
|
||||||
# No need to bundle aggregations for state events
|
|
||||||
bundle_relations=False,
|
|
||||||
)
|
)
|
||||||
|
|
||||||
return 200, results
|
return HTTPStatus.OK, results
|
||||||
|
|
||||||
|
|
||||||
class BlockRoomRestServlet(RestServlet):
|
class BlockRoomRestServlet(RestServlet):
|
||||||
|
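Note: the recurring change in the hunks above and below swaps bare integer status codes for http.HTTPStatus members. A minimal sanity check (not part of the diff) that this is behavior-preserving, since HTTPStatus is an IntEnum whose members compare equal to the plain integers they replace:

from http import HTTPStatus

# HTTPStatus members are IntEnum values: they compare equal to, and
# serialize as, the bare integers they stand in for.
assert HTTPStatus.OK == 200
assert HTTPStatus.BAD_REQUEST == 400
assert HTTPStatus.NOT_FOUND == 404
assert HTTPStatus.CONFLICT == 409
assert isinstance(HTTPStatus.FORBIDDEN, int)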
@@ -11,6 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Awaitable, Optional, Tuple
 
 from synapse.api.constants import EventTypes

@@ -82,11 +83,15 @@ class SendServerNoticeServlet(RestServlet):
         # but worker processes still need to initialise SendServerNoticeServlet (as it is part of the
         # admin api).
         if not self.server_notices_manager.is_enabled():
-            raise SynapseError(400, "Server notices are not enabled on this server")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Server notices are not enabled on this server"
+            )
 
         target_user = UserID.from_string(body["user_id"])
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Server notices can only be sent to local users")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Server notices can only be sent to local users"
+            )
 
         if not await self.admin_handler.get_user(target_user):
             raise NotFoundError("User not found")

@@ -99,7 +104,7 @@ class SendServerNoticeServlet(RestServlet):
             txn_id=txn_id,
         )
 
-        return 200, {"event_id": event.event_id}
+        return HTTPStatus.OK, {"event_id": event.event_id}
 
     def on_PUT(
         self, request: SynapseRequest, txn_id: str
@@ -13,6 +13,7 @@
 # limitations under the License.
 
 import logging
+from http import HTTPStatus
 from typing import TYPE_CHECKING, Tuple
 
 from synapse.api.errors import Codes, SynapseError

@@ -53,7 +54,7 @@ class UserMediaStatisticsRestServlet(RestServlet):
             UserSortOrder.DISPLAYNAME.value,
         ):
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Unknown value for order_by: %s" % (order_by,),
                 errcode=Codes.INVALID_PARAM,
             )

@@ -61,7 +62,7 @@ class UserMediaStatisticsRestServlet(RestServlet):
         start = parse_integer(request, "from", default=0)
         if start < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter from must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )

@@ -69,7 +70,7 @@ class UserMediaStatisticsRestServlet(RestServlet):
         limit = parse_integer(request, "limit", default=100)
         if limit < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter limit must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )

@@ -77,7 +78,7 @@ class UserMediaStatisticsRestServlet(RestServlet):
         from_ts = parse_integer(request, "from_ts", default=0)
         if from_ts < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter from_ts must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )

@@ -86,13 +87,13 @@ class UserMediaStatisticsRestServlet(RestServlet):
         if until_ts is not None:
             if until_ts < 0:
                 raise SynapseError(
-                    400,
+                    HTTPStatus.BAD_REQUEST,
                     "Query parameter until_ts must be a string representing a positive integer.",
                     errcode=Codes.INVALID_PARAM,
                 )
             if until_ts <= from_ts:
                 raise SynapseError(
-                    400,
+                    HTTPStatus.BAD_REQUEST,
                     "Query parameter until_ts must be greater than from_ts.",
                     errcode=Codes.INVALID_PARAM,
                 )

@@ -100,7 +101,7 @@ class UserMediaStatisticsRestServlet(RestServlet):
         search_term = parse_string(request, "search_term")
         if search_term == "":
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter search_term cannot be an empty string.",
                 errcode=Codes.INVALID_PARAM,
             )

@@ -108,7 +109,9 @@ class UserMediaStatisticsRestServlet(RestServlet):
         direction = parse_string(request, "dir", default="f")
         if direction not in ("f", "b"):
             raise SynapseError(
-                400, "Unknown direction: %s" % (direction,), errcode=Codes.INVALID_PARAM
+                HTTPStatus.BAD_REQUEST,
+                "Unknown direction: %s" % (direction,),
+                errcode=Codes.INVALID_PARAM,
             )
 
         users_media, total = await self.store.get_users_media_usage_paginate(

@@ -118,4 +121,4 @@ class UserMediaStatisticsRestServlet(RestServlet):
         if (start + limit) < total:
             ret["next_token"] = start + len(users_media)
 
-        return 200, ret
+        return HTTPStatus.OK, ret
@@ -79,14 +79,14 @@ class UsersRestServletV2(RestServlet):
 
         if start < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter from must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )
 
         if limit < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "Query parameter limit must be a string representing a positive integer.",
                 errcode=Codes.INVALID_PARAM,
             )

@@ -122,7 +122,7 @@ class UsersRestServletV2(RestServlet):
         if (start + limit) < total:
             ret["next_token"] = str(start + len(users))
 
-        return 200, ret
+        return HTTPStatus.OK, ret
 
 
 class UserRestServletV2(RestServlet):

@@ -172,14 +172,14 @@ class UserRestServletV2(RestServlet):
 
         target_user = UserID.from_string(user_id)
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only look up local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only look up local users")
 
         ret = await self.admin_handler.get_user(target_user)
 
         if not ret:
             raise NotFoundError("User not found")
 
-        return 200, ret
+        return HTTPStatus.OK, ret
 
     async def on_PUT(
         self, request: SynapseRequest, user_id: str

@@ -191,7 +191,10 @@ class UserRestServletV2(RestServlet):
         body = parse_json_object_from_request(request)
 
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "This endpoint can only be used with local users")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "This endpoint can only be used with local users",
+            )
 
         user = await self.admin_handler.get_user(target_user)
         user_id = target_user.to_string()

@@ -210,7 +213,7 @@ class UserRestServletV2(RestServlet):
 
         user_type = body.get("user_type", None)
         if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
-            raise SynapseError(400, "Invalid user type")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid user type")
 
         set_admin_to = body.get("admin", False)
         if not isinstance(set_admin_to, bool):

@@ -223,11 +226,13 @@ class UserRestServletV2(RestServlet):
         password = body.get("password", None)
         if password is not None:
             if not isinstance(password, str) or len(password) > 512:
-                raise SynapseError(400, "Invalid password")
+                raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid password")
 
         deactivate = body.get("deactivated", False)
         if not isinstance(deactivate, bool):
-            raise SynapseError(400, "'deactivated' parameter is not of type boolean")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "'deactivated' parameter is not of type boolean"
+            )
 
         # convert List[Dict[str, str]] into List[Tuple[str, str]]
         if external_ids is not None:

@@ -282,7 +287,9 @@ class UserRestServletV2(RestServlet):
                     user_id,
                 )
             except ExternalIDReuseException:
-                raise SynapseError(409, "External id is already in use.")
+                raise SynapseError(
+                    HTTPStatus.CONFLICT, "External id is already in use."
+                )
 
         if "avatar_url" in body and isinstance(body["avatar_url"], str):
             await self.profile_handler.set_avatar_url(

@@ -293,7 +300,9 @@ class UserRestServletV2(RestServlet):
             if set_admin_to != user["admin"]:
                 auth_user = requester.user
                 if target_user == auth_user and not set_admin_to:
-                    raise SynapseError(400, "You may not demote yourself.")
+                    raise SynapseError(
+                        HTTPStatus.BAD_REQUEST, "You may not demote yourself."
+                    )
 
                 await self.store.set_server_admin(target_user, set_admin_to)
 
@@ -319,7 +328,8 @@ class UserRestServletV2(RestServlet):
                     and self.auth_handler.can_change_password()
                 ):
                     raise SynapseError(
-                        400, "Must provide a password to re-activate an account."
+                        HTTPStatus.BAD_REQUEST,
+                        "Must provide a password to re-activate an account.",
                     )
 
                 await self.deactivate_account_handler.activate_account(

@@ -332,7 +342,7 @@ class UserRestServletV2(RestServlet):
             user = await self.admin_handler.get_user(target_user)
             assert user is not None
 
-            return 200, user
+            return HTTPStatus.OK, user
 
         else:  # create user
             displayname = body.get("displayname", None)

@@ -381,7 +391,9 @@ class UserRestServletV2(RestServlet):
                         user_id,
                     )
             except ExternalIDReuseException:
-                raise SynapseError(409, "External id is already in use.")
+                raise SynapseError(
+                    HTTPStatus.CONFLICT, "External id is already in use."
+                )
 
             if "avatar_url" in body and isinstance(body["avatar_url"], str):
                 await self.profile_handler.set_avatar_url(

@@ -429,51 +441,61 @@ class UserRegisterServlet(RestServlet):
 
         nonce = secrets.token_hex(64)
         self.nonces[nonce] = int(self.reactor.seconds())
-        return 200, {"nonce": nonce}
+        return HTTPStatus.OK, {"nonce": nonce}
 
     async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         self._clear_old_nonces()
 
         if not self.hs.config.registration.registration_shared_secret:
-            raise SynapseError(400, "Shared secret registration is not enabled")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Shared secret registration is not enabled"
+            )
 
         body = parse_json_object_from_request(request)
 
         if "nonce" not in body:
-            raise SynapseError(400, "nonce must be specified", errcode=Codes.BAD_JSON)
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "nonce must be specified",
+                errcode=Codes.BAD_JSON,
+            )
 
         nonce = body["nonce"]
 
         if nonce not in self.nonces:
-            raise SynapseError(400, "unrecognised nonce")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "unrecognised nonce")
 
         # Delete the nonce, so it can't be reused, even if it's invalid
         del self.nonces[nonce]
 
         if "username" not in body:
             raise SynapseError(
-                400, "username must be specified", errcode=Codes.BAD_JSON
+                HTTPStatus.BAD_REQUEST,
+                "username must be specified",
+                errcode=Codes.BAD_JSON,
             )
         else:
             if not isinstance(body["username"], str) or len(body["username"]) > 512:
-                raise SynapseError(400, "Invalid username")
+                raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid username")
 
             username = body["username"].encode("utf-8")
             if b"\x00" in username:
-                raise SynapseError(400, "Invalid username")
+                raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid username")
 
         if "password" not in body:
             raise SynapseError(
-                400, "password must be specified", errcode=Codes.BAD_JSON
+                HTTPStatus.BAD_REQUEST,
+                "password must be specified",
+                errcode=Codes.BAD_JSON,
             )
         else:
             password = body["password"]
             if not isinstance(password, str) or len(password) > 512:
-                raise SynapseError(400, "Invalid password")
+                raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid password")
 
             password_bytes = password.encode("utf-8")
             if b"\x00" in password_bytes:
-                raise SynapseError(400, "Invalid password")
+                raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid password")
 
             password_hash = await self.auth_handler.hash(password)
 

@@ -482,10 +504,12 @@ class UserRegisterServlet(RestServlet):
         displayname = body.get("displayname", None)
 
         if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
-            raise SynapseError(400, "Invalid user type")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid user type")
 
         if "mac" not in body:
-            raise SynapseError(400, "mac must be specified", errcode=Codes.BAD_JSON)
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "mac must be specified", errcode=Codes.BAD_JSON
+            )
 
         got_mac = body["mac"]
 

@@ -507,7 +531,7 @@ class UserRegisterServlet(RestServlet):
         want_mac = want_mac_builder.hexdigest()
 
         if not hmac.compare_digest(want_mac.encode("ascii"), got_mac.encode("ascii")):
-            raise SynapseError(403, "HMAC incorrect")
+            raise SynapseError(HTTPStatus.FORBIDDEN, "HMAC incorrect")
 
         # Reuse the parts of RegisterRestServlet to reduce code duplication
         from synapse.rest.client.register import RegisterRestServlet
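Note: the UserRegisterServlet hunks above cover the shared-secret registration flow, where the server hands out a one-time nonce and the client must return an HMAC over the registration parameters. A client-side sketch of the MAC construction, following the NUL-joined HMAC-SHA1 scheme documented for Synapse's shared-secret registration (the helper name and signature here are illustrative, not from the diff):

import hmac
from hashlib import sha1

def build_registration_mac(
    shared_secret: str, nonce: str, username: str, password: str, admin: bool = False
) -> str:
    # NUL-joined fields, HMAC-SHA1 keyed with the shared secret, matching
    # the documented shared-secret registration scheme.
    mac = hmac.new(key=shared_secret.encode("utf-8"), digestmod=sha1)
    mac.update(nonce.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(username.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()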
@@ -524,7 +548,7 @@ class UserRegisterServlet(RestServlet):
         )
 
         result = await register._create_registration_details(user_id, body)
-        return 200, result
+        return HTTPStatus.OK, result
 
 
 class WhoisRestServlet(RestServlet):

@@ -552,11 +576,11 @@ class WhoisRestServlet(RestServlet):
             await assert_user_is_admin(self.auth, auth_user)
 
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only whois a local user")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only whois a local user")
 
         ret = await self.admin_handler.get_whois(target_user)
 
-        return 200, ret
+        return HTTPStatus.OK, ret
 
 
 class DeactivateAccountRestServlet(RestServlet):

@@ -575,7 +599,9 @@ class DeactivateAccountRestServlet(RestServlet):
         await assert_user_is_admin(self.auth, requester.user)
 
         if not self.is_mine(UserID.from_string(target_user_id)):
-            raise SynapseError(400, "Can only deactivate local users")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Can only deactivate local users"
+            )
 
         if not await self.store.get_user_by_id(target_user_id):
             raise NotFoundError("User not found")

@@ -597,7 +623,7 @@ class DeactivateAccountRestServlet(RestServlet):
         else:
             id_server_unbind_result = "no-support"
 
-        return 200, {"id_server_unbind_result": id_server_unbind_result}
+        return HTTPStatus.OK, {"id_server_unbind_result": id_server_unbind_result}
 
 
 class AccountValidityRenewServlet(RestServlet):

@@ -620,7 +646,7 @@ class AccountValidityRenewServlet(RestServlet):
 
             if "user_id" not in body:
                 raise SynapseError(
-                    400,
+                    HTTPStatus.BAD_REQUEST,
                     "Missing property 'user_id' in the request body",
                 )
 

@@ -631,7 +657,7 @@ class AccountValidityRenewServlet(RestServlet):
         )
 
         res = {"expiration_ts": expiration_ts}
-        return 200, res
+        return HTTPStatus.OK, res
 
 
 class ResetPasswordRestServlet(RestServlet):

@@ -678,7 +704,7 @@ class ResetPasswordRestServlet(RestServlet):
         await self._set_password_handler.set_password(
             target_user_id, new_password_hash, logout_devices, requester
         )
-        return 200, {}
+        return HTTPStatus.OK, {}
 
 
 class SearchUsersRestServlet(RestServlet):

@@ -712,16 +738,16 @@ class SearchUsersRestServlet(RestServlet):
 
        # To allow all users to get the users list
        # if not is_admin and target_user != auth_user:
-       #     raise AuthError(403, "You are not a server admin")
+       #     raise AuthError(HTTPStatus.FORBIDDEN, "You are not a server admin")
 
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Can only users a local user")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only users a local user")
 
         term = parse_string(request, "term", required=True)
         logger.info("term: %s ", term)
 
         ret = await self.store.search_users(term)
-        return 200, ret
+        return HTTPStatus.OK, ret
 
 
 class UserAdminServlet(RestServlet):

@@ -765,11 +791,14 @@ class UserAdminServlet(RestServlet):
         target_user = UserID.from_string(user_id)
 
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Only local users can be admins of this homeserver")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Only local users can be admins of this homeserver",
+            )
 
         is_admin = await self.store.is_server_admin(target_user)
 
-        return 200, {"admin": is_admin}
+        return HTTPStatus.OK, {"admin": is_admin}
 
     async def on_PUT(
         self, request: SynapseRequest, user_id: str

@@ -785,16 +814,19 @@ class UserAdminServlet(RestServlet):
         assert_params_in_dict(body, ["admin"])
 
         if not self.hs.is_mine(target_user):
-            raise SynapseError(400, "Only local users can be admins of this homeserver")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Only local users can be admins of this homeserver",
+            )
 
         set_admin_to = bool(body["admin"])
 
         if target_user == auth_user and not set_admin_to:
-            raise SynapseError(400, "You may not demote yourself.")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "You may not demote yourself.")
 
         await self.store.set_server_admin(target_user, set_admin_to)
 
-        return 200, {}
+        return HTTPStatus.OK, {}
 
 
 class UserMembershipRestServlet(RestServlet):

@@ -816,7 +848,7 @@ class UserMembershipRestServlet(RestServlet):
 
         room_ids = await self.store.get_rooms_for_user(user_id)
         ret = {"joined_rooms": list(room_ids), "total": len(room_ids)}
-        return 200, ret
+        return HTTPStatus.OK, ret
 
 
 class PushersRestServlet(RestServlet):

@@ -845,7 +877,7 @@ class PushersRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)
 
         if not self.is_mine(UserID.from_string(user_id)):
-            raise SynapseError(400, "Can only look up local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only look up local users")
 
         if not await self.store.get_user_by_id(user_id):
             raise NotFoundError("User not found")

@@ -854,7 +886,10 @@ class PushersRestServlet(RestServlet):
 
         filtered_pushers = [p.as_dict() for p in pushers]
 
-        return 200, {"pushers": filtered_pushers, "total": len(filtered_pushers)}
+        return HTTPStatus.OK, {
+            "pushers": filtered_pushers,
+            "total": len(filtered_pushers),
+        }
 
 
 class UserTokenRestServlet(RestServlet):

@@ -887,16 +922,22 @@ class UserTokenRestServlet(RestServlet):
         auth_user = requester.user
 
         if not self.hs.is_mine_id(user_id):
-            raise SynapseError(400, "Only local users can be logged in as")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Only local users can be logged in as"
+            )
 
         body = parse_json_object_from_request(request, allow_empty_body=True)
 
         valid_until_ms = body.get("valid_until_ms")
         if valid_until_ms and not isinstance(valid_until_ms, int):
-            raise SynapseError(400, "'valid_until_ms' parameter must be an int")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "'valid_until_ms' parameter must be an int"
+            )
 
         if auth_user.to_string() == user_id:
-            raise SynapseError(400, "Cannot use admin API to login as self")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Cannot use admin API to login as self"
+            )
 
         token = await self.auth_handler.create_access_token_for_user_id(
             user_id=auth_user.to_string(),

@@ -905,7 +946,7 @@ class UserTokenRestServlet(RestServlet):
             puppets_user_id=user_id,
        )
 
-        return 200, {"access_token": token}
+        return HTTPStatus.OK, {"access_token": token}
 
 
 class ShadowBanRestServlet(RestServlet):
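Note: UserTokenRestServlet above backs the admin "login as a user" API. A minimal client-side sketch, assuming the requests library and the admin endpoint path documented for Synapse's user admin API:

import requests

def login_as_user(homeserver: str, admin_token: str, user_id: str) -> str:
    # POST to the admin login-as endpoint; an optional "valid_until_ms" body
    # field (an int, per the validation above) bounds the puppeted token's life.
    resp = requests.post(
        f"{homeserver}/_synapse/admin/v1/users/{user_id}/login",
        headers={"Authorization": f"Bearer {admin_token}"},
        json={},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]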
@@ -947,11 +988,13 @@ class ShadowBanRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)
 
         if not self.hs.is_mine_id(user_id):
-            raise SynapseError(400, "Only local users can be shadow-banned")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Only local users can be shadow-banned"
+            )
 
         await self.store.set_shadow_banned(UserID.from_string(user_id), True)
 
-        return 200, {}
+        return HTTPStatus.OK, {}
 
     async def on_DELETE(
         self, request: SynapseRequest, user_id: str

@@ -959,11 +1002,13 @@ class ShadowBanRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)
 
         if not self.hs.is_mine_id(user_id):
-            raise SynapseError(400, "Only local users can be shadow-banned")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Only local users can be shadow-banned"
+            )
 
         await self.store.set_shadow_banned(UserID.from_string(user_id), False)
 
-        return 200, {}
+        return HTTPStatus.OK, {}
 
 
 class RateLimitRestServlet(RestServlet):

@@ -995,7 +1040,7 @@ class RateLimitRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)
 
         if not self.hs.is_mine_id(user_id):
-            raise SynapseError(400, "Can only look up local users")
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only look up local users")
 
         if not await self.store.get_user_by_id(user_id):
             raise NotFoundError("User not found")

@@ -1016,7 +1061,7 @@ class RateLimitRestServlet(RestServlet):
         else:
             ret = {}
 
-        return 200, ret
+        return HTTPStatus.OK, ret
 
     async def on_POST(
         self, request: SynapseRequest, user_id: str

@@ -1024,7 +1069,9 @@ class RateLimitRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)
 
         if not self.hs.is_mine_id(user_id):
-            raise SynapseError(400, "Only local users can be ratelimited")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Only local users can be ratelimited"
+            )
 
         if not await self.store.get_user_by_id(user_id):
             raise NotFoundError("User not found")

@@ -1036,14 +1083,14 @@ class RateLimitRestServlet(RestServlet):
 
         if not isinstance(messages_per_second, int) or messages_per_second < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "%r parameter must be a positive int" % (messages_per_second,),
                 errcode=Codes.INVALID_PARAM,
             )
 
         if not isinstance(burst_count, int) or burst_count < 0:
             raise SynapseError(
-                400,
+                HTTPStatus.BAD_REQUEST,
                 "%r parameter must be a positive int" % (burst_count,),
                 errcode=Codes.INVALID_PARAM,
             )

@@ -1059,7 +1106,7 @@ class RateLimitRestServlet(RestServlet):
             "burst_count": ratelimit.burst_count,
         }
 
-        return 200, ret
+        return HTTPStatus.OK, ret
 
     async def on_DELETE(
         self, request: SynapseRequest, user_id: str

@@ -1067,11 +1114,13 @@ class RateLimitRestServlet(RestServlet):
         await assert_requester_is_admin(self.auth, request)
 
         if not self.hs.is_mine_id(user_id):
-            raise SynapseError(400, "Only local users can be ratelimited")
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST, "Only local users can be ratelimited"
+            )
 
         if not await self.store.get_user_by_id(user_id):
             raise NotFoundError("User not found")
 
         await self.store.delete_ratelimit_for_user(user_id)
 
-        return 200, {}
+        return HTTPStatus.OK, {}
@@ -14,7 +14,17 @@
 
 import logging
 import re
-from typing import TYPE_CHECKING, Any, Awaitable, Callable, Dict, List, Optional, Tuple
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Awaitable,
+    Callable,
+    Dict,
+    List,
+    Optional,
+    Tuple,
+    Union,
+)
 
 from typing_extensions import TypedDict
 

@@ -28,7 +38,6 @@ from synapse.http.server import HttpServer, finish_request
 from synapse.http.servlet import (
     RestServlet,
     assert_params_in_dict,
-    parse_boolean,
     parse_bytes_from_args,
     parse_json_object_from_request,
     parse_string,

@@ -63,7 +72,7 @@ class LoginRestServlet(RestServlet):
     JWT_TYPE_DEPRECATED = "m.login.jwt"
     APPSERVICE_TYPE = "m.login.application_service"
     APPSERVICE_TYPE_UNSTABLE = "uk.half-shot.msc2778.login.application_service"
-    REFRESH_TOKEN_PARAM = "org.matrix.msc2918.refresh_token"
+    REFRESH_TOKEN_PARAM = "refresh_token"
 
     def __init__(self, hs: "HomeServer"):
         super().__init__()

@@ -81,7 +90,7 @@ class LoginRestServlet(RestServlet):
         self.saml2_enabled = hs.config.saml2.saml2_enabled
         self.cas_enabled = hs.config.cas.cas_enabled
         self.oidc_enabled = hs.config.oidc.oidc_enabled
-        self._msc2918_enabled = (
+        self._refresh_tokens_enabled = (
             hs.config.registration.refreshable_access_token_lifetime is not None
         )
 

@@ -154,14 +163,16 @@ class LoginRestServlet(RestServlet):
     async def on_POST(self, request: SynapseRequest) -> Tuple[int, LoginResponse]:
         login_submission = parse_json_object_from_request(request)
 
-        if self._msc2918_enabled:
-            # Check if this login should also issue a refresh token, as per
-            # MSC2918
-            should_issue_refresh_token = parse_boolean(
-                request, name=LoginRestServlet.REFRESH_TOKEN_PARAM, default=False
-            )
-        else:
-            should_issue_refresh_token = False
+        # Check to see if the client requested a refresh token.
+        client_requested_refresh_token = login_submission.get(
+            LoginRestServlet.REFRESH_TOKEN_PARAM, False
+        )
+        if not isinstance(client_requested_refresh_token, bool):
+            raise SynapseError(400, "`refresh_token` should be true or false.")
+
+        should_issue_refresh_token = (
+            self._refresh_tokens_enabled and client_requested_refresh_token
+        )
 
         try:
             if login_submission["type"] in (
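Note: with the hunk above, the refresh-token opt-in moves from the unstable org.matrix.msc2918.refresh_token query parameter to a boolean refresh_token field in the login body itself. A sketch of a request body under the new scheme (the field names follow the handler above; other values are illustrative):

login_body = {
    "type": "m.login.password",
    "identifier": {"type": "m.id.user", "user": "alice"},
    "password": "correct horse battery staple",
    # Must be a boolean, or the handler above raises a 400.
    "refresh_token": True,
}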
@@ -291,6 +302,7 @@ class LoginRestServlet(RestServlet):
         ratelimit: bool = True,
         auth_provider_id: Optional[str] = None,
         should_issue_refresh_token: bool = False,
+        auth_provider_session_id: Optional[str] = None,
     ) -> LoginResponse:
         """Called when we've successfully authed the user and now need to
         actually login them in (e.g. create devices). This gets called on

@@ -306,10 +318,10 @@ class LoginRestServlet(RestServlet):
             create_non_existent_users: Whether to create the user if they don't
                 exist. Defaults to False.
             ratelimit: Whether to ratelimit the login request.
-            auth_provider_id: The SSO IdP the user used, if any (just used for the
-                prometheus metrics).
+            auth_provider_id: The SSO IdP the user used, if any.
             should_issue_refresh_token: True if this login should issue
                 a refresh token alongside the access token.
+            auth_provider_session_id: The session ID got during login from the SSO IdP.
 
         Returns:
             result: Dictionary of account information after successful login.

@@ -342,6 +354,7 @@ class LoginRestServlet(RestServlet):
             initial_display_name,
             auth_provider_id=auth_provider_id,
             should_issue_refresh_token=should_issue_refresh_token,
+            auth_provider_session_id=auth_provider_session_id,
         )
 
         result = LoginResponse(

@@ -387,6 +400,7 @@ class LoginRestServlet(RestServlet):
             self.auth_handler._sso_login_callback,
             auth_provider_id=res.auth_provider_id,
             should_issue_refresh_token=should_issue_refresh_token,
+            auth_provider_session_id=res.auth_provider_session_id,
         )
 
     async def _do_jwt_login(

@@ -448,9 +462,7 @@ def _get_auth_flow_dict_for_idp(idp: SsoIdentityProvider) -> JsonDict:
 
 
 class RefreshTokenServlet(RestServlet):
-    PATTERNS = client_patterns(
-        "/org.matrix.msc2918.refresh_token/refresh$", releases=(), unstable=True
-    )
+    PATTERNS = (re.compile("^/_matrix/client/v1/refresh$"),)
 
     def __init__(self, hs: "HomeServer"):
         self._auth_handler = hs.get_auth_handler()

@@ -458,6 +470,7 @@ class RefreshTokenServlet(RestServlet):
         self.refreshable_access_token_lifetime = (
             hs.config.registration.refreshable_access_token_lifetime
         )
+        self.refresh_token_lifetime = hs.config.registration.refresh_token_lifetime
 
     async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         refresh_submission = parse_json_object_from_request(request)

@@ -467,29 +480,40 @@ class RefreshTokenServlet(RestServlet):
         if not isinstance(token, str):
             raise SynapseError(400, "Invalid param: refresh_token", Codes.INVALID_PARAM)
 
-        valid_until_ms = (
-            self._clock.time_msec() + self.refreshable_access_token_lifetime
-        )
-        access_token, refresh_token = await self._auth_handler.refresh_token(
-            token, valid_until_ms
-        )
-        expires_in_ms = valid_until_ms - self._clock.time_msec()
-        return (
-            200,
-            {
-                "access_token": access_token,
-                "refresh_token": refresh_token,
-                "expires_in_ms": expires_in_ms,
-            },
+        now = self._clock.time_msec()
+        access_valid_until_ms = None
+        if self.refreshable_access_token_lifetime is not None:
+            access_valid_until_ms = now + self.refreshable_access_token_lifetime
+        refresh_valid_until_ms = None
+        if self.refresh_token_lifetime is not None:
+            refresh_valid_until_ms = now + self.refresh_token_lifetime
+
+        (
+            access_token,
+            refresh_token,
+            actual_access_token_expiry,
+        ) = await self._auth_handler.refresh_token(
+            token, access_valid_until_ms, refresh_valid_until_ms
         )
 
+        response: Dict[str, Union[str, int]] = {
+            "access_token": access_token,
+            "refresh_token": refresh_token,
+        }
+
+        # expires_in_ms is only present if the token expires
+        if actual_access_token_expiry is not None:
+            response["expires_in_ms"] = actual_access_token_expiry - now
+
+        return 200, response
 
 class SsoRedirectServlet(RestServlet):
     PATTERNS = list(client_patterns("/login/(cas|sso)/redirect$", v1=True)) + [
         re.compile(
             "^"
             + CLIENT_API_PREFIX
-            + "/r0/login/sso/redirect/(?P<idp_id>[A-Za-z0-9_.~-]+)$"
+            + "/(r0|v3)/login/sso/redirect/(?P<idp_id>[A-Za-z0-9_.~-]+)$"
         )
     ]
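Note: RefreshTokenServlet now lives at the stable /_matrix/client/v1/refresh path, and expires_in_ms is omitted when the access token never expires. A minimal client-side sketch of the flow, assuming the requests library:

import requests

def refresh_session(homeserver: str, refresh_token: str) -> dict:
    # The refresh token travels in the JSON body of the stable endpoint.
    resp = requests.post(
        f"{homeserver}/_matrix/client/v1/refresh",
        json={"refresh_token": refresh_token},
    )
    resp.raise_for_status()
    # Returns a new "access_token" and "refresh_token"; "expires_in_ms" is
    # present only when the access token expires.
    return resp.json()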
@@ -41,7 +41,6 @@ from synapse.http.server import HttpServer, finish_request, respond_with_html
 from synapse.http.servlet import (
     RestServlet,
     assert_params_in_dict,
-    parse_boolean,
     parse_json_object_from_request,
     parse_string,
 )

@@ -420,7 +419,7 @@ class RegisterRestServlet(RestServlet):
         self.password_policy_handler = hs.get_password_policy_handler()
         self.clock = hs.get_clock()
         self._registration_enabled = self.hs.config.registration.enable_registration
-        self._msc2918_enabled = (
+        self._refresh_tokens_enabled = (
             hs.config.registration.refreshable_access_token_lifetime is not None
         )
 

@@ -446,14 +445,15 @@ class RegisterRestServlet(RestServlet):
                 f"Do not understand membership kind: {kind}",
             )
 
-        if self._msc2918_enabled:
-            # Check if this registration should also issue a refresh token, as
-            # per MSC2918
-            should_issue_refresh_token = parse_boolean(
-                request, name="org.matrix.msc2918.refresh_token", default=False
-            )
-        else:
-            should_issue_refresh_token = False
+        # Check if the clients wishes for this registration to issue a refresh
+        # token.
+        client_requested_refresh_tokens = body.get("refresh_token", False)
+        if not isinstance(client_requested_refresh_tokens, bool):
+            raise SynapseError(400, "`refresh_token` should be true or false.")
+
+        should_issue_refresh_token = (
+            self._refresh_tokens_enabled and client_requested_refresh_tokens
+        )
 
         # Pull out the provided username and do basic sanity checks early since
         # the auth layer will store these in sessions.
@@ -224,18 +224,14 @@ class RelationPaginationServlet(RestServlet):
         )
 
         now = self.clock.time_msec()
-        # We set bundle_relations to False when retrieving the original
-        # event because we want the content before relations were applied to
-        # it.
+        # Do not bundle aggregations when retrieving the original event because
+        # we want the content before relations are applied to it.
         original_event = await self._event_serializer.serialize_event(
-            event, now, bundle_relations=False
-        )
-        # Similarly, we don't allow relations to be applied to relations, so we
-        # return the original relations without any aggregations on top of them
-        # here.
-        serialized_events = await self._event_serializer.serialize_events(
-            events, now, bundle_relations=False
+            event, now, bundle_aggregations=False
         )
+        # The relations returned for the requested event do include their
+        # bundled aggregations.
+        serialized_events = await self._event_serializer.serialize_events(events, now)
 
         return_value = pagination_chunk.to_dict()
         return_value["chunk"] = serialized_events
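Note: for context on what "bundled aggregations" means in the hunk above, under MSC2675 they are attached to the serialized event's unsigned block. An illustrative, non-normative shape (the keys under "m.relations" depend on which relation types actually exist for the event):

serialized_event = {
    "event_id": "$abc123",
    "type": "m.room.message",
    "content": {"msgtype": "m.text", "body": "hello"},
    "unsigned": {
        "m.relations": {
            "m.annotation": {
                "chunk": [{"type": "m.reaction", "key": "👍", "count": 3}]
            }
        }
    },
}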
Some files were not shown because too many files have changed in this diff.