Merge remote-tracking branch 'upstream/release-v1.43'

This commit is contained in:
Tulir Asokan 2021-09-14 10:00:40 -04:00
commit 1e11b73441
185 changed files with 4254 additions and 2303 deletions

View file

@ -1,10 +1,2 @@
# This file serves as a blacklist for SyTest tests that we expect will fail in
# Synapse when run under worker mode. For more details, see sytest-blacklist.
Can re-join room if re-invited
# new failures as of https://github.com/matrix-org/sytest/pull/732
Device list doesn't change if remote server is down
# https://buildkite.com/matrix-dot-org/synapse/builds/6134#6f67bf47-e234-474d-80e8-c6e1868b15c5
Server correctly handles incoming m.device_list_update

View file

@ -1,3 +1,72 @@
Synapse 1.43.0rc1 (2021-09-14)
==============================
This release drops support for the deprecated, unstable API for [MSC2858](https://github.com/matrix-org/matrix-doc/blob/master/proposals/2858-Multiple-SSO-Identity-Providers.md#unstable-prefix), as well as the undocumented `experimental.msc2858_enabled` config option. Client authors should update their clients to use the stable API, available since Synapse 1.30.
Features
--------
- Allow room creators to send historical events specified by [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) in existing room versions. ([\#10566](https://github.com/matrix-org/synapse/issues/10566))
- Add config option to use non-default manhole password and keys. ([\#10643](https://github.com/matrix-org/synapse/issues/10643))
- Skip final GC at shutdown to improve restart performance. ([\#10712](https://github.com/matrix-org/synapse/issues/10712))
- Allow configuration of the oEmbed URLs used for URL previews. ([\#10714](https://github.com/matrix-org/synapse/issues/10714), [\#10759](https://github.com/matrix-org/synapse/issues/10759))
- Prefer [room version 9](https://github.com/matrix-org/matrix-doc/pull/3375) for restricted rooms per the [room version capabilities](https://github.com/matrix-org/matrix-doc/pull/3244) API. ([\#10772](https://github.com/matrix-org/synapse/issues/10772))
Bugfixes
--------
- Fix a long-standing bug where room avatars were not included in email notifications. ([\#10658](https://github.com/matrix-org/synapse/issues/10658))
- Fix a bug where the ordering algorithm was skipping the `origin_server_ts` step in the spaces summary resulting in unstable room orderings. ([\#10730](https://github.com/matrix-org/synapse/issues/10730))
- Fix edge case when persisting events into a room where there are multiple events we previously hadn't calculated auth chains for (and hadn't marked as needing to be calculated). ([\#10743](https://github.com/matrix-org/synapse/issues/10743))
- Fix a bug which prevented calls to `/createRoom` that included the `room_alias_name` parameter from being handled by worker processes. ([\#10757](https://github.com/matrix-org/synapse/issues/10757))
- Fix a bug which prevented user registration via SSO from requiring consent tracking for SSO mapping providers that don't prompt for Matrix ID selection. Contributed by @AndrewFerr. ([\#10733](https://github.com/matrix-org/synapse/issues/10733))
- Only return the stripped state events for the `m.space.child` events in a room for the spaces summary from [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946). ([\#10760](https://github.com/matrix-org/synapse/issues/10760))
- Properly handle room upgrades of spaces. ([\#10774](https://github.com/matrix-org/synapse/issues/10774))
- Fix a bug which generated invalid homeserver config when the `frontend_proxy` worker type was passed to the Synapse Worker-based Complement image. ([\#10783](https://github.com/matrix-org/synapse/issues/10783))
Improved Documentation
----------------------
- Minor fix to the `media_repository` developer documentation. Contributed by @cuttingedge1109. ([\#10556](https://github.com/matrix-org/synapse/issues/10556))
- Update the documentation to note that the `/spaces` and `/hierarchy` endpoints can be routed to workers. ([\#10648](https://github.com/matrix-org/synapse/issues/10648))
- Clarify admin API documentation on undoing room deletions. ([\#10735](https://github.com/matrix-org/synapse/issues/10735))
- Split up the modules documentation and add examples for module developers. ([\#10758](https://github.com/matrix-org/synapse/issues/10758))
- Correct 2 typographical errors in the [Log Contexts documentation](https://matrix-org.github.io/synapse/latest/log_contexts.html). ([\#10795](https://github.com/matrix-org/synapse/issues/10795))
- Fix a wording mistake in the sample configuration. Contributed by @bramvdnheuvel:nltrix.net. ([\#10804](https://github.com/matrix-org/synapse/issues/10804))
Deprecations and Removals
-------------------------
- Remove the [unstable MSC2858 API](https://github.com/matrix-org/matrix-doc/blob/master/proposals/2858-Multiple-SSO-Identity-Providers.md#unstable-prefix), including the undocumented `experimental.msc2858_enabled` config option. The unstable API has been deprecated since Synapse 1.35. Client authors should update their clients to use the stable API introduced in Synapse 1.30 if they have not already done so. ([\#10693](https://github.com/matrix-org/synapse/issues/10693))
Internal Changes
----------------
- Add OpenTracing logging to help debug stuck messages (as described by issue [#9424](https://github.com/matrix-org/synapse/issues/9424)). ([\#10704](https://github.com/matrix-org/synapse/issues/10704))
- Add type annotations to the `synapse.util` package. ([\#10601](https://github.com/matrix-org/synapse/issues/10601))
- Ensure `rooms.creator` field is always populated for easy lookup in [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) usage later. ([\#10697](https://github.com/matrix-org/synapse/issues/10697))
- Add missing type hints to REST servlets. ([\#10707](https://github.com/matrix-org/synapse/issues/10707), [\#10728](https://github.com/matrix-org/synapse/issues/10728), [\#10736](https://github.com/matrix-org/synapse/issues/10736))
- Do not include rooms with unknown room versions in the spaces summary results. ([\#10727](https://github.com/matrix-org/synapse/issues/10727))
- Additional error checking for the `preset` field when creating a room. ([\#10738](https://github.com/matrix-org/synapse/issues/10738))
- Clean up some of the federation event authentication code for clarity. ([\#10744](https://github.com/matrix-org/synapse/issues/10744), [\#10745](https://github.com/matrix-org/synapse/issues/10745), [\#10746](https://github.com/matrix-org/synapse/issues/10746), [\#10771](https://github.com/matrix-org/synapse/issues/10771), [\#10773](https://github.com/matrix-org/synapse/issues/10773), [\#10781](https://github.com/matrix-org/synapse/issues/10781))
- Add an index to `presence_stream` to hopefully speed up startups a little. ([\#10748](https://github.com/matrix-org/synapse/issues/10748))
- Refactor event size checking code to simplify searching the codebase for the origins of certain error strings that are occasionally emitted. ([\#10750](https://github.com/matrix-org/synapse/issues/10750))
- Move tests relating to rooms having encryption out of the user directory tests. ([\#10752](https://github.com/matrix-org/synapse/issues/10752))
- Use `attrs` internally for the URL preview code & update documentation. ([\#10753](https://github.com/matrix-org/synapse/issues/10753))
- Minor speed ups when joining large rooms over federation. ([\#10754](https://github.com/matrix-org/synapse/issues/10754), [\#10755](https://github.com/matrix-org/synapse/issues/10755), [\#10756](https://github.com/matrix-org/synapse/issues/10756), [\#10780](https://github.com/matrix-org/synapse/issues/10780), [\#10784](https://github.com/matrix-org/synapse/issues/10784))
- Add a constant for `m.federate`. ([\#10775](https://github.com/matrix-org/synapse/issues/10775))
- Add a script to update the Debian changelog in a Docker container for systems that are not Debian-based. ([\#10778](https://github.com/matrix-org/synapse/issues/10778))
- Change the format of authenticated users in logs when a user is being puppeted by an admin user. ([\#10779](https://github.com/matrix-org/synapse/issues/10779))
- Remove fixed and flakey tests from the Sytest blacklist. ([\#10788](https://github.com/matrix-org/synapse/issues/10788))
- Improve internal details of the user directory code. ([\#10789](https://github.com/matrix-org/synapse/issues/10789))
- Use direct references to config flags. ([\#10798](https://github.com/matrix-org/synapse/issues/10798))
- Ensure the Rust reporter passes type checking with jaeger-client 4.7's type annotations. ([\#10799](https://github.com/matrix-org/synapse/issues/10799))
Synapse 1.42.0 (2021-09-07)
===========================

6
debian/changelog vendored
View file

@ -1,3 +1,9 @@
matrix-synapse-py3 (1.43.0~rc1) stable; urgency=medium
* New synapse release 1.43.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 14 Sep 2021 11:39:46 +0100
matrix-synapse-py3 (1.42.0) stable; urgency=medium
* New synapse release 1.42.0.

View file

@ -162,7 +162,7 @@ WORKERS_CONFIG = {
"shared_extra_conf": {},
"worker_extra_conf": (
"worker_main_http_uri: http://127.0.0.1:%d"
% (MAIN_PROCESS_HTTP_LISTENER_PORT,),
% (MAIN_PROCESS_HTTP_LISTENER_PORT,)
),
},
}

View file

@ -34,14 +34,16 @@
- [Application Services](application_services.md)
- [Server Notices](server_notices.md)
- [Consent Tracking](consent_tracking.md)
- [URL Previews](url_previews.md)
- [URL Previews](development/url_previews.md)
- [User Directory](user_directory.md)
- [Message Retention Policies](message_retention_policies.md)
- [Pluggable Modules](modules.md)
- [Third Party Rules]()
- [Spam Checker](spam_checker.md)
- [Presence Router](presence_router_module.md)
- [Media Storage Providers]()
- [Pluggable Modules](modules/index.md)
- [Writing a module](modules/writing_a_module.md)
- [Spam checker callbacks](modules/spam_checker_callbacks.md)
- [Third-party rules callbacks](modules/third_party_rules_callbacks.md)
- [Presence router callbacks](modules/presence_router_callbacks.md)
- [Account validity callbacks](modules/account_validity_callbacks.md)
- [Porting a legacy module to the new interface](modules/porting_legacy_module.md)
- [Workers](workers.md)
- [Using `synctl` with Workers](synctl_workers.md)
- [Systemd](systemd-with-workers/README.md)

View file

@ -481,32 +481,44 @@ The following fields are returned in the JSON response body:
* `new_room_id` - A string representing the room ID of the new room.
## Undoing room shutdowns
## Undoing room deletions
*Note*: This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level,
*Note*: This guide may be outdated by the time you read it. By nature of room deletions being performed at the database level,
the structure can and does change without notice.
First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
First, it's important to understand that a room deletion is very destructive. Undoing a deletion is not as simple as pretending it
never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible
to recover at all:
* If the room was invite-only, your users will need to be re-invited.
* If the room no longer has any members at all, it'll be impossible to rejoin.
* The first user to rejoin will have to do so via an alias on a different server.
* The first user to rejoin will have to do so via an alias on a different
server (or receive an invite from a user on a different server).
With all that being said, if you still want to try and recover the room:
1. For safety reasons, shut down Synapse.
2. In the database, run `DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';`
* For caution: it's recommended to run this in a transaction: `BEGIN; DELETE ...;`, verify you got 1 result, then `COMMIT;`.
* The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
3. Restart Synapse.
1. If the room was `block`ed, you must unblock it on your server. This can be
accomplished as follows:
You will have to manually handle, if you so choose, the following:
1. For safety reasons, shut down Synapse.
2. In the database, run `DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';`
* For caution: it's recommended to run this in a transaction: `BEGIN; DELETE ...;`, verify you got 1 result, then `COMMIT;`.
* The room ID is the same one supplied to the delete room API, not the Content Violation room.
3. Restart Synapse.
* Aliases that would have been redirected to the Content Violation room.
* Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
* Removal of the Content Violation room if desired.
This step is unnecessary if `block` was not set.
2. Any room aliases on your server that pointed to the deleted room may have
been deleted, or redirected to the Content Violation room. These will need
to be restored manually.
3. Users on your server that were in the deleted room will have been kicked
from the room. Consider whether you want to update their membership
(possibly via the [Edit Room Membership API](room_membership.md)) or let
them handle rejoining themselves.
4. If `new_room_user_id` was given, a 'Content Violation' room will have been
created. Consider whether you want to delete that room.
## Deprecated endpoint
@ -536,7 +548,7 @@ POST /_synapse/admin/v1/rooms/<room_id_or_alias>/make_room_admin
# Forward Extremities Admin API
Enables querying and deleting forward extremities from rooms. When a lot of forward
extremities accumulate in a room, performance can become degraded. For details, see
extremities accumulate in a room, performance can become degraded. For details, see
[#1760](https://github.com/matrix-org/synapse/issues/1760).
## Check for forward extremities
@ -565,7 +577,7 @@ A response as follows will be returned:
## Deleting forward extremities
**WARNING**: Please ensure you know what you're doing and have read
**WARNING**: Please ensure you know what you're doing and have read
the related issue [#1760](https://github.com/matrix-org/synapse/issues/1760).
Under no situations should this API be executed as an automated maintenance task!

View file

@ -0,0 +1,51 @@
URL Previews
============
The `GET /_matrix/media/r0/preview_url` endpoint provides a generic preview API
for URLs which outputs [Open Graph](https://ogp.me/) responses (with some Matrix
specific additions).
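For illustration, a client exercises this endpoint with an ordinary authenticated `GET` request. A minimal sketch follows; the homeserver URL and access token are placeholders:
```python
import json
import urllib.parse
import urllib.request

HOMESERVER = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "syt_placeholder_token"     # placeholder access token


def preview_url(url: str) -> dict:
    """Ask the media repository for an Open Graph preview of `url`."""
    query = urllib.parse.urlencode({"url": url})
    request = urllib.request.Request(
        f"{HOMESERVER}/_matrix/media/r0/preview_url?{query}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


# Example: print the page title from the Open Graph response.
print(preview_url("https://matrix.org").get("og:title"))
```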
This does have trade-offs compared to other designs:
* Pros:
* Simple and flexible; can be used by any clients at any point
* Cons:
* If each homeserver provides one of these independently, all the HSes in a
room may needlessly DoS the target URI
* The URL metadata must be stored somewhere, rather than just using Matrix
itself to store the media.
* Matrix cannot be used to distribute the metadata between homeservers.
When Synapse is asked to preview a URL it does the following:
1. Checks against a URL blacklist (defined as `url_preview_url_blacklist` in the
config).
2. Checks the in-memory cache by URLs and returns the result if it exists. (This
is also used to de-duplicate processing of multiple in-flight requests at once.)
3. Kicks off a background process to generate a preview:
1. Checks the database cache by URL and timestamp and returns the result if it
has not expired and was successful (a 2xx return code).
2. Checks if the URL matches an oEmbed pattern. If it does, fetches the oEmbed
response. If this is an image, replaces the URL to fetch and continues. If
it is HTML content, uses the HTML as the document and continues.
3. If it doesn't match an oEmbed pattern, downloads the URL and stores it
into a file via the media storage provider and saves the local media
metadata.
4. If the media is an image:
1. Generates thumbnails.
2. Generates an Open Graph response based on image properties.
5. If the media is HTML:
1. Decodes the HTML via the stored file.
2. Generates an Open Graph response from the HTML.
3. If an image exists in the Open Graph response:
1. Downloads the URL and stores it into a file via the media storage
provider and saves the local media metadata.
2. Generates thumbnails.
3. Updates the Open Graph response based on image properties.
6. Stores the result in the database cache.
4. Returns the result.
The in-memory cache expires after 1 hour.
Expired entries in the database cache (and their associated media files) are
deleted every 10 seconds. The default expiration time is 1 hour from download.

View file

@ -10,7 +10,7 @@ Logcontexts are also used for CPU and database accounting, so that we
can track which requests were responsible for high CPU use or database
activity.
The `synapse.logging.context` module provides a facilities for managing
The `synapse.logging.context` module provides facilities for managing
the current log context (as well as providing the `LoggingContextFilter`
class).
@ -351,7 +351,7 @@ and the awaitable chain is now orphaned, and will be garbage-collected at
some point. Note that `await_something_interesting` is a coroutine,
which Python implements as a generator function. When Python
garbage-collects generator functions, it gives them a chance to
clean up by making the `async` (or `yield`) raise a `GeneratorExit`
clean up by making the `await` (or `yield`) raise a `GeneratorExit`
exception. In our case, that means that the `__exit__` handler of
`PreserveLoggingContext` will carefully restore the request context, but
there is now nothing waiting for its return, so the request context is

View file

@ -11,7 +11,7 @@ Note that this will give administrative access to synapse to **all users** with
shell access to the server. It should therefore **not** be enabled in
environments where untrusted users have shell access.
***
## Configuring the manhole
To enable it, first uncomment the `manhole` listener configuration in
`homeserver.yaml`. The configuration is slightly different if you're using docker.
@ -52,16 +52,37 @@ listeners:
type: manhole
```
#### Accessing synapse manhole
### Security settings
The following config options are available:
- `username` - The username for the manhole (defaults to `matrix`)
- `password` - The password for the manhole (defaults to `rabbithole`)
- `ssh_priv_key_path` - The path to a private SSH key (defaults to a hardcoded value)
- `ssh_pub_key_path` - The path to a public SSH key (defaults to a hardcoded value)
For example:
```yaml
manhole_settings:
username: manhole
password: mypassword
ssh_priv_key_path: "/home/synapse/manhole_keys/id_rsa"
ssh_pub_key_path: "/home/synapse/manhole_keys/id_rsa.pub"
```
## Accessing synapse manhole
Then restart synapse, and point an ssh client at port 9000 on localhost, using
the username `matrix`:
the username and password configured in `homeserver.yaml` - with the default
configuration, this would be:
```bash
ssh -p9000 matrix@localhost
```
The password is `rabbithole`.
Then enter the password when prompted (the default is `rabbithole`).
This gives a Python REPL in which `hs` gives access to the
`synapse.server.HomeServer` object - which in turn gives access to many other

View file

@ -27,4 +27,4 @@ Remote content is cached under `"remote_content"` directory. Each item of
remote content is assigned a local `"filesystem_id"` to ensure that the
directory structure `"remote_content/server_name/aa/bb/ccccccccdddddddddddd"`
is appropriate. Thumbnails for remote content are stored under
`"remote_thumbnails/server_name/..."`
`"remote_thumbnail/server_name/..."`

View file

@ -1,399 +0,0 @@
# Modules
Synapse supports extending its functionality by configuring external modules.
## Using modules
To use a module on Synapse, add it to the `modules` section of the configuration file:
```yaml
modules:
- module: my_super_module.MySuperClass
config:
do_thing: true
- module: my_other_super_module.SomeClass
config: {}
```
Each module is defined by a path to a Python class as well as a configuration. This
information for a given module should be available in the module's own documentation.
**Note**: When using third-party modules, you effectively allow someone else to run
custom code on your Synapse homeserver. Server admins are encouraged to verify the
provenance of the modules they use on their homeserver and make sure the modules aren't
running malicious code on their instance.
Also note that we are currently in the process of migrating module interfaces to this
system. While some interfaces might be compatible with it, others still require
configuring modules in another part of Synapse's configuration file. Currently, only the
spam checker interface is compatible with this new system.
## Writing a module
A module is a Python class that uses Synapse's module API to interact with the
homeserver. It can register callbacks that Synapse will call on specific operations, as
well as web resources to attach to Synapse's web server.
When instantiated, a module is given its parsed configuration as well as an instance of
the `synapse.module_api.ModuleApi` class. The configuration is a dictionary, and is
either the output of the module's `parse_config` static method (see below), or the
configuration associated with the module in Synapse's configuration file.
See the documentation for the `ModuleApi` class
[here](https://github.com/matrix-org/synapse/blob/master/synapse/module_api/__init__.py).
### Handling the module's configuration
A module can implement the following static method:
```python
@staticmethod
def parse_config(config: dict) -> dict
```
This method is given a dictionary resulting from parsing the YAML configuration for the
module. It may modify it (for example by parsing durations expressed as strings (e.g.
"5d") into milliseconds, etc.), and return the modified dictionary. It may also verify
that the configuration is correct, and raise an instance of
`synapse.module_api.errors.ConfigError` if not.
### Registering a web resource
Modules can register web resources onto Synapse's web server using the following module
API method:
```python
def ModuleApi.register_web_resource(path: str, resource: IResource) -> None
```
The path is the full absolute path to register the resource at. For example, if you
register a resource for the path `/_synapse/client/my_super_module/say_hello`, Synapse
will serve it at `http(s)://[HS_URL]/_synapse/client/my_super_module/say_hello`. Note
that Synapse does not allow registering resources for several sub-paths in the `/_matrix`
namespace (such as anything under `/_matrix/client` for example). It is strongly
recommended that modules register their web resources under the `/_synapse/client`
namespace.
The provided resource is a Python class that implements Twisted's [IResource](https://twistedmatrix.com/documents/current/api/twisted.web.resource.IResource.html)
interface (such as [Resource](https://twistedmatrix.com/documents/current/api/twisted.web.resource.Resource.html)).
Only one resource can be registered for a given path. If several modules attempt to
register a resource for the same path, the module that appears first in Synapse's
configuration file takes priority.
Modules **must** register their web resources in their `__init__` method.
### Registering a callback
Modules can use Synapse's module API to register callbacks. Callbacks are functions that
Synapse will call when performing specific actions. Callbacks must be asynchronous, and
are split into categories. A single module may implement callbacks from multiple categories,
and is under no obligation to implement all callbacks from the categories it registers
callbacks for.
Modules can register callbacks using one of the module API's `register_[...]_callbacks`
methods. The callback functions are passed to these methods as keyword arguments, with
the callback name as the argument name and the function as its value. This is demonstrated
in the example below. A `register_[...]_callbacks` method exists for each module type
documented in this section.
#### Spam checker callbacks
Spam checker callbacks allow module developers to implement spam mitigation actions for
Synapse instances. Spam checker callbacks can be registered using the module API's
`register_spam_checker_callbacks` method.
The available spam checker callbacks are:
```python
async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool, str]
```
Called when receiving an event from a client or via federation. The module can return
either a `bool` to indicate whether the event must be rejected because of spam, or a `str`
to indicate the event must be rejected because of spam and to give a rejection reason to
forward to clients.
```python
async def user_may_invite(inviter: str, invitee: str, room_id: str) -> bool
```
Called when processing an invitation. The module must return a `bool` indicating whether
the inviter can invite the invitee to the given room. Both inviter and invitee are
represented by their Matrix user ID (e.g. `@alice:example.com`).
```python
async def user_may_create_room(user: str) -> bool
```
Called when processing a room creation request. The module must return a `bool` indicating
whether the given user (represented by their Matrix user ID) is allowed to create a room.
```python
async def user_may_create_room_alias(user: str, room_alias: "synapse.types.RoomAlias") -> bool
```
Called when trying to associate an alias with an existing room. The module must return a
`bool` indicating whether the given user (represented by their Matrix user ID) is allowed
to set the given alias.
```python
async def user_may_publish_room(user: str, room_id: str) -> bool
```
Called when trying to publish a room to the homeserver's public rooms directory. The
module must return a `bool` indicating whether the given user (represented by their
Matrix user ID) is allowed to publish the given room.
```python
async def check_username_for_spam(user_profile: Dict[str, str]) -> bool
```
Called when computing search results in the user directory. The module must return a
`bool` indicating whether the given user profile can appear in search results. The profile
is represented as a dictionary with the following keys:
* `user_id`: The Matrix ID for this user.
* `display_name`: The user's display name.
* `avatar_url`: The `mxc://` URL to the user's avatar.
The module is given a copy of the original dictionary, so modifying it from within the
module cannot modify a user's profile when included in user directory search results.
```python
async def check_registration_for_spam(
email_threepid: Optional[dict],
username: Optional[str],
request_info: Collection[Tuple[str, str]],
auth_provider_id: Optional[str] = None,
) -> "synapse.spam_checker_api.RegistrationBehaviour"
```
Called when registering a new user. The module must return a `RegistrationBehaviour`
indicating whether the registration can go through or must be denied, or whether the user
may be allowed to register but will be shadow banned.
The arguments passed to this callback are:
* `email_threepid`: The email address used for registering, if any.
* `username`: The username the user would like to register. Can be `None`, meaning that
Synapse will generate one later.
* `request_info`: A collection of tuples, whose first item is a user agent and whose
second item is an IP address. These user agents and IP addresses are the ones that were
used during the registration process.
* `auth_provider_id`: The identifier of the SSO authentication provider, if any.
```python
async def check_media_file_for_spam(
file_wrapper: "synapse.rest.media.v1.media_storage.ReadableFileWrapper",
file_info: "synapse.rest.media.v1._base.FileInfo",
) -> bool
```
Called when storing a local or remote file. The module must return a boolean indicating
whether the given file can be stored in the homeserver's media store.
#### Account validity callbacks
Account validity callbacks allow module developers to add extra steps to verify the
validity of an account, i.e. see if a user can be granted access to their account on the
Synapse instance. Account validity callbacks can be registered using the module API's
`register_account_validity_callbacks` method.
The available account validity callbacks are:
```python
async def is_user_expired(user: str) -> Optional[bool]
```
Called when processing any authenticated request (except for logout requests). The module
can return a `bool` to indicate whether the user has expired and should be locked out of
their account, or `None` if the module wasn't able to figure it out. The user is
represented by their Matrix user ID (e.g. `@alice:example.com`).
If the module returns `True`, the current request will be denied with the error code
`ORG_MATRIX_EXPIRED_ACCOUNT` and the HTTP status code 403. Note that this doesn't
invalidate the user's access token.
```python
async def on_user_registration(user: str) -> None
```
Called after successfully registering a user, in case the module needs to perform extra
operations to keep track of them (e.g. adding them to a database table). The user is
represented by their Matrix user ID.
#### Third party rules callbacks
Third party rules callbacks allow module developers to add extra checks to verify the
validity of incoming events. Third party event rules callbacks can be registered using
the module API's `register_third_party_rules_callbacks` method.
The available third party rules callbacks are:
```python
async def check_event_allowed(
event: "synapse.events.EventBase",
state_events: "synapse.types.StateMap",
) -> Tuple[bool, Optional[dict]]
```
**<span style="color:red">
This callback is very experimental and can and will break without notice. Module developers
are encouraged to implement `check_event_for_spam` from the spam checker category instead.
</span>**
Called when processing any incoming event, with the event and a `StateMap`
representing the current state of the room the event is being sent into. A `StateMap` is
a dictionary that maps tuples containing an event type and a state key to the
corresponding state event. For example retrieving the room's `m.room.create` event from
the `state_events` argument would look like this: `state_events.get(("m.room.create", ""))`.
The module must return a boolean indicating whether the event can be allowed.
Note that this callback function processes incoming events coming via federation
traffic (on top of client traffic). This means denying an event might cause the local
copy of the room's history to diverge from that of remote servers. This may cause
federation issues in the room. It is strongly recommended to only deny events using this
callback function if the sender is a local user, or in a private federation in which all
servers are using the same module, with the same configuration.
If the boolean returned by the module is `True`, it may also tell Synapse to replace the
event with new data by returning the new event's data as a dictionary. In order to do
that, it is recommended the module calls `event.get_dict()` to get the current event as a
dictionary, and modify the returned dictionary accordingly.
Note that replacing the event only works for events sent by local users, not for events
received over federation.
```python
async def on_create_room(
requester: "synapse.types.Requester",
request_content: dict,
is_requester_admin: bool,
) -> None
```
Called when processing a room creation request, with the `Requester` object for the user
performing the request, a dictionary representing the room creation request's JSON body
(see [the spec](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-createroom)
for a list of possible parameters), and a boolean indicating whether the user performing
the request is a server admin.
Modules can modify the `request_content` (by e.g. adding events to its `initial_state`),
or deny the room's creation by raising a `module_api.errors.SynapseError`.
#### Presence router callbacks
Presence router callbacks allow module developers to specify additional users (local or remote)
to receive certain presence updates from local users. Presence router callbacks can be
registered using the module API's `register_presence_router_callbacks` method.
The available presence router callbacks are:
```python
async def get_users_for_states(
self,
state_updates: Iterable["synapse.api.UserPresenceState"],
) -> Dict[str, Set["synapse.api.UserPresenceState"]]:
```
**Requires** `get_interested_users` to also be registered
Called when processing updates to the presence state of one or more users. This callback can
be used to instruct the server to forward that presence state to specific users. The module
must return a dictionary that maps from Matrix user IDs (which can be local or remote) to the
`UserPresenceState` changes that they should be forwarded.
Synapse will then attempt to send the specified presence updates to each user when possible.
```python
async def get_interested_users(
self,
user_id: str
) -> Union[Set[str], "synapse.module_api.PRESENCE_ALL_USERS"]
```
**Requires** `get_users_for_states` to also be registered
Called when determining which users someone should be able to see the presence state of. This
callback should return complementary results to `get_users_for_states` or the presence information
may not be properly forwarded.
The callback is given the Matrix user ID for a local user that is requesting presence data and
should return the Matrix user IDs of the users whose presence state they are allowed to
query. The returned users can be local or remote.
Alternatively the callback can return `synapse.module_api.PRESENCE_ALL_USERS`
to indicate that the user should receive updates from all known users.
For example, if the user `@alice:example.org` is passed to this method, and the set
`{"@bob:example.com", "@charlie:somewhere.org"}` is returned, this signifies that Alice
should receive presence updates sent by Bob and Charlie, regardless of whether these users
share a room.
### Porting an existing module that uses the old interface
In order to port a module that uses Synapse's old module interface, its author needs to:
* ensure the module's callbacks are all asynchronous.
* register their callbacks using one or more of the `register_[...]_callbacks` methods
from the `ModuleApi` class in the module's `__init__` method (see [this section](#registering-a-callback)
for more info).
Additionally, if the module is packaged with an additional web resource, the module
should register this resource in its `__init__` method using the `register_web_resource`
method from the `ModuleApi` class (see [this section](#registering-a-web-resource) for
more info).
The module's author should also update any example in the module's configuration to only
use the new `modules` section in Synapse's configuration file (see [this section](#using-modules)
for more info).
### Example
The example below is a module that implements the spam checker callback
`user_may_create_room` to deny room creation to user `@evilguy:example.com`, and registers
a web resource to the path `/_synapse/client/demo/hello` that returns a JSON object.
```python
import json
from twisted.web.resource import Resource
from twisted.web.server import Request
from synapse.module_api import ModuleApi
class DemoResource(Resource):
def __init__(self, config):
super(DemoResource, self).__init__()
self.config = config
def render_GET(self, request: Request):
name = request.args.get(b"name")[0].decode("utf-8")
request.setHeader(b"Content-Type", b"application/json")
return json.dumps({"hello": name}).encode("utf-8")
class DemoModule:
def __init__(self, config: dict, api: ModuleApi):
self.config = config
self.api = api
self.api.register_web_resource(
path="/_synapse/client/demo/hello",
resource=DemoResource(self.config),
)
self.api.register_spam_checker_callbacks(
user_may_create_room=self.user_may_create_room,
)
@staticmethod
def parse_config(config):
return config
async def user_may_create_room(self, user: str) -> bool:
if user == "@evilguy:example.com":
return False
return True
```

View file

@ -0,0 +1,33 @@
# Account validity callbacks
Account validity callbacks allow module developers to add extra steps to verify the
validity of an account, i.e. see if a user can be granted access to their account on the
Synapse instance. Account validity callbacks can be registered using the module API's
`register_account_validity_callbacks` method.
The available account validity callbacks are:
### `is_user_expired`
```python
async def is_user_expired(user: str) -> Optional[bool]
```
Called when processing any authenticated request (except for logout requests). The module
can return a `bool` to indicate whether the user has expired and should be locked out of
their account, or `None` if the module wasn't able to figure it out. The user is
represented by their Matrix user ID (e.g. `@alice:example.com`).
If the module returns `True`, the current request will be denied with the error code
`ORG_MATRIX_EXPIRED_ACCOUNT` and the HTTP status code 403. Note that this doesn't
invalidate the user's access token.
### `on_user_registration`
```python
async def on_user_registration(user: str) -> None
```
Called after successfully registering a user, in case the module needs to perform extra
operations to keep track of them (e.g. adding them to a database table). The user is
represented by their Matrix user ID.
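As an illustration, a minimal module wiring up both callbacks might look like the sketch below. The `expired_users` config key is hypothetical, not a real Synapse option:
```python
from typing import Optional

from synapse.module_api import ModuleApi


class ExampleAccountValidity:
    def __init__(self, config: dict, api: ModuleApi):
        self.api = api
        # Hypothetical config: a static list of user IDs to treat as expired.
        self.expired_users = config.get("expired_users", [])
        self.api.register_account_validity_callbacks(
            is_user_expired=self.is_user_expired,
            on_user_registration=self.on_user_registration,
        )

    async def is_user_expired(self, user: str) -> Optional[bool]:
        if user in self.expired_users:
            return True
        # None tells Synapse this module has no opinion about this user.
        return None

    async def on_user_registration(self, user: str) -> None:
        # A real module might record the registration in a database here.
        print("registered new user:", user)
```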

34
docs/modules/index.md Normal file
View file

@ -0,0 +1,34 @@
# Modules
Synapse supports extending its functionality by configuring external modules.
## Using modules
To use a module on Synapse, add it to the `modules` section of the configuration file:
```yaml
modules:
- module: my_super_module.MySuperClass
config:
do_thing: true
- module: my_other_super_module.SomeClass
config: {}
```
Each module is defined by a path to a Python class as well as a configuration. This
information for a given module should be available in the module's own documentation.
**Note**: When using third-party modules, you effectively allow someone else to run
custom code on your Synapse homeserver. Server admins are encouraged to verify the
provenance of the modules they use on their homeserver and make sure the modules aren't
running malicious code on their instance.
Also note that we are currently in the process of migrating module interfaces to this
system. While some interfaces might be compatible with it, others still require
configuring modules in another part of Synapse's configuration file.
Currently, only the following pre-existing interfaces are compatible with this new system:
* spam checker
* third-party rules
* presence router

View file

@ -0,0 +1,17 @@
# Porting an existing module that uses the old interface
In order to port a module that uses Synapse's old module interface, its author needs to:
* ensure the module's callbacks are all asynchronous.
* register their callbacks using one or more of the `register_[...]_callbacks` methods
from the `ModuleApi` class in the module's `__init__` method (see [this section](writing_a_module.html#registering-a-callback)
for more info).
Additionally, if the module is packaged with an additional web resource, the module
should register this resource in its `__init__` method using the `register_web_resource`
method from the `ModuleApi` class (see [this section](writing_a_module.html#registering-a-web-resource) for
more info).
The module's author should also update any example in the module's configuration to only
use the new `modules` section in Synapse's configuration file (see [this section](index.html#using-modules)
for more info).
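For instance, a legacy spam checker implementing `user_may_create_room` could be ported along the lines of the sketch below (the class name and callback body are illustrative):
```python
from synapse.module_api import ModuleApi


class MyPortedSpamChecker:
    def __init__(self, config: dict, api: ModuleApi):
        self.api = api
        # Under the old interface this callback was discovered by name; with
        # the new interface it must be registered explicitly in __init__.
        self.api.register_spam_checker_callbacks(
            user_may_create_room=self.user_may_create_room,
        )

    @staticmethod
    def parse_config(config: dict) -> dict:
        return config

    # The callback keeps its old name and signature, but must be asynchronous.
    async def user_may_create_room(self, user: str) -> bool:
        return True
```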

View file

@ -0,0 +1,90 @@
# Presence router callbacks
Presence router callbacks allow module developers to specify additional users (local or remote)
to receive certain presence updates from local users. Presence router callbacks can be
registered using the module API's `register_presence_router_callbacks` method.
## Callbacks
The available presence router callbacks are:
### `get_users_for_states`
```python
async def get_users_for_states(
state_updates: Iterable["synapse.api.UserPresenceState"],
) -> Dict[str, Set["synapse.api.UserPresenceState"]]
```
**Requires** `get_interested_users` to also be registered
Called when processing updates to the presence state of one or more users. This callback can
be used to instruct the server to forward that presence state to specific users. The module
must return a dictionary that maps from Matrix user IDs (which can be local or remote) to the
`UserPresenceState` changes that they should be forwarded.
Synapse will then attempt to send the specified presence updates to each user when possible.
### `get_interested_users`
```python
async def get_interested_users(
user_id: str
) -> Union[Set[str], "synapse.module_api.PRESENCE_ALL_USERS"]
```
**Requires** `get_users_for_states` to also be registered
Called when determining which users someone should be able to see the presence state of. This
callback should return complementary results to `get_users_for_states` or the presence information
may not be properly forwarded.
The callback is given the Matrix user ID for a local user that is requesting presence data and
should return the Matrix user IDs of the users whose presence state they are allowed to
query. The returned users can be local or remote.
Alternatively the callback can return `synapse.module_api.PRESENCE_ALL_USERS`
to indicate that the user should receive updates from all known users.
## Example
The example below is a module that implements both presence router callbacks, and ensures
that `@alice:example.com` receives all presence updates from `@bob:example.com` and
`@charlie:somewhere.org`, regardless of whether Alice shares a room with any of them.
```python
from typing import Dict, Iterable, Set, Union
from synapse.module_api import ModuleApi
class CustomPresenceRouter:
def __init__(self, config: dict, api: ModuleApi):
self.api = api
self.api.register_presence_router_callbacks(
get_users_for_states=self.get_users_for_states,
get_interested_users=self.get_interested_users,
)
async def get_users_for_states(
self,
state_updates: Iterable["synapse.api.UserPresenceState"],
) -> Dict[str, Set["synapse.api.UserPresenceState"]]:
res = {}
for update in state_updates:
if (
update.user_id == "@bob:example.com"
or update.user_id == "@charlie:somewhere.org"
):
res.setdefault("@alice:example.com", set()).add(update)
return res
async def get_interested_users(
self,
user_id: str,
) -> Union[Set[str], "synapse.module_api.PRESENCE_ALL_USERS"]:
if user_id == "@alice:example.com":
return {"@bob:example.com", "@charlie:somewhere.org"}
return set()
```

View file

@ -0,0 +1,160 @@
# Spam checker callbacks
Spam checker callbacks allow module developers to implement spam mitigation actions for
Synapse instances. Spam checker callbacks can be registered using the module API's
`register_spam_checker_callbacks` method.
## Callbacks
The available spam checker callbacks are:
### `check_event_for_spam`
```python
async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool, str]
```
Called when receiving an event from a client or via federation. The module can return
either a `bool` to indicate whether the event must be rejected because of spam, or a `str`
to indicate the event must be rejected because of spam and to give a rejection reason to
forward to clients.
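For example, a callback that rejects events whose body contains a banned phrase, and returns a human-readable reason, might look like this sketch (the banned phrase is purely illustrative):
```python
from typing import Union


async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool, str]:
    # Returning a str rejects the event and forwards the reason to clients.
    body = event.content.get("body", "")
    if "buy cheap followers" in body:
        return "Message rejected: advertising is not allowed on this server"
    # Returning False means the event is not considered spam.
    return False
```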
### `user_may_invite`
```python
async def user_may_invite(inviter: str, invitee: str, room_id: str) -> bool
```
Called when processing an invitation. The module must return a `bool` indicating whether
the inviter can invite the invitee to the given room. Both inviter and invitee are
represented by their Matrix user ID (e.g. `@alice:example.com`).
### `user_may_create_room`
```python
async def user_may_create_room(user: str) -> bool
```
Called when processing a room creation request. The module must return a `bool` indicating
whether the given user (represented by their Matrix user ID) is allowed to create a room.
### `user_may_create_room_alias`
```python
async def user_may_create_room_alias(user: str, room_alias: "synapse.types.RoomAlias") -> bool
```
Called when trying to associate an alias with an existing room. The module must return a
`bool` indicating whether the given user (represented by their Matrix user ID) is allowed
to set the given alias.
### `user_may_publish_room`
```python
async def user_may_publish_room(user: str, room_id: str) -> bool
```
Called when trying to publish a room to the homeserver's public rooms directory. The
module must return a `bool` indicating whether the given user (represented by their
Matrix user ID) is allowed to publish the given room.
### `check_username_for_spam`
```python
async def check_username_for_spam(user_profile: Dict[str, str]) -> bool
```
Called when computing search results in the user directory. The module must return a
`bool` indicating whether the given user profile can appear in search results. The profile
is represented as a dictionary with the following keys:
* `user_id`: The Matrix ID for this user.
* `display_name`: The user's display name.
* `avatar_url`: The `mxc://` URL to the user's avatar.
The module is given a copy of the original dictionary, so modifying it from within the
module cannot modify a user's profile when included in user directory search results.
### `check_registration_for_spam`
```python
async def check_registration_for_spam(
email_threepid: Optional[dict],
username: Optional[str],
request_info: Collection[Tuple[str, str]],
auth_provider_id: Optional[str] = None,
) -> "synapse.spam_checker_api.RegistrationBehaviour"
```
Called when registering a new user. The module must return a `RegistrationBehaviour`
indicating whether the registration can go through or must be denied, or whether the user
may be allowed to register but will be shadow banned.
The arguments passed to this callback are:
* `email_threepid`: The email address used for registering, if any.
* `username`: The username the user would like to register. Can be `None`, meaning that
Synapse will generate one later.
* `request_info`: A collection of tuples, whose first item is a user agent and whose
second item is an IP address. These user agents and IP addresses are the ones that were
used during the registration process.
* `auth_provider_id`: The identifier of the SSO authentication provider, if any.
### `check_media_file_for_spam`
```python
async def check_media_file_for_spam(
file_wrapper: "synapse.rest.media.v1.media_storage.ReadableFileWrapper",
file_info: "synapse.rest.media.v1._base.FileInfo",
) -> bool
```
Called when storing a local or remote file. The module must return a boolean indicating
whether the given file can be stored in the homeserver's media store.
## Example
The example below is a module that implements the spam checker callback
`check_event_for_spam` to deny any message sent by users whose Matrix user IDs are
mentioned in a configured list, and registers a web resource to the path
`/_synapse/client/list_spam_checker/is_evil` that returns a JSON object indicating
whether the provided user appears in that list.
```python
import json
from typing import Union
from twisted.web.resource import Resource
from twisted.web.server import Request
from synapse.module_api import ModuleApi
class IsUserEvilResource(Resource):
def __init__(self, config):
super(IsUserEvilResource, self).__init__()
self.evil_users = config.get("evil_users") or []
def render_GET(self, request: Request):
user = request.args.get(b"user")[0].decode("utf-8")
request.setHeader(b"Content-Type", b"application/json")
return json.dumps({"evil": user in self.evil_users}).encode("utf-8")
class ListSpamChecker:
def __init__(self, config: dict, api: ModuleApi):
self.api = api
self.evil_users = config.get("evil_users") or []
self.api.register_spam_checker_callbacks(
check_event_for_spam=self.check_event_for_spam,
)
self.api.register_web_resource(
path="/_synapse/client/list_spam_checker/is_evil",
resource=IsUserEvilResource(config),
)
async def check_event_for_spam(self, event: "synapse.events.EventBase") -> Union[bool, str]:
return event.sender not in self.evil_users
```

View file

@ -0,0 +1,125 @@
# Third party rules callbacks
Third party rules callbacks allow module developers to add extra checks to verify the
validity of incoming events. Third party event rules callbacks can be registered using
the module API's `register_third_party_rules_callbacks` method.
## Callbacks
The available third party rules callbacks are:
### `check_event_allowed`
```python
async def check_event_allowed(
event: "synapse.events.EventBase",
state_events: "synapse.types.StateMap",
) -> Tuple[bool, Optional[dict]]
```
**<span style="color:red">
This callback is very experimental and can and will break without notice. Module developers
are encouraged to implement `check_event_for_spam` from the spam checker category instead.
</span>**
Called when processing any incoming event, with the event and a `StateMap`
representing the current state of the room the event is being sent into. A `StateMap` is
a dictionary that maps tuples containing an event type and a state key to the
corresponding state event. For example retrieving the room's `m.room.create` event from
the `state_events` argument would look like this: `state_events.get(("m.room.create", ""))`.
The module must return a boolean indicating whether the event can be allowed.
Note that this callback function processes incoming events coming via federation
traffic (on top of client traffic). This means denying an event might cause the local
copy of the room's history to diverge from that of remote servers. This may cause
federation issues in the room. It is strongly recommended to only deny events using this
callback function if the sender is a local user, or in a private federation in which all
servers are using the same module, with the same configuration.
If the boolean returned by the module is `True`, it may also tell Synapse to replace the
event with new data by returning the new event's data as a dictionary. In order to do
that, it is recommended the module calls `event.get_dict()` to get the current event as a
dictionary, and modify the returned dictionary accordingly.
Note that replacing the event only works for events sent by local users, not for events
received over federation.
### `on_create_room`
```python
async def on_create_room(
requester: "synapse.types.Requester",
request_content: dict,
is_requester_admin: bool,
) -> None
```
Called when processing a room creation request, with the `Requester` object for the user
performing the request, a dictionary representing the room creation request's JSON body
(see [the spec](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-createroom)
for a list of possible parameters), and a boolean indicating whether the user performing
the request is a server admin.
Modules can modify the `request_content` (by e.g. adding events to its `initial_state`),
or deny the room's creation by raising a `module_api.errors.SynapseError`.
### `check_threepid_can_be_invited`
```python
async def check_threepid_can_be_invited(
medium: str,
address: str,
state_events: "synapse.types.StateMap",
) -> bool:
```
Called when processing an invite via a third-party identifier (i.e. email or phone number).
The module must return a boolean indicating whether the invite can go through.
### `check_visibility_can_be_modified`
```python
async def check_visibility_can_be_modified(
room_id: str,
state_events: "synapse.types.StateMap",
new_visibility: str,
) -> bool:
```
Called when changing the visibility of a room in the local public room directory. The
visibility is a string that's either "public" or "private". The module must return a
boolean indicating whether the change can go through.
## Example
The example below is a module that implements the third-party rules callback
`check_event_allowed` to censor incoming messages as dictated by a third-party service.
```python
from typing import Optional, Tuple
from synapse.module_api import ModuleApi
_DEFAULT_CENSOR_ENDPOINT = "https://my-internal-service.local/censor-event"
class EventCensorer:
def __init__(self, config: dict, api: ModuleApi):
self.api = api
self._endpoint = config.get("endpoint", _DEFAULT_CENSOR_ENDPOINT)
self.api.register_third_party_rules_callbacks(
check_event_allowed=self.check_event_allowed,
)
async def check_event_allowed(
self,
event: "synapse.events.EventBase",
state_events: "synapse.types.StateMap",
) -> Tuple[bool, Optional[dict]]:
event_dict = event.get_dict()
new_event_content = await self.api.http_client.post_json_get_json(
uri=self._endpoint, post_json=event_dict,
)
event_dict["content"] = new_event_content
return True, event_dict
```

View file

@ -0,0 +1,70 @@
# Writing a module
A module is a Python class that uses Synapse's module API to interact with the
homeserver. It can register callbacks that Synapse will call on specific operations, as
well as web resources to attach to Synapse's web server.
When instantiated, a module is given its parsed configuration as well as an instance of
the `synapse.module_api.ModuleApi` class. The configuration is a dictionary, and is
either the output of the module's `parse_config` static method (see below), or the
configuration associated with the module in Synapse's configuration file.
See the documentation for the `ModuleApi` class
[here](https://github.com/matrix-org/synapse/blob/master/synapse/module_api/__init__.py).
## Handling the module's configuration
A module can implement the following static method:
```python
@staticmethod
def parse_config(config: dict) -> dict
```
This method is given a dictionary resulting from parsing the YAML configuration for the
module. It may modify it (for example by parsing durations expressed as strings (e.g.
"5d") into milliseconds, etc.), and return the modified dictionary. It may also verify
that the configuration is correct, and raise an instance of
`synapse.module_api.errors.ConfigError` if not.
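For example, a `parse_config` implementation that converts a hypothetical `retention_period` option such as `"5d"` into milliseconds, validating it along the way, could be sketched as:
```python
from synapse.module_api.errors import ConfigError

_UNITS_TO_MS = {"s": 1000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}


@staticmethod
def parse_config(config: dict) -> dict:
    # Hypothetical option: a duration string such as "5d", stored back as ms.
    period = config.get("retention_period", "5d")
    try:
        config["retention_period_ms"] = int(period[:-1]) * _UNITS_TO_MS[period[-1]]
    except (KeyError, ValueError):
        raise ConfigError("retention_period must look like '30s', '10m', '2h' or '5d'")
    return config
```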
## Registering a web resource
Modules can register web resources onto Synapse's web server using the following module
API method:
```python
def ModuleApi.register_web_resource(path: str, resource: IResource) -> None
```
The path is the full absolute path to register the resource at. For example, if you
register a resource for the path `/_synapse/client/my_super_module/say_hello`, Synapse
will serve it at `http(s)://[HS_URL]/_synapse/client/my_super_module/say_hello`. Note
that Synapse does not allow registering resources for several sub-paths in the `/_matrix`
namespace (such as anything under `/_matrix/client` for example). It is strongly
recommended that modules register their web resources under the `/_synapse/client`
namespace.
The provided resource is a Python class that implements Twisted's [IResource](https://twistedmatrix.com/documents/current/api/twisted.web.resource.IResource.html)
interface (such as [Resource](https://twistedmatrix.com/documents/current/api/twisted.web.resource.Resource.html)).
Only one resource can be registered for a given path. If several modules attempt to
register a resource for the same path, the module that appears first in Synapse's
configuration file takes priority.
Modules **must** register their web resources in their `__init__` method.
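Putting this together, a minimal sketch of a module registering a web resource under the recommended namespace (the path and resource are illustrative):
```python
from twisted.web.resource import Resource

from synapse.module_api import ModuleApi


class HelloResource(Resource):
    def render_GET(self, request):
        request.setHeader(b"Content-Type", b"text/plain")
        return b"hello from my module"


class MyModule:
    def __init__(self, config: dict, api: ModuleApi):
        # Web resources must be registered in __init__.
        api.register_web_resource(
            path="/_synapse/client/my_module/hello",  # illustrative path
            resource=HelloResource(),
        )
```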
## Registering a callback
Modules can use Synapse's module API to register callbacks. Callbacks are functions that
Synapse will call when performing specific actions. Callbacks must be asynchronous, and
are split into categories. A single module may implement callbacks from multiple categories,
and is under no obligation to implement all callbacks from the categories it registers
callbacks for.
Modules can register callbacks using one of the module API's `register_[...]_callbacks`
methods. The callback functions are passed to these methods as keyword arguments, with
the callback name as the argument name and the function as its value. This is demonstrated
in the example below. A `register_[...]_callbacks` method exists for each category.
Callbacks for each category can be found on their respective page of the
[Synapse documentation website](https://matrix-org.github.io/synapse).
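The registration pattern described above, sketched with a single spam checker callback (the module and callback body are illustrative):
```python
from synapse.module_api import ModuleApi


class MyModule:
    def __init__(self, config: dict, api: ModuleApi):
        # Callbacks are passed as keyword arguments, keyed by callback name.
        api.register_spam_checker_callbacks(
            user_may_invite=self.user_may_invite,
        )

    async def user_may_invite(self, inviter: str, invitee: str, room_id: str) -> bool:
        # Illustrative only: allow every invite.
        return True
```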

View file

@ -335,6 +335,24 @@ listeners:
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
# Connection settings for the manhole
#
manhole_settings:
# The username for the manhole. This defaults to 'matrix'.
#
#username: manhole
# The password for the manhole. This defaults to 'rabbithole'.
#
#password: mypassword
# The private and public SSH key pair used to encrypt the manhole traffic.
# If these are left unset, then hardcoded and non-secret keys are used,
# which could allow traffic to be intercepted if sent over a public network.
#
#ssh_priv_key_path: CONFDIR/id_rsa
#ssh_pub_key_path: CONFDIR/id_rsa.pub
# Forward extremities can build up in a room due to networking delays between
# homeservers. Once this happens in a large room, calculation of the state of
# that room can become quite expensive. To mitigate this, once the number of
@ -1075,6 +1093,27 @@ url_preview_accept_language:
# - en
# oEmbed allows for easier embedding of content from a website. It can be
# used for generating URL previews for services which support it.
#
oembed:
# A default list of oEmbed providers is included with Synapse.
#
# Uncomment the following to disable using these default oEmbed URLs.
# Defaults to 'false'.
#
#disable_default_providers: true
# Additional files with oEmbed configuration (each should be in the
# form of providers.json).
#
# By default, this list is empty (so only the default providers.json
# is used).
#
#additional_providers:
# - oembed/my_providers.json
## Captcha ##
# See docs/CAPTCHA_SETUP.md for full details of configuring this.
@ -2047,7 +2086,7 @@ password_config:
#
#require_lowercase: true
# Whether a password must contain at least one lowercase letter.
# Whether a password must contain at least one uppercase letter.
# Defaults to 'false'.
#
#require_uppercase: true


@ -85,6 +85,15 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.43.0
## The spaces summary APIs can now be handled by workers
The [available worker applications documentation](https://matrix-org.github.io/synapse/latest/workers.html#available-worker-applications)
has been updated to reflect that calls to the `/spaces`, `/hierarchy`, and
`/summary` endpoints can now be routed to workers for both client API and
federation requests.
# Upgrading to v1.42.0
## Removal of old Room Admin API
@ -112,7 +121,6 @@ process failed. See the default templates linked above for an example.
Users will stop receiving message updates via email for addresses that were
once, but not still, linked to their account.
# Upgrading to v1.41.0
## Add support for routing outbound HTTP requests via a proxy for federation


@ -1,76 +0,0 @@
URL Previews
============
Design notes on a URL previewing service for Matrix:
Options are:
1. Have an AS which listens for URLs, downloads them, and inserts an event that describes their metadata.
* Pros:
* Decouples the implementation entirely from Synapse.
* Uses existing Matrix events & content repo to store the metadata.
* Cons:
* Which AS should provide this service for a room, and why should you trust it?
* Doesn't work well with E2E; you'd have to cut the AS into every room
* the AS would end up subscribing to every room anyway.
2. Have a generic preview API (nothing to do with Matrix) that provides a previewing service:
* Pros:
* Simple and flexible; can be used by any clients at any point
* Cons:
* If each HS provides one of these independently, all the HSes in a room may needlessly DoS the target URI
* We need somewhere to store the URL metadata rather than just using Matrix itself
* We can't piggyback on matrix to distribute the metadata between HSes.
3. Make the synapse of the sending user responsible for spidering the URL and inserting an event asynchronously which describes the metadata.
* Pros:
* Works transparently for all clients
* Piggy-backs nicely on using Matrix for distributing the metadata.
* No confusion as to which AS
* Cons:
* Doesn't work with E2E
* We might want to decouple the implementation of the spider from the HS, given spider behaviour can be quite complicated and evolve much more rapidly than the HS. It's more like a bot than a core part of the server.
4. Make the sending client use the preview API and insert the event itself when successful.
* Pros:
* Works well with E2E
* No custom server functionality
* Lets the client customise the preview that they send (like on FB)
* Cons:
* Entirely specific to the sending client, whereas it'd be nice if /any/ URL was correctly previewed if clients support it.
5. Have the option of specifying a shared (centralised) previewing service used by a room, to avoid all the different HSes in the room DoSing the target.
The best solution is probably a combination of 2 and 4.
* Sending clients do their best to create and send a preview at the point of sending the message, perhaps delaying the message until the preview is computed? (This also lets the user validate the preview before sending)
* Receiving clients have the option of going and creating their own preview if one doesn't arrive soon enough (or if the original sender didn't create one)
This is a bit magical though in that the preview could come from two entirely different sources - the sending HS or your local one. However, this can always be exposed to users: "Generate your own URL previews if none are available?"
This is also tantamount to senders calculating their own thumbnails for sending in advance of the main content: we are trusting the sender not to lie about the content in the thumbnail, whereas currently thumbnails are calculated by the receiving homeserver to avoid this attack.
However, this kind of phishing attack does exist whether we let senders pick their thumbnails or not, in that a malicious sender can send normal text messages around the attachment claiming it to be legitimate. We could rely on (future) reputation/abuse management to punish users who phish (be it with bogus metadata or bogus descriptions). Bogus metadata is particularly bad though, especially if it's avoidable.
As a first cut, let's do #2 and have the receiver hit the API to calculate its own previews (as it does currently for image thumbnails). We can then extend/optimise this to option 4 as a special extra if needed.
API
---
```
GET /_matrix/media/r0/preview_url?url=http://wherever.com
200 OK
{
    "og:type" : "article",
    "og:url" : "https://twitter.com/matrixdotorg/status/684074366691356672",
    "og:title" : "Matrix on Twitter",
    "og:image" : "https://pbs.twimg.com/profile_images/500400952029888512/yI0qtFi7_400x400.png",
    "og:description" : "“Synapse 0.12 is out! Lots of polishing, performance & bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”",
    "og:site_name" : "Twitter"
}
```
* Downloads the URL
* If HTML, just stores it in RAM and parses it for OG meta tags (a sketch of this step follows the list)
* Download any media OG meta tags to the media repo, and refer to them in the OG via mxc:// URIs.
* If a media filetype we know we can thumbnail: store it on disk, and hand it to the thumbnailer. Generate OG meta tags from the thumbnailer contents.
* Otherwise, don't bother downloading further.
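To make the OG-parsing step concrete, here is a minimal sketch of extracting `og:` meta
tags from fetched HTML using only the Python standard library; the real implementation
would also handle the media downloads and thumbnailing described above.

```python
from html.parser import HTMLParser


class OpenGraphParser(HTMLParser):
    """Collects values from <meta property="og:..." content="..."> tags."""

    def __init__(self) -> None:
        super().__init__()
        self.og: dict = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = dict(attrs)
        prop = attr_map.get("property", "")
        if prop.startswith("og:") and "content" in attr_map:
            # Keep the first occurrence of each tag, as OG consumers usually do.
            self.og.setdefault(prop, attr_map["content"])


def parse_og(html: str) -> dict:
    parser = OpenGraphParser()
    parser.feed(html)
    return parser.og


print(parse_og('<meta property="og:title" content="Matrix on Twitter">'))
# {'og:title': 'Matrix on Twitter'}
```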


@ -10,3 +10,40 @@ DB corruption) get stale or out of sync. If this happens, the current
workaround is to execute the SQL [here](https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/main/delta/53/user_dir_populate.sql)
and then restart Synapse. This should then start a background task to
flush the current tables and regenerate the directory.
Data model
----------
There are five relevant tables that collectively form the "user directory".
Three of them track a master list of all the users we could search for.
The last two (collectively called the "search tables") track who can
see who; a sketch of how they combine appears at the end of this section.
From all of these tables we exclude three types of local user:
- support users
- appservice users
- deactivated users
* `user_directory`. This contains the user_id, display name and avatar we'll
return when you search the directory.
- Because there's only one directory entry per user, it's important that we only
ever put publicly visible names here. Otherwise we might leak a private
nickname or avatar used in a private room.
- Indexed on rooms. Indexed on users.
* `user_directory_search`. To be joined to `user_directory`. It contains an extra
column that enables full text search based on user ids and display names.
Different schemas for SQLite and Postgres with different code paths to match.
- Indexed on the full text search data. Indexed on users.
* `user_directory_stream_pos`. When the initial background update to populate
the directory is complete, we record a stream position here. This indicates
that synapse should now listen for room changes and incrementally update
the directory where necessary.
* `users_in_public_rooms`. Contains associations between users and the public rooms they're in.
Used to determine which users are in public rooms and should be publicly visible in the directory.
* `users_who_share_private_rooms`. Rows are triples `(L, M, room id)` where `L`
is a local user and `M` is a local or remote user. `L` and `M` should be
different, but this isn't enforced by a constraint.
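To illustrate how the two search tables combine to answer "can this searcher see that
target?", here is a hypothetical sketch using plain Python data structures in place of
the real SQL tables; it is not Synapse's actual implementation.

```python
from typing import Set, Tuple

# Stand-ins for the real tables, with rows shaped as described above.
users_in_public_rooms: Set[Tuple[str, str]] = {
    ("@alice:example.org", "!public:example.org"),
}
users_who_share_private_rooms: Set[Tuple[str, str, str]] = {
    ("@bob:example.org", "@carol:other.org", "!private:example.org"),
}


def can_see_in_directory(searcher: str, target: str) -> bool:
    """A target is searchable by this searcher if the target is in any public
    room, or if the two share a private room."""
    if any(user == target for user, _room in users_in_public_rooms):
        return True
    return any(
        local == searcher and other == target
        for local, other, _room in users_who_share_private_rooms
    )


print(can_see_in_directory("@bob:example.org", "@carol:other.org"))  # True
```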


@ -209,6 +209,8 @@ expressions:
^/_matrix/federation/v1/user/devices/
^/_matrix/federation/v1/get_groups_publicised$
^/_matrix/key/v2/query
^/_matrix/federation/unstable/org.matrix.msc2946/spaces/
^/_matrix/federation/unstable/org.matrix.msc2946/hierarchy/
# Inbound federation transaction request
^/_matrix/federation/v1/send/
@ -220,6 +222,9 @@ expressions:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/hierarchy$
^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|unstable)/devices$
^/_matrix/client/(api/v1|r0|unstable)/keys/query$


@ -74,22 +74,13 @@ files =
synapse/storage/util,
synapse/streams,
synapse/types.py,
synapse/util/async_helpers.py,
synapse/util/caches,
synapse/util/daemonize.py,
synapse/util/hash.py,
synapse/util/iterutils.py,
synapse/util/linked_list.py,
synapse/util/metrics.py,
synapse/util/macaroons.py,
synapse/util/module_loader.py,
synapse/util/msisdn.py,
synapse/util/stringutils.py,
synapse/util,
synapse/visibility.py,
tests/replication,
tests/test_event_auth.py,
tests/test_utils,
tests/handlers/test_password_providers.py,
tests/handlers/test_room.py,
tests/handlers/test_room_summary.py,
tests/handlers/test_send_email.py,
tests/handlers/test_sync.py,
@ -98,6 +89,72 @@ files =
tests/util/test_itertools.py,
tests/util/test_stream_change_cache.py
[mypy-synapse.rest.client.*]
disallow_untyped_defs = True
[mypy-synapse.util.batching_queue]
disallow_untyped_defs = True
[mypy-synapse.util.caches.dictionary_cache]
disallow_untyped_defs = True
[mypy-synapse.util.file_consumer]
disallow_untyped_defs = True
[mypy-synapse.util.frozenutils]
disallow_untyped_defs = True
[mypy-synapse.util.hash]
disallow_untyped_defs = True
[mypy-synapse.util.httpresourcetree]
disallow_untyped_defs = True
[mypy-synapse.util.iterutils]
disallow_untyped_defs = True
[mypy-synapse.util.linked_list]
disallow_untyped_defs = True
[mypy-synapse.util.logcontext]
disallow_untyped_defs = True
[mypy-synapse.util.logformatter]
disallow_untyped_defs = True
[mypy-synapse.util.macaroons]
disallow_untyped_defs = True
[mypy-synapse.util.manhole]
disallow_untyped_defs = True
[mypy-synapse.util.module_loader]
disallow_untyped_defs = True
[mypy-synapse.util.msisdn]
disallow_untyped_defs = True
[mypy-synapse.util.ratelimitutils]
disallow_untyped_defs = True
[mypy-synapse.util.retryutils]
disallow_untyped_defs = True
[mypy-synapse.util.rlimit]
disallow_untyped_defs = True
[mypy-synapse.util.stringutils]
disallow_untyped_defs = True
[mypy-synapse.util.templates]
disallow_untyped_defs = True
[mypy-synapse.util.threepids]
disallow_untyped_defs = True
[mypy-synapse.util.wheel_timer]
disallow_untyped_defs = True
[mypy-pymacaroons.*]
ignore_missing_imports = True


@ -0,0 +1,64 @@
#!/bin/bash -e
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script is meant to be used inside a Docker container to run the `dch` incantations
# needed to release Synapse. This is useful on systems like macOS where such scripts are
# not easily accessible.
#
# Running it (assuming the current working directory is the root of the Synapse checkout):
# docker run --rm -v $PWD:/synapse ubuntu:latest /synapse/scripts-dev/docker_update_debian_changelog.sh VERSION
#
# The image can be replaced by any other Debian-based image (as long as the `devscripts`
# package exists in the default repository).
# `VERSION` is the version of Synapse being released without the leading "v" (e.g. 1.42.0).
# Check if a version was provided.
if [ "$#" -ne 1 ]; then
echo "Usage: update_debian_changelog.sh VERSION"
echo "VERSION is the version of Synapse being released in the form 1.42.0 (without the leading \"v\")"
exit 1
fi
# Check that apt-get is available on the system.
if ! which apt-get > /dev/null 2>&1; then
echo "\"apt-get\" isn't available on this system. This script needs to be run in a Docker container using a Debian-based image."
exit 1
fi
# Check if devscripts is available in the default repos for this distro.
# Update the apt package list cache.
# We need to do this before we can search the apt cache or install devscripts.
apt-get update || exit 1
if ! apt-cache search devscripts | grep -E "^devscripts \-" > /dev/null; then
echo "The package \"devscripts\" needs to exist in the default repositories for this distribution."
exit 1
fi
# We set -x here rather than in the shebang so that if we need to exit early because no
# version was provided, the message doesn't get drowned in useless output.
set -x
# Make the root of the Synapse checkout the current working directory.
cd /synapse
# Install devscripts (which provides dch). We need to make the Debian frontend
# noninteractive because installing devscripts otherwise asks for the machine's location.
DEBIAN_FRONTEND=noninteractive apt-get install -y devscripts
# Update the Debian changelog.
ver=${1}
dch -M -v $(sed -Ee 's/(rc|a|b|c)/~\1/' <<<$ver) "New synapse release $ver."
dch -M -r -D stable ""


@ -46,6 +46,7 @@ from synapse.storage.databases.main.events_bg_updates import (
from synapse.storage.databases.main.media_repository import (
MediaRepositoryBackgroundUpdateStore,
)
from synapse.storage.databases.main.presence import PresenceBackgroundUpdateStore
from synapse.storage.databases.main.pusher import PusherWorkerStore
from synapse.storage.databases.main.registration import (
RegistrationBackgroundUpdateStore,
@ -179,6 +180,7 @@ class Store(
EndToEndKeyBackgroundStore,
StatsStore,
PusherWorkerStore,
PresenceBackgroundUpdateStore,
):
def execute(self, f, *args, **kwargs):
return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)


@ -1,5 +1,6 @@
from .sorteddict import SortedDict, SortedItemsView, SortedKeysView, SortedValuesView
from .sortedlist import SortedKeyList, SortedList, SortedListWithKey
from .sortedset import SortedSet
__all__ = [
"SortedDict",
@ -9,4 +10,5 @@ __all__ = [
"SortedKeyList",
"SortedList",
"SortedListWithKey",
"SortedSet",
]


@ -0,0 +1,118 @@
# stub for SortedSet. This is a lightly edited copy of
# https://github.com/grantjenks/python-sortedcontainers/blob/d0a225d7fd0fb4c54532b8798af3cbeebf97e2d5/sortedcontainers/sortedset.pyi
# (from https://github.com/grantjenks/python-sortedcontainers/pull/107)
from typing import (
AbstractSet,
Any,
Callable,
Generic,
Hashable,
Iterable,
Iterator,
List,
MutableSet,
Optional,
Sequence,
Set,
Tuple,
Type,
TypeVar,
Union,
overload,
)
# --- Global
_T = TypeVar("_T", bound=Hashable)
_S = TypeVar("_S", bound=Hashable)
_SS = TypeVar("_SS", bound=SortedSet)
_Key = Callable[[_T], Any]
class SortedSet(MutableSet[_T], Sequence[_T]):
def __init__(
self,
iterable: Optional[Iterable[_T]] = ...,
key: Optional[_Key[_T]] = ...,
) -> None: ...
@classmethod
def _fromset(
cls, values: Set[_T], key: Optional[_Key[_T]] = ...
) -> SortedSet[_T]: ...
@property
def key(self) -> Optional[_Key[_T]]: ...
def __contains__(self, value: Any) -> bool: ...
@overload
def __getitem__(self, index: int) -> _T: ...
@overload
def __getitem__(self, index: slice) -> List[_T]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
def __eq__(self, other: Any) -> bool: ...
def __ne__(self, other: Any) -> bool: ...
def __lt__(self, other: Iterable[_T]) -> bool: ...
def __gt__(self, other: Iterable[_T]) -> bool: ...
def __le__(self, other: Iterable[_T]) -> bool: ...
def __ge__(self, other: Iterable[_T]) -> bool: ...
def __len__(self) -> int: ...
def __iter__(self) -> Iterator[_T]: ...
def __reversed__(self) -> Iterator[_T]: ...
def add(self, value: _T) -> None: ...
def _add(self, value: _T) -> None: ...
def clear(self) -> None: ...
def copy(self: _SS) -> _SS: ...
def __copy__(self: _SS) -> _SS: ...
def count(self, value: _T) -> int: ...
def discard(self, value: _T) -> None: ...
def _discard(self, value: _T) -> None: ...
def pop(self, index: int = ...) -> _T: ...
def remove(self, value: _T) -> None: ...
def difference(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __sub__(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def difference_update(
self, *iterables: Iterable[_S]
) -> SortedSet[Union[_T, _S]]: ...
def __isub__(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def intersection(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __and__(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __rand__(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def intersection_update(
self, *iterables: Iterable[_S]
) -> SortedSet[Union[_T, _S]]: ...
def __iand__(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def symmetric_difference(self, other: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __xor__(self, other: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __rxor__(self, other: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def symmetric_difference_update(
self, other: Iterable[_S]
) -> SortedSet[Union[_T, _S]]: ...
def __ixor__(self, other: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def union(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __or__(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __ror__(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def update(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __ior__(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def _update(self, *iterables: Iterable[_S]) -> SortedSet[Union[_T, _S]]: ...
def __reduce__(
self,
) -> Tuple[Type[SortedSet[_T]], Set[_T], Callable[[_T], Any]]: ...
def __repr__(self) -> str: ...
def _check(self) -> None: ...
def bisect_left(self, value: _T) -> int: ...
def bisect_right(self, value: _T) -> int: ...
def islice(
self,
start: Optional[int] = ...,
stop: Optional[int] = ...,
reverse: bool = ...,
) -> Iterator[_T]: ...
def irange(
self,
minimum: Optional[_T] = ...,
maximum: Optional[_T] = ...,
inclusive: Tuple[bool, bool] = ...,
reverse: bool = ...,
) -> Iterator[_T]: ...
def index(
self, value: _T, start: Optional[int] = ..., stop: Optional[int] = ...
) -> int: ...
def _reset(self, load: int) -> None: ...


@ -73,4 +73,4 @@ class RedisFactory(protocol.ReconnectingClientFactory):
def buildProtocol(self, addr) -> RedisProtocol: ...
class SubscriberFactory(RedisFactory):
def __init__(self): ...
def __init__(self) -> None: ...


@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.42.0"
__version__ = "1.43.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@ -198,6 +198,15 @@ class EventContentFields:
# cf https://github.com/matrix-org/matrix-doc/pull/1772
ROOM_TYPE = "type"
# Whether a room can federate.
FEDERATE = "m.federate"
# The creator of the room, as used in `m.room.create` events.
ROOM_CREATOR = "creator"
# Used in m.room.guest_access events.
GUEST_ACCESS = "guest_access"
# Used on normal messages to indicate they were historically imported after the fact
MSC2716_HISTORICAL = "org.matrix.msc2716.historical"
# For "insertion" events to indicate what the next chunk ID should be in
@ -232,5 +241,11 @@ class HistoryVisibility:
WORLD_READABLE = "world_readable"
class GuestAccess:
CAN_JOIN = "can_join"
# anything that is not "can_join" is considered "forbidden", but for completeness:
FORBIDDEN = "forbidden"
class ReadReceiptEventFields:
MSC2285_HIDDEN = "org.matrix.msc2285.hidden"


@ -46,7 +46,7 @@ class Ratelimiter:
# * How many times an action has occurred since a point in time
# * The point in time
# * The rate_hz of this particular entry. This can vary per request
self.actions: OrderedDict[Hashable, Tuple[float, int, float]] = OrderedDict()
self.actions: OrderedDict[Hashable, Tuple[float, float, float]] = OrderedDict()
async def can_do_action(
self,
@ -56,7 +56,7 @@ class Ratelimiter:
burst_count: Optional[int] = None,
update: bool = True,
n_actions: int = 1,
_time_now_s: Optional[int] = None,
_time_now_s: Optional[float] = None,
) -> Tuple[bool, float]:
"""Can the entity (e.g. user or IP address) perform the action?
@ -160,7 +160,7 @@ class Ratelimiter:
return allowed, time_allowed
def _prune_message_counts(self, time_now_s: int):
def _prune_message_counts(self, time_now_s: float):
"""Remove message count entries that have not exceeded their defined
rate_hz limit
@ -188,7 +188,7 @@ class Ratelimiter:
burst_count: Optional[int] = None,
update: bool = True,
n_actions: int = 1,
_time_now_s: Optional[int] = None,
_time_now_s: Optional[float] = None,
):
"""Checks if an action can be performed. If not, raises a LimitExceededError


@ -324,7 +324,7 @@ MSC3244_CAPABILITIES = {
),
RoomVersionCapability(
"restricted",
RoomVersions.V8,
RoomVersions.V9,
lambda room_version: room_version.msc3083_join_rules,
),
)


@ -41,11 +41,11 @@ class ConsentURIBuilder:
"""
if hs_config.form_secret is None:
raise ConfigError("form_secret not set in config")
if hs_config.public_baseurl is None:
if hs_config.server.public_baseurl is None:
raise ConfigError("public_baseurl not set in config")
self._hmac_secret = hs_config.form_secret.encode("utf-8")
self._public_baseurl = hs_config.public_baseurl
self._public_baseurl = hs_config.server.public_baseurl
def build_user_consent_uri(self, user_id):
"""Build a URI which we can give to the user to do their privacy


@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import atexit
import gc
import logging
import os
@ -36,6 +37,7 @@ from synapse.api.constants import MAX_PDU_SIZE
from synapse.app import check_bind_error
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ManholeConfig
from synapse.crypto import context_factory
from synapse.events.presence_router import load_legacy_presence_router
from synapse.events.spamcheck import load_legacy_spam_checkers
@ -80,7 +82,7 @@ def start_worker_reactor(appname, config, run_command=reactor.run):
run_command (Callable[]): callable that actually runs the reactor
"""
logger = logging.getLogger(config.worker_app)
logger = logging.getLogger(config.worker.worker_app)
start_reactor(
appname,
@ -229,7 +231,12 @@ def listen_metrics(bind_addresses, port):
start_http_server(port, addr=host, registry=RegistryProxy)
def listen_manhole(bind_addresses: Iterable[str], port: int, manhole_globals: dict):
def listen_manhole(
bind_addresses: Iterable[str],
port: int,
manhole_settings: ManholeConfig,
manhole_globals: dict,
):
# twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
# warning. It's fixed by https://github.com/twisted/twisted/pull/1522), so
# suppress the warning for now.
@ -244,7 +251,7 @@ def listen_manhole(bind_addresses: Iterable[str], port: int, manhole_globals: di
listen_tcp(
bind_addresses,
port,
manhole(username="matrix", password="rabbithole", globals=manhole_globals),
manhole(settings=manhole_settings, globals=manhole_globals),
)
@ -391,7 +398,7 @@ async def start(hs: "HomeServer"):
# If background tasks are running on the main process, start collecting the
# phone home stats.
if hs.config.run_background_tasks:
if hs.config.worker.run_background_tasks:
start_phone_stats_home(hs)
# We now freeze all allocated objects in the hopes that (almost)
@ -403,6 +410,12 @@ async def start(hs: "HomeServer"):
gc.collect()
gc.freeze()
# Speed up shutdowns by freezing all allocated objects. This moves everything
# into the permanent generation and excludes them from the final GC.
# Unfortunately this only works on Python 3.7 or later.
if platform.python_implementation() == "CPython" and sys.version_info >= (3, 7):
atexit.register(gc.freeze)
def setup_sentry(hs):
"""Enable sentry integration, if enabled in configuration
@ -420,9 +433,13 @@ def setup_sentry(hs):
# We set some default tags that give some context to this instance
with sentry_sdk.configure_scope() as scope:
scope.set_tag("matrix_server_name", hs.config.server_name)
scope.set_tag("matrix_server_name", hs.config.server.server_name)
app = hs.config.worker_app if hs.config.worker_app else "synapse.app.homeserver"
app = (
hs.config.worker.worker_app
if hs.config.worker.worker_app
else "synapse.app.homeserver"
)
name = hs.get_instance_name()
scope.set_tag("worker_app", app)
scope.set_tag("worker_name", name)


@ -178,12 +178,12 @@ def start(config_options):
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
if config.worker_app is not None:
assert config.worker_app == "synapse.app.admin_cmd"
if config.worker.worker_app is not None:
assert config.worker.worker_app == "synapse.app.admin_cmd"
# Update the config with some basic overrides so that we don't have to specify
# a full worker config.
config.worker_app = "synapse.app.admin_cmd"
config.worker.worker_app = "synapse.app.admin_cmd"
if (
not config.worker_daemonize
@ -196,7 +196,7 @@ def start(config_options):
# Explicitly disable background processes
config.update_user_directory = False
config.run_background_tasks = False
config.worker.run_background_tasks = False
config.start_pushers = False
config.pusher_shard_config.instances = []
config.send_federation = False
@ -205,7 +205,7 @@ def start(config_options):
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
ss = AdminCmdServer(
config.server_name,
config.server.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)


@ -69,39 +69,34 @@ from synapse.rest.client import (
account_data,
events,
groups,
initial_sync,
login,
presence,
profile,
push_rule,
read_marker,
receipts,
room,
room_keys,
sendtodevice,
sync,
tags,
user_directory,
versions,
voip,
)
from synapse.rest.client._base import client_patterns
from synapse.rest.client.account import ThreepidRestServlet
from synapse.rest.client.account_data import AccountDataServlet, RoomAccountDataServlet
from synapse.rest.client.devices import DevicesRestServlet
from synapse.rest.client.initial_sync import InitialSyncRestServlet
from synapse.rest.client.keys import (
KeyChangesServlet,
KeyQueryServlet,
OneTimeKeyServlet,
)
from synapse.rest.client.profile import (
ProfileAvatarURLRestServlet,
ProfileDisplaynameRestServlet,
ProfileRestServlet,
)
from synapse.rest.client.push_rule import PushRuleRestServlet
from synapse.rest.client.register import (
RegisterRestServlet,
RegistrationTokenValidityRestServlet,
)
from synapse.rest.client.sendtodevice import SendToDeviceRestServlet
from synapse.rest.client.versions import VersionsRestServlet
from synapse.rest.client.voip import VoipRestServlet
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
@ -288,32 +283,31 @@ class GenericWorkerServer(HomeServer):
login.register_servlets(self, resource)
ThreepidRestServlet(self).register(resource)
DevicesRestServlet(self).register(resource)
KeyQueryServlet(self).register(resource)
OneTimeKeyServlet(self).register(resource)
KeyChangesServlet(self).register(resource)
VoipRestServlet(self).register(resource)
PushRuleRestServlet(self).register(resource)
VersionsRestServlet(self).register(resource)
ProfileAvatarURLRestServlet(self).register(resource)
ProfileDisplaynameRestServlet(self).register(resource)
ProfileRestServlet(self).register(resource)
# Read-only
KeyUploadServlet(self).register(resource)
AccountDataServlet(self).register(resource)
RoomAccountDataServlet(self).register(resource)
KeyQueryServlet(self).register(resource)
KeyChangesServlet(self).register(resource)
OneTimeKeyServlet(self).register(resource)
voip.register_servlets(self, resource)
push_rule.register_servlets(self, resource)
versions.register_servlets(self, resource)
profile.register_servlets(self, resource)
sync.register_servlets(self, resource)
events.register_servlets(self, resource)
room.register_servlets(self, resource, True)
room.register_servlets(self, resource, is_worker=True)
room.register_deprecated_servlets(self, resource)
InitialSyncRestServlet(self).register(resource)
initial_sync.register_servlets(self, resource)
room_keys.register_servlets(self, resource)
tags.register_servlets(self, resource)
account_data.register_servlets(self, resource)
receipts.register_servlets(self, resource)
read_marker.register_servlets(self, resource)
SendToDeviceRestServlet(self).register(resource)
sendtodevice.register_servlets(self, resource)
user_directory.register_servlets(self, resource)
@ -395,7 +389,10 @@ class GenericWorkerServer(HomeServer):
self._listen_http(listener)
elif listener.type == "manhole":
_base.listen_manhole(
listener.bind_addresses, listener.port, manhole_globals={"hs": self}
listener.bind_addresses,
listener.port,
manhole_settings=self.config.server.manhole_settings,
manhole_globals={"hs": self},
)
elif listener.type == "metrics":
if not self.config.enable_metrics:
@ -419,7 +416,7 @@ def start(config_options):
sys.exit(1)
# For backwards compatibility, allow any of the old app names.
assert config.worker_app in (
assert config.worker.worker_app in (
"synapse.app.appservice",
"synapse.app.client_reader",
"synapse.app.event_creator",
@ -433,7 +430,7 @@ def start(config_options):
"synapse.app.user_dir",
)
if config.worker_app == "synapse.app.appservice":
if config.worker.worker_app == "synapse.app.appservice":
if config.appservice.notify_appservices:
sys.stderr.write(
"\nThe appservices must be disabled in the main synapse process"
@ -449,7 +446,7 @@ def start(config_options):
# For other worker types we force this to off.
config.appservice.notify_appservices = False
if config.worker_app == "synapse.app.user_dir":
if config.worker.worker_app == "synapse.app.user_dir":
if config.server.update_user_directory:
sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process"
@ -472,7 +469,7 @@ def start(config_options):
synapse.metrics.MIN_TIME_BETWEEN_GCS = config.server.gc_seconds
hs = GenericWorkerServer(
config.server_name,
config.server.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)


@ -291,7 +291,10 @@ class SynapseHomeServer(HomeServer):
)
elif listener.type == "manhole":
_base.listen_manhole(
listener.bind_addresses, listener.port, manhole_globals={"hs": self}
listener.bind_addresses,
listener.port,
manhole_settings=self.config.server.manhole_settings,
manhole_globals={"hs": self},
)
elif listener.type == "replication":
services = listen_tcp(
@ -347,7 +350,7 @@ def setup(config_options):
synapse.metrics.MIN_TIME_BETWEEN_GCS = config.server.gc_seconds
hs = SynapseHomeServer(
config.server_name,
config.server.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)


@ -73,7 +73,7 @@ async def phone_stats_home(hs, stats, stats_process=_stats_process):
store = hs.get_datastore()
stats["homeserver"] = hs.config.server_name
stats["homeserver"] = hs.config.server.server_name
stats["server_context"] = hs.config.server_context
stats["timestamp"] = now
stats["uptime_seconds"] = uptime


@ -88,7 +88,7 @@ class AuthConfig(Config):
#
#require_lowercase: true
# Whether a password must contain at least one lowercase letter.
# Whether a password must contain at least one uppercase letter.
# Defaults to 'false'.
#
#require_uppercase: true


@ -24,9 +24,6 @@ class ExperimentalConfig(Config):
def read_config(self, config: JsonDict, **kwargs):
experimental = config.get("experimental_features") or {}
# MSC2858 (multiple SSO identity providers)
self.msc2858_enabled: bool = experimental.get("msc2858_enabled", False)
# MSC3026 (busy presence state)
self.msc3026_enabled: bool = experimental.get("msc3026_enabled", False)


@ -31,6 +31,7 @@ from .key import KeyConfig
from .logger import LoggingConfig
from .metrics import MetricsConfig
from .modules import ModulesConfig
from .oembed import OembedConfig
from .oidc import OIDCConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .push import PushConfig
@ -67,6 +68,7 @@ class HomeServerConfig(RootConfig):
LoggingConfig,
RatelimitConfig,
ContentRepositoryConfig,
OembedConfig,
CaptchaConfig,
VoipConfig,
RegistrationConfig,


@ -223,7 +223,7 @@ def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) ->
# writes.
log_context_filter = LoggingContextFilter()
log_metadata_filter = MetadataFilter({"server_name": config.server_name})
log_metadata_filter = MetadataFilter({"server_name": config.server.server_name})
old_factory = logging.getLogRecordFactory()
def factory(*args, **kwargs):
@ -335,5 +335,5 @@ def setup_logging(
# Log immediately so we can grep backwards.
logging.warning("***** STARTING SERVER *****")
logging.warning("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.info("Server hostname: %s", config.server_name)
logging.info("Server hostname: %s", config.server.server_name)
logging.info("Instance name: %s", hs.get_instance_name())

synapse/config/oembed.py (new file)

@ -0,0 +1,196 @@
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import re
from typing import Any, Dict, Iterable, List, Optional, Pattern
from urllib import parse as urlparse
import attr
import pkg_resources
from synapse.types import JsonDict
from ._base import Config, ConfigError
from ._util import validate_config
@attr.s(slots=True, frozen=True, auto_attribs=True)
class OEmbedEndpointConfig:
# The API endpoint to fetch.
api_endpoint: str
# The patterns to match.
url_patterns: List[Pattern]
# The supported formats.
formats: Optional[List[str]]
class OembedConfig(Config):
"""oEmbed Configuration"""
section = "oembed"
def read_config(self, config, **kwargs):
oembed_config: Dict[str, Any] = config.get("oembed") or {}
# A list of patterns which will be used.
self.oembed_patterns: List[OEmbedEndpointConfig] = list(
self._parse_and_validate_providers(oembed_config)
)
def _parse_and_validate_providers(
self, oembed_config: dict
) -> Iterable[OEmbedEndpointConfig]:
"""Extract and parse the oEmbed providers from the given JSON file.
Returns a generator which yields the OidcProviderConfig objects
"""
# Whether to use the packaged providers.json file.
if not oembed_config.get("disable_default_providers") or False:
providers = json.load(
pkg_resources.resource_stream("synapse", "res/providers.json")
)
yield from self._parse_and_validate_provider(
providers, config_path=("oembed",)
)
# The JSON files which include additional provider information.
for i, file in enumerate(oembed_config.get("additional_providers") or []):
# TODO Error checking.
with open(file) as f:
providers = json.load(f)
yield from self._parse_and_validate_provider(
providers,
config_path=(
"oembed",
"additional_providers",
f"<item {i}>",
),
)
def _parse_and_validate_provider(
self, providers: List[JsonDict], config_path: Iterable[str]
) -> Iterable[OEmbedEndpointConfig]:
# Ensure it is the proper form.
validate_config(
_OEMBED_PROVIDER_SCHEMA,
providers,
config_path=config_path,
)
# Parse it and yield each result.
for provider in providers:
# Each provider might have multiple API endpoints, each of which
# might have multiple patterns to match.
for endpoint in provider["endpoints"]:
api_endpoint = endpoint["url"]
# The API endpoint must be an HTTP(S) URL.
results = urlparse.urlparse(api_endpoint)
if results.scheme not in {"http", "https"}:
raise ConfigError(
f"Unsupported oEmbed scheme ({results.scheme}) for endpoint {api_endpoint}",
config_path,
)
patterns = [
self._glob_to_pattern(glob, config_path)
for glob in endpoint["schemes"]
]
yield OEmbedEndpointConfig(
api_endpoint, patterns, endpoint.get("formats")
)
def _glob_to_pattern(self, glob: str, config_path: Iterable[str]) -> Pattern:
"""
Convert the glob into a sane regular expression to match against. The
rules followed will be slightly different for the domain portion vs.
the rest.
1. The scheme must be one of HTTP / HTTPS (and have no globs).
2. The domain can have globs, but we limit it to characters that can
reasonably be a domain part.
TODO: This does not attempt to handle Unicode domain names.
TODO: The domain should not allow wildcard TLDs.
3. Other parts allow a glob to be any one, or more, characters.
"""
results = urlparse.urlparse(glob)
# The scheme must be HTTP(S) (and cannot contain wildcards).
if results.scheme not in {"http", "https"}:
raise ConfigError(
f"Unsupported oEmbed scheme ({results.scheme}) for pattern: {glob}",
config_path,
)
pattern = urlparse.urlunparse(
[
results.scheme,
re.escape(results.netloc).replace("\\*", "[a-zA-Z0-9_-]+"),
]
+ [re.escape(part).replace("\\*", ".+") for part in results[2:]]
)
return re.compile(pattern)
def generate_config_section(self, **kwargs):
return """\
# oEmbed allows for easier embedding of content from a website. It can be
# used for generating URL previews for services which support it.
#
oembed:
# A default list of oEmbed providers is included with Synapse.
#
# Uncomment the following to disable using these default oEmbed URLs.
# Defaults to 'false'.
#
#disable_default_providers: true
# Additional files with oEmbed configuration (each should be in the
# form of providers.json).
#
# By default, this list is empty (so only the default providers.json
# is used).
#
#additional_providers:
# - oembed/my_providers.json
"""
_OEMBED_PROVIDER_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"provider_name": {"type": "string"},
"provider_url": {"type": "string"},
"endpoints": {
"type": "array",
"items": {
"type": "object",
"properties": {
"schemes": {
"type": "array",
"items": {"type": "string"},
},
"url": {"type": "string"},
"formats": {"type": "array", "items": {"type": "string"}},
"discovery": {"type": "boolean"},
},
"required": ["schemes", "url"],
},
},
},
"required": ["provider_name", "provider_url", "endpoints"],
},
}


@ -277,12 +277,6 @@ OIDC_PROVIDER_CONFIG_SCHEMA = {
"maxLength": 255,
"pattern": "^[a-z][a-z0-9_.-]*$",
},
"idp_unstable_brand": {
"type": "string",
"minLength": 1,
"maxLength": 255,
"pattern": "^[a-z][a-z0-9_.-]*$",
},
"discover": {"type": "boolean"},
"issuer": {"type": "string"},
"client_id": {"type": "string"},
@ -483,7 +477,6 @@ def _parse_oidc_config_dict(
idp_name=oidc_config.get("idp_name", "OIDC"),
idp_icon=idp_icon,
idp_brand=oidc_config.get("idp_brand"),
unstable_idp_brand=oidc_config.get("unstable_idp_brand"),
discover=oidc_config.get("discover", True),
issuer=oidc_config["issuer"],
client_id=oidc_config["client_id"],
@ -531,9 +524,6 @@ class OidcProviderConfig:
# Optional brand identifier for this IdP.
idp_brand = attr.ib(type=Optional[str])
# Optional brand identifier for the unstable API (see MSC2858).
unstable_idp_brand = attr.ib(type=Optional[str])
# whether the OIDC discovery mechanism is used to discover endpoints
discover = attr.ib(type=bool)


@ -14,6 +14,8 @@
from typing import Dict, Optional
import attr
from ._base import Config
@ -29,18 +31,13 @@ class RateLimitConfig:
self.burst_count = int(config.get("burst_count", defaults["burst_count"]))
@attr.s(auto_attribs=True)
class FederationRateLimitConfig:
_items_and_default = {
"window_size": 1000,
"sleep_limit": 10,
"sleep_delay": 500,
"reject_limit": 50,
"concurrent": 3,
}
def __init__(self, **kwargs):
for i in self._items_and_default.keys():
setattr(self, i, kwargs.get(i) or self._items_and_default[i])
window_size: int = 1000
sleep_limit: int = 10
sleep_delay: int = 500
reject_limit: int = 50
concurrent: int = 3
class RatelimitConfig(Config):
@ -69,11 +66,15 @@ class RatelimitConfig(Config):
else:
self.rc_federation = FederationRateLimitConfig(
**{
"window_size": config.get("federation_rc_window_size"),
"sleep_limit": config.get("federation_rc_sleep_limit"),
"sleep_delay": config.get("federation_rc_sleep_delay"),
"reject_limit": config.get("federation_rc_reject_limit"),
"concurrent": config.get("federation_rc_concurrent"),
k: v
for k, v in {
"window_size": config.get("federation_rc_window_size"),
"sleep_limit": config.get("federation_rc_sleep_limit"),
"sleep_delay": config.get("federation_rc_sleep_delay"),
"reject_limit": config.get("federation_rc_reject_limit"),
"concurrent": config.get("federation_rc_concurrent"),
}.items()
if v is not None
}
)


@ -25,11 +25,14 @@ import attr
import yaml
from netaddr import AddrFormatError, IPNetwork, IPSet
from twisted.conch.ssh.keys import Key
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.util.module_loader import load_module
from synapse.util.stringutils import parse_and_validate_server_name
from ._base import Config, ConfigError
from ._util import validate_config
logger = logging.getLogger(__name__)
@ -216,6 +219,16 @@ class ListenerConfig:
http_options = attr.ib(type=Optional[HttpListenerConfig], default=None)
@attr.s(frozen=True)
class ManholeConfig:
"""Object describing the configuration of the manhole"""
username = attr.ib(type=str, validator=attr.validators.instance_of(str))
password = attr.ib(type=str, validator=attr.validators.instance_of(str))
priv_key = attr.ib(type=Optional[Key])
pub_key = attr.ib(type=Optional[Key])
class ServerConfig(Config):
section = "server"
@ -649,6 +662,41 @@ class ServerConfig(Config):
)
)
manhole_settings = config.get("manhole_settings") or {}
validate_config(
_MANHOLE_SETTINGS_SCHEMA, manhole_settings, ("manhole_settings",)
)
manhole_username = manhole_settings.get("username", "matrix")
manhole_password = manhole_settings.get("password", "rabbithole")
manhole_priv_key_path = manhole_settings.get("ssh_priv_key_path")
manhole_pub_key_path = manhole_settings.get("ssh_pub_key_path")
manhole_priv_key = None
if manhole_priv_key_path is not None:
try:
manhole_priv_key = Key.fromFile(manhole_priv_key_path)
except Exception as e:
raise ConfigError(
f"Failed to read manhole private key file {manhole_priv_key_path}"
) from e
manhole_pub_key = None
if manhole_pub_key_path is not None:
try:
manhole_pub_key = Key.fromFile(manhole_pub_key_path)
except Exception as e:
raise ConfigError(
f"Failed to read manhole public key file {manhole_pub_key_path}"
) from e
self.manhole_settings = ManholeConfig(
username=manhole_username,
password=manhole_password,
priv_key=manhole_priv_key,
pub_key=manhole_pub_key,
)
metrics_port = config.get("metrics_port")
if metrics_port:
logger.warning(METRICS_PORT_WARNING)
@ -715,7 +763,7 @@ class ServerConfig(Config):
if not isinstance(templates_config, dict):
raise ConfigError("The 'templates' section must be a dictionary")
self.custom_template_directory = templates_config.get(
self.custom_template_directory: Optional[str] = templates_config.get(
"custom_template_directory"
)
if self.custom_template_directory is not None and not isinstance(
@ -727,7 +775,13 @@ class ServerConfig(Config):
return any(listener.tls for listener in self.listeners)
def generate_config_section(
self, server_name, data_dir_path, open_private_ports, listeners, **kwargs
self,
server_name,
data_dir_path,
open_private_ports,
listeners,
config_dir_path,
**kwargs,
):
ip_range_blacklist = "\n".join(
" # - '%s'" % ip for ip in DEFAULT_IP_RANGE_BLACKLIST
@ -1068,6 +1122,24 @@ class ServerConfig(Config):
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
# Connection settings for the manhole
#
manhole_settings:
# The username for the manhole. This defaults to 'matrix'.
#
#username: manhole
# The password for the manhole. This defaults to 'rabbithole'.
#
#password: mypassword
# The private and public SSH key pair used to encrypt the manhole traffic.
# If these are left unset, then hardcoded and non-secret keys are used,
# which could allow traffic to be intercepted if sent over a public network.
#
#ssh_priv_key_path: %(config_dir_path)s/id_rsa
#ssh_pub_key_path: %(config_dir_path)s/id_rsa.pub
# Forward extremities can build up in a room due to networking delays between
# homeservers. Once this happens in a large room, calculation of the state of
# that room can become quite expensive. To mitigate this, once the number of
@ -1436,3 +1508,14 @@ def _warn_if_webclient_configured(listeners: Iterable[ListenerConfig]) -> None:
if name == "webclient":
logger.warning(NO_MORE_WEB_CLIENT_WARNING)
return
_MANHOLE_SETTINGS_SCHEMA = {
"type": "object",
"properties": {
"username": {"type": "string"},
"password": {"type": "string"},
"ssh_priv_key_path": {"type": "string"},
"ssh_pub_key_path": {"type": "string"},
},
}


@ -21,7 +21,13 @@ from signedjson.key import decode_verify_key_bytes
from signedjson.sign import SignatureVerifyException, verify_signed_json
from unpaddedbase64 import decode_base64
from synapse.api.constants import MAX_PDU_SIZE, EventTypes, JoinRules, Membership
from synapse.api.constants import (
MAX_PDU_SIZE,
EventContentFields,
EventTypes,
JoinRules,
Membership,
)
from synapse.api.errors import AuthError, EventSizeError, SynapseError
from synapse.api.room_versions import (
KNOWN_ROOM_VERSIONS,
@ -216,21 +222,18 @@ def check(
def _check_size_limits(event: EventBase) -> None:
def too_big(field):
raise EventSizeError("%s too large" % (field,))
if len(event.user_id) > 255:
too_big("user_id")
raise EventSizeError("'user_id' too large")
if len(event.room_id) > 255:
too_big("room_id")
raise EventSizeError("'room_id' too large")
if event.is_state() and len(event.state_key) > 255:
too_big("state_key")
raise EventSizeError("'state_key' too large")
if len(event.type) > 255:
too_big("type")
raise EventSizeError("'type' too large")
if len(event.event_id) > 255:
too_big("event_id")
raise EventSizeError("'event_id' too large")
if len(encode_canonical_json(event.get_pdu_json())) > MAX_PDU_SIZE:
too_big("event")
raise EventSizeError("event too large")
def _can_federate(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
@ -239,7 +242,7 @@ def _can_federate(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
if not creation_event:
return False
return creation_event.content.get("m.federate", True) is True
return creation_event.content.get(EventContentFields.FEDERATE, True) is True
def _is_membership_change_allowed(


@ -94,7 +94,7 @@ class EventValidator:
self._validate_retention(event)
if event.type == EventTypes.ServerACL:
if not server_matches_acl_event(config.server_name, event):
if not server_matches_acl_event(config.server.server_name, event):
raise SynapseError(
400, "Can't create an ACL event that denies the local server"
)


@ -22,6 +22,7 @@ from prometheus_client import Counter
from typing_extensions import Literal
from twisted.internet import defer
from twisted.internet.interfaces import IDelayedCall
import synapse.metrics
from synapse.api.presence import UserPresenceState
@ -280,11 +281,14 @@ class FederationSender(AbstractFederationSender):
self._queues_awaiting_rr_flush_by_room: Dict[str, Set[PerDestinationQueue]] = {}
self._rr_txn_interval_per_room_ms = (
1000.0 / hs.config.federation_rr_transactions_per_room_per_second
1000.0
/ hs.config.ratelimiting.federation_rr_transactions_per_room_per_second
)
# wake up destinations that have outstanding PDUs to be caught up
self._catchup_after_startup_timer = self.clock.call_later(
self._catchup_after_startup_timer: Optional[
IDelayedCall
] = self.clock.call_later(
CATCH_UP_STARTUP_DELAY_SEC,
run_as_background_process,
"wake_destinations_needing_catchup",
@ -406,7 +410,7 @@ class FederationSender(AbstractFederationSender):
now = self.clock.time_msec()
ts = await self.store.get_received_ts(event.event_id)
assert ts is not None
synapse.metrics.event_processing_lag_by_event.labels(
"federation_sender"
).observe((now - ts) / 1000)
@ -435,6 +439,7 @@ class FederationSender(AbstractFederationSender):
if events:
now = self.clock.time_msec()
ts = await self.store.get_received_ts(events[-1].event_id)
assert ts is not None
synapse.metrics.event_processing_lag.labels(
"federation_sender"


@ -144,7 +144,7 @@ class GroupAttestionRenewer:
self.is_mine_id = hs.is_mine_id
self.attestations = hs.get_groups_attestation_signing()
if not hs.config.worker_app:
if not hs.config.worker.worker_app:
self._renew_attestations_loop = self.clock.looping_call(
self._start_renew_attestations, 30 * 60 * 1000
)


@ -15,10 +15,7 @@
import logging
from typing import TYPE_CHECKING, Optional
import synapse.types
from synapse.api.constants import EventTypes, Membership
from synapse.api.ratelimiting import Ratelimiter
from synapse.types import UserID
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -48,16 +45,16 @@ class BaseHandler:
self.request_ratelimiter = Ratelimiter(
store=self.store, clock=self.clock, rate_hz=0, burst_count=0
)
self._rc_message = self.hs.config.rc_message
self._rc_message = self.hs.config.ratelimiting.rc_message
# Check whether ratelimiting room admin message redaction is enabled
# by the presence of rate limits in the config
if self.hs.config.rc_admin_redaction:
if self.hs.config.ratelimiting.rc_admin_redaction:
self.admin_redaction_ratelimiter: Optional[Ratelimiter] = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=self.hs.config.rc_admin_redaction.per_second,
burst_count=self.hs.config.rc_admin_redaction.burst_count,
rate_hz=self.hs.config.ratelimiting.rc_admin_redaction.per_second,
burst_count=self.hs.config.ratelimiting.rc_admin_redaction.burst_count,
)
else:
self.admin_redaction_ratelimiter = None
@ -115,68 +112,3 @@ class BaseHandler:
burst_count=burst_count,
update=update,
)
async def maybe_kick_guest_users(self, event, context=None):
# Technically this function invalidates current_state by changing it.
# Hopefully this isn't that important to the caller.
if event.type == EventTypes.GuestAccess:
guest_access = event.content.get("guest_access", "forbidden")
if guest_access != "can_join":
if context:
current_state_ids = await context.get_current_state_ids()
current_state_dict = await self.store.get_events(
list(current_state_ids.values())
)
current_state = list(current_state_dict.values())
else:
current_state_map = await self.state_handler.get_current_state(
event.room_id
)
current_state = list(current_state_map.values())
logger.info("maybe_kick_guest_users %r", current_state)
await self.kick_guest_users(current_state)
async def kick_guest_users(self, current_state):
for member_event in current_state:
try:
if member_event.type != EventTypes.Member:
continue
target_user = UserID.from_string(member_event.state_key)
if not self.hs.is_mine(target_user):
continue
if member_event.content["membership"] not in {
Membership.JOIN,
Membership.INVITE,
}:
continue
if (
"kind" not in member_event.content
or member_event.content["kind"] != "guest"
):
continue
# We make the user choose to leave, rather than have the
# event-sender kick them. This is partially because we don't
# need to worry about power levels, and partially because guest
# users are a concept which doesn't hugely work over federation,
# and having homeservers have their own users leave keeps more
# of that decision-making and control local to the guest-having
# homeserver.
requester = synapse.types.create_requester(
target_user, is_guest=True, authenticated_entity=self.server_name
)
handler = self.hs.get_room_member_handler()
await handler.update_membership(
requester,
target_user,
member_event.room_id,
"leave",
ratelimit=False,
require_consent=False,
)
except Exception as e:
logger.exception("Error kicking guest user: %s" % (e,))


@ -78,7 +78,7 @@ class AccountValidityHandler:
)
# Check the renewal emails to send and send them every 30min.
if hs.config.run_background_tasks:
if hs.config.worker.run_background_tasks:
self.clock.looping_call(self._send_renewal_emails, 30 * 60 * 1000)
self._is_user_expired_callbacks: List[IS_USER_EXPIRED_CALLBACK] = []
@ -249,7 +249,7 @@ class AccountValidityHandler:
renewal_token = await self._get_renewal_token(user_id)
url = "%s_matrix/client/unstable/account_validity/renew?token=%s" % (
self.hs.config.public_baseurl,
self.hs.config.server.public_baseurl,
renewal_token,
)
@ -398,6 +398,7 @@ class AccountValidityHandler:
"""
now = self.clock.time_msec()
if expiration_ts is None:
assert self._account_validity_period is not None
expiration_ts = now + self._account_validity_period
await self.store.set_account_validity_for_user(


@ -131,6 +131,8 @@ class ApplicationServicesHandler:
now = self.clock.time_msec()
ts = await self.store.get_received_ts(event.event_id)
assert ts is not None
synapse.metrics.event_processing_lag_by_event.labels(
"appservice_sender"
).observe((now - ts) / 1000)
@ -166,6 +168,7 @@ class ApplicationServicesHandler:
if events:
now = self.clock.time_msec()
ts = await self.store.get_received_ts(events[-1].event_id)
assert ts is not None
synapse.metrics.event_processing_lag.labels(
"appservice_sender"


@ -244,8 +244,8 @@ class AuthHandler(BaseHandler):
self._failed_uia_attempts_ratelimiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
rate_hz=self.hs.config.ratelimiting.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.ratelimiting.rc_login_failed_attempts.burst_count,
)
# The number of seconds to keep a UI auth session active.
@ -255,14 +255,14 @@ class AuthHandler(BaseHandler):
self._failed_login_attempts_ratelimiter = Ratelimiter(
store=self.store,
clock=hs.get_clock(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
rate_hz=self.hs.config.ratelimiting.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.ratelimiting.rc_login_failed_attempts.burst_count,
)
self._clock = self.hs.get_clock()
# Expire old UI auth sessions after a period of time.
if hs.config.run_background_tasks:
if hs.config.worker.run_background_tasks:
self._clock.looping_call(
run_as_background_process,
5 * 60 * 1000,
@ -289,7 +289,7 @@ class AuthHandler(BaseHandler):
hs.config.sso_account_deactivated_template
)
self._server_name = hs.config.server_name
self._server_name = hs.config.server.server_name
# cast to tuple for use with str.startswith
self._whitelisted_sso_clients = tuple(hs.config.sso_client_whitelist)
@ -749,7 +749,7 @@ class AuthHandler(BaseHandler):
"name": self.hs.config.user_consent_policy_name,
"url": "%s_matrix/consent?v=%s"
% (
self.hs.config.public_baseurl,
self.hs.config.server.public_baseurl,
self.hs.config.user_consent_version,
),
},
@ -1799,7 +1799,7 @@ class MacaroonGenerator:
def _generate_base_macaroon(self, user_id: str) -> pymacaroons.Macaroon:
macaroon = pymacaroons.Macaroon(
location=self.hs.config.server_name,
location=self.hs.config.server.server_name,
identifier="key",
key=self.hs.config.macaroon_secret_key,
)


@ -82,7 +82,6 @@ class CasHandler:
# the SsoIdentityProvider protocol type.
self.idp_icon = None
self.idp_brand = None
self.unstable_idp_brand = None
self._sso_handler = hs.get_sso_handler()


@ -46,7 +46,7 @@ class DeactivateAccountHandler(BaseHandler):
# Start the user parter loop so it can resume parting users from rooms where
# it left off (if it has work left to do).
if hs.config.run_background_tasks:
if hs.config.worker.run_background_tasks:
hs.get_reactor().callWhenRunning(self._start_user_parting)
self._account_validity_enabled = (
@ -131,7 +131,7 @@ class DeactivateAccountHandler(BaseHandler):
await self.store.add_user_pending_deactivation(user_id)
# delete from user directory
await self.user_directory_handler.handle_user_deactivated(user_id)
await self.user_directory_handler.handle_local_user_deactivated(user_id)
# Mark the user as erased, if they asked for that
if erase_data:

View file

@ -84,8 +84,8 @@ class DeviceMessageHandler:
self._ratelimiter = Ratelimiter(
store=self.store,
clock=hs.get_clock(),
rate_hz=hs.config.rc_key_requests.per_second,
burst_count=hs.config.rc_key_requests.burst_count,
rate_hz=hs.config.ratelimiting.rc_key_requests.per_second,
burst_count=hs.config.ratelimiting.rc_key_requests.burst_count,
)
async def on_direct_to_device_edu(self, origin: str, content: JsonDict) -> None:

View file

@ -57,7 +57,7 @@ class E2eKeysHandler:
federation_registry = hs.get_federation_registry()
self._is_master = hs.config.worker_app is None
self._is_master = hs.config.worker.worker_app is None
if not self._is_master:
self._user_device_resync_client = (
ReplicationUserDevicesResyncRestServlet.make_client(hs)

View file

@ -101,7 +101,7 @@ class FederationHandler(BaseHandler):
hs
)
if hs.config.worker_app:
if hs.config.worker.worker_app:
self._maybe_store_room_on_outlier_membership = (
ReplicationStoreRoomOnOutlierMembershipRestServlet.make_client(hs)
)
@ -507,6 +507,7 @@ class FederationHandler(BaseHandler):
await self.store.upsert_room_on_join(
room_id=room_id,
room_version=room_version_obj,
auth_events=auth_chain,
)
max_stream_id = await self._persist_auth_tree(
@ -1613,7 +1614,7 @@ class FederationHandler(BaseHandler):
Args:
room_id
"""
if self.config.worker_app:
if self.config.worker.worker_app:
await self._clean_room_for_join_client(room_id)
else:
await self.store.clean_room_for_join(room_id)

View file

@ -36,6 +36,7 @@ from synapse import event_auth
from synapse.api.constants import (
EventContentFields,
EventTypes,
GuestAccess,
Membership,
RejectedReason,
RoomEncryptionAlgorithms,
@ -53,7 +54,6 @@ from synapse.event_auth import auth_types_for_event
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.federation.federation_client import InvalidResponseError
from synapse.handlers._base import BaseHandler
from synapse.logging.context import (
make_deferred_yieldable,
nested_logging_context,
@ -116,7 +116,7 @@ class _NewEventInfo:
claimed_auth_event_map: StateMap[EventBase]
class FederationEventHandler(BaseHandler):
class FederationEventHandler:
"""Handles events that originated from federation.
Responsible for handling incoming events and passing them on to the rest
@ -124,30 +124,32 @@ class FederationEventHandler(BaseHandler):
"""
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self._store = hs.get_datastore()
self._storage = hs.get_storage()
self._state_store = self._storage.state
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state_store = self.storage.state
self.state_handler = hs.get_state_handler()
self.event_creation_handler = hs.get_event_creation_handler()
self._state_handler = hs.get_state_handler()
self._event_creation_handler = hs.get_event_creation_handler()
self._event_auth_handler = hs.get_event_auth_handler()
self._message_handler = hs.get_message_handler()
self.action_generator = hs.get_action_generator()
self._action_generator = hs.get_action_generator()
self._state_resolution_handler = hs.get_state_resolution_handler()
# avoid a circular dependency by deferring execution here
self._get_room_member_handler = hs.get_room_member_handler
self.federation_client = hs.get_federation_client()
self.third_party_event_rules = hs.get_third_party_event_rules()
self._federation_client = hs.get_federation_client()
self._third_party_event_rules = hs.get_third_party_event_rules()
self._notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id
self._is_mine_id = hs.is_mine_id
self._server_name = hs.hostname
self._instance_name = hs.get_instance_name()
self.config = hs.config
self._config = hs.config
self._ephemeral_messages_enabled = hs.config.server.enable_ephemeral_messages
self._send_events = ReplicationFederationSendEventsRestServlet.make_client(hs)
if hs.config.worker_app:
if hs.config.worker.worker_app:
self._user_device_resync = (
ReplicationUserDevicesResyncRestServlet.make_client(hs)
)
@ -171,11 +173,14 @@ class FederationEventHandler(BaseHandler):
pdu: received PDU
"""
# We should never see any outliers here.
assert not pdu.internal_metadata.outlier
room_id = pdu.room_id
event_id = pdu.event_id
# We reprocess pdus when we have seen them only as outliers
existing = await self.store.get_event(
existing = await self._store.get_event(
event_id, allow_none=True, allow_rejected=True
)
@ -221,7 +226,7 @@ class FederationEventHandler(BaseHandler):
# Note that if we were never in the room then we would have already
# dropped the event, since we wouldn't know the room version.
is_in_room = await self._event_auth_handler.check_host_in_room(
room_id, self.server_name
room_id, self._server_name
)
if not is_in_room:
logger.info(
@ -230,77 +235,71 @@ class FederationEventHandler(BaseHandler):
)
return None
# Check that the event passes auth based on the state at the event. This is
# done for events that are to be added to the timeline (non-outliers).
#
# Get missing pdus if necessary:
# - Fetching any missing prev events to fill in gaps in the graph
# - Fetching state if we have a hole in the graph
if not pdu.internal_metadata.is_outlier():
prevs = set(pdu.prev_event_ids())
seen = await self.store.have_events_in_timeline(prevs)
missing_prevs = prevs - seen
# Try to fetch any missing prev events to fill in gaps in the graph
prevs = set(pdu.prev_event_ids())
seen = await self._store.have_events_in_timeline(prevs)
missing_prevs = prevs - seen
if missing_prevs:
# We only backfill backwards to the min depth.
min_depth = await self.get_min_depth_for_context(pdu.room_id)
logger.debug("min_depth: %d", min_depth)
if min_depth is not None and pdu.depth > min_depth:
# If we're missing stuff, ensure we only fetch stuff one
# at a time.
logger.info(
"Acquiring room lock to fetch %d missing prev_events: %s",
len(missing_prevs),
shortstr(missing_prevs),
)
with (await self._room_pdu_linearizer.queue(pdu.room_id)):
logger.info(
"Acquired room lock to fetch %d missing prev_events",
len(missing_prevs),
)
try:
await self._get_missing_events_for_pdu(
origin, pdu, prevs, min_depth
)
except Exception as e:
raise Exception(
"Error fetching missing prev_events for %s: %s"
% (event_id, e)
) from e
# Update the set of things we've seen after trying to
# fetch the missing stuff
seen = await self._store.have_events_in_timeline(prevs)
missing_prevs = prevs - seen
if not missing_prevs:
logger.info("Found all missing prev_events")
if missing_prevs:
# We only backfill backwards to the min depth.
min_depth = await self.get_min_depth_for_context(pdu.room_id)
logger.debug("min_depth: %d", min_depth)
if min_depth is not None and pdu.depth > min_depth:
# If we're missing stuff, ensure we only fetch stuff one
# at a time.
logger.info(
"Acquiring room lock to fetch %d missing prev_events: %s",
len(missing_prevs),
shortstr(missing_prevs),
)
with (await self._room_pdu_linearizer.queue(pdu.room_id)):
logger.info(
"Acquired room lock to fetch %d missing prev_events",
len(missing_prevs),
)
try:
await self._get_missing_events_for_pdu(
origin, pdu, prevs, min_depth
)
except Exception as e:
raise Exception(
"Error fetching missing prev_events for %s: %s"
% (event_id, e)
) from e
# Update the set of things we've seen after trying to
# fetch the missing stuff
seen = await self.store.have_events_in_timeline(prevs)
missing_prevs = prevs - seen
if not missing_prevs:
logger.info("Found all missing prev_events")
if missing_prevs:
# since this event was pushed to us, it is possible for it to
# become the only forward-extremity in the room, and we would then
# trust its state to be the state for the whole room. This is very
# bad. Further, if the event was pushed to us, there is no excuse
# for us not to have all the prev_events. (XXX: apart from
# min_depth?)
#
# We therefore reject any such events.
logger.warning(
"Rejecting: failed to fetch %d prev events: %s",
len(missing_prevs),
shortstr(missing_prevs),
)
raise FederationError(
"ERROR",
403,
(
"Your server isn't divulging details about prev_events "
"referenced in this event."
),
affected=pdu.event_id,
)
# since this event was pushed to us, it is possible for it to
# become the only forward-extremity in the room, and we would then
# trust its state to be the state for the whole room. This is very
# bad. Further, if the event was pushed to us, there is no excuse
# for us not to have all the prev_events. (XXX: apart from
# min_depth?)
#
# We therefore reject any such events.
logger.warning(
"Rejecting: failed to fetch %d prev events: %s",
len(missing_prevs),
shortstr(missing_prevs),
)
raise FederationError(
"ERROR",
403,
(
"Your server isn't divulging details about prev_events "
"referenced in this event."
),
affected=pdu.event_id,
)
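
(The prev-event bookkeeping above boils down to set arithmetic; a toy illustration with hypothetical event IDs:)

prevs = {"$a", "$b", "$c"}      # pdu.prev_event_ids()
seen = {"$a"}                   # have_events_in_timeline(prevs)
missing_prevs = prevs - seen

print(missing_prevs)            # {'$b', '$c'}: try get_missing_events, then
# recompute; if anything is still missing, the event is rejected with the
# 403 FederationError above.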
await self._process_received_pdu(origin, pdu, state=None)
@ -361,7 +360,7 @@ class FederationEventHandler(BaseHandler):
# the room, so we send it on their behalf.
event.internal_metadata.send_on_behalf_of = origin
context = await self.state_handler.compute_event_context(event)
context = await self._state_handler.compute_event_context(event)
context = await self._check_event_auth(origin, event, context)
if context.rejected:
raise SynapseError(
@ -375,7 +374,7 @@ class FederationEventHandler(BaseHandler):
# for knock events, we run the third-party event rules. It's not entirely clear
# why we don't do this for other sorts of membership events.
if event.membership == Membership.KNOCK:
event_allowed, _ = await self.third_party_event_rules.check_event_allowed(
event_allowed, _ = await self._third_party_event_rules.check_event_allowed(
event, context
)
if not event_allowed:
@ -404,7 +403,7 @@ class FederationEventHandler(BaseHandler):
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
prev_member_event = None
if prev_member_event_id:
prev_member_event = await self.store.get_event(prev_member_event_id)
prev_member_event = await self._store.get_event(prev_member_event_id)
# Check if the member should be allowed access via membership in a space.
await self._event_auth_handler.check_restricted_join_rules(
@ -434,10 +433,10 @@ class FederationEventHandler(BaseHandler):
server from invalid events (there is probably no point in trying to
re-fetch invalid events from every other HS in the room.)
"""
if dest == self.server_name:
if dest == self._server_name:
raise SynapseError(400, "Can't backfill from self.")
events = await self.federation_client.backfill(
events = await self._federation_client.backfill(
dest, room_id, limit=limit, extremities=extremities
)
@ -469,12 +468,12 @@ class FederationEventHandler(BaseHandler):
room_id = pdu.room_id
event_id = pdu.event_id
seen = await self.store.have_events_in_timeline(prevs)
seen = await self._store.have_events_in_timeline(prevs)
if not prevs - seen:
return
latest_list = await self.store.get_latest_event_ids_in_room(room_id)
latest_list = await self._store.get_latest_event_ids_in_room(room_id)
# We add the prev events that we have seen to the latest
# list to ensure the remote server doesn't give them to us
@ -536,7 +535,7 @@ class FederationEventHandler(BaseHandler):
# All that said: Let's try increasing the timeout to 60s and see what happens.
try:
missing_events = await self.federation_client.get_missing_events(
missing_events = await self._federation_client.get_missing_events(
origin,
room_id,
earliest_events_ids=list(latest),
@ -609,7 +608,7 @@ class FederationEventHandler(BaseHandler):
event_id = event.event_id
existing = await self.store.get_event(
existing = await self._store.get_event(
event_id, allow_none=True, allow_rejected=True
)
if existing:
@ -674,7 +673,7 @@ class FederationEventHandler(BaseHandler):
event_id = event.event_id
prevs = set(event.prev_event_ids())
seen = await self.store.have_events_in_timeline(prevs)
seen = await self._store.have_events_in_timeline(prevs)
missing_prevs = prevs - seen
if not missing_prevs:
@ -691,7 +690,7 @@ class FederationEventHandler(BaseHandler):
event_map = {event_id: event}
try:
# Get the state of the events we know about
ours = await self.state_store.get_state_groups_ids(room_id, seen)
ours = await self._state_store.get_state_groups_ids(room_id, seen)
# state_maps is a list of mappings from (type, state_key) to event_id
state_maps: List[StateMap[str]] = list(ours.values())
@ -720,13 +719,13 @@ class FederationEventHandler(BaseHandler):
for x in remote_state:
event_map[x.event_id] = x
room_version = await self.store.get_room_version_id(room_id)
room_version = await self._store.get_room_version_id(room_id)
state_map = await self._state_resolution_handler.resolve_events_with_store(
room_id,
room_version,
state_maps,
event_map,
state_res_store=StateResolutionStore(self.store),
state_res_store=StateResolutionStore(self._store),
)
# We need to give _process_received_pdu the actual state events
@ -734,7 +733,7 @@ class FederationEventHandler(BaseHandler):
# First though we need to fetch all the events that are in
# state_map, so we can build up the state below.
evs = await self.store.get_events(
evs = await self._store.get_events(
list(state_map.values()),
get_prev_content=False,
redact_behaviour=EventRedactBehaviour.AS_IS,
@ -774,7 +773,7 @@ class FederationEventHandler(BaseHandler):
(
state_event_ids,
auth_event_ids,
) = await self.federation_client.get_room_state_ids(
) = await self._federation_client.get_room_state_ids(
destination, room_id, event_id=event_id
)
@ -788,7 +787,7 @@ class FederationEventHandler(BaseHandler):
desired_events = set(state_event_ids)
desired_events.add(event_id)
logger.debug("Fetching %i events from cache/store", len(desired_events))
fetched_events = await self.store.get_events(
fetched_events = await self._store.get_events(
desired_events, allow_rejected=True
)
@ -809,20 +808,20 @@ class FederationEventHandler(BaseHandler):
missing_auth_events = set(auth_event_ids) - fetched_events.keys()
missing_auth_events.difference_update(
await self.store.have_seen_events(room_id, missing_auth_events)
await self._store.have_seen_events(room_id, missing_auth_events)
)
logger.debug("We are also missing %i auth events", len(missing_auth_events))
missing_events = missing_desired_events | missing_auth_events
logger.debug("Fetching %i events from remote", len(missing_events))
await self._get_events_and_persist(
destination=destination, room_id=room_id, events=missing_events
destination=destination, room_id=room_id, event_ids=missing_events
)
# we need to make sure we re-load from the database to get the rejected
# state correct.
fetched_events.update(
await self.store.get_events(missing_desired_events, allow_rejected=True)
await self._store.get_events(missing_desired_events, allow_rejected=True)
)
# check for events which were in the wrong room.
@ -883,8 +882,13 @@ class FederationEventHandler(BaseHandler):
state: Optional[Iterable[EventBase]],
backfilled: bool = False,
) -> None:
"""Called when we have a new pdu. We need to do auth checks and put it
through the StateHandler.
"""Called when we have a new non-outlier event.
This is called when we have a new event to add to the room DAG - either directly
via a /send request, retrieved via get_missing_events after a /send request, or
backfilled after a client request.
We need to do auth checks and put it through the StateHandler.
Args:
origin: server sending the event
@ -899,17 +903,24 @@ class FederationEventHandler(BaseHandler):
notification to clients, and validation of device keys.)
"""
logger.debug("Processing event: %s", event)
assert not event.internal_metadata.outlier
try:
context = await self.state_handler.compute_event_context(
context = await self._state_handler.compute_event_context(
event, old_state=state
)
await self._auth_and_persist_event(
origin, event, context, state=state, backfilled=backfilled
context = await self._check_event_auth(
origin,
event,
context,
state=state,
backfilled=backfilled,
)
except AuthError as e:
raise FederationError("ERROR", e.code, e.msg, affected=event.event_id)
await self._run_push_actions_and_persist_event(event, context, backfilled)
if backfilled:
return
@ -919,7 +930,7 @@ class FederationEventHandler(BaseHandler):
device_id = event.content.get("device_id")
sender_key = event.content.get("sender_key")
cached_devices = await self.store.get_cached_devices_for_user(event.sender)
cached_devices = await self._store.get_cached_devices_for_user(event.sender)
resync = False # Whether we should resync device lists.
@ -995,10 +1006,10 @@ class FederationEventHandler(BaseHandler):
"""
try:
await self.store.mark_remote_user_device_cache_as_stale(sender)
await self._store.mark_remote_user_device_cache_as_stale(sender)
# Immediately attempt a resync in the background
if self.config.worker_app:
if self._config.worker.worker_app:
await self._user_device_resync(user_id=sender)
else:
await self._device_list_updater.user_device_resync(sender)
@ -1023,9 +1034,15 @@ class FederationEventHandler(BaseHandler):
return
# Skip processing a marker event if the room version doesn't
# support it.
room_version = await self.store.get_room_version(marker_event.room_id)
if not room_version.msc2716_historical:
# support it or the event is not from the room creator.
room_version = await self._store.get_room_version(marker_event.room_id)
create_event = await self._store.get_create_event_for_room(marker_event.room_id)
room_creator = create_event.content.get(EventContentFields.ROOM_CREATOR)
if (
not room_version.msc2716_historical
or not self._config.experimental.msc2716_enabled
or marker_event.sender != room_creator
):
return
logger.debug("_handle_marker_event: received %s", marker_event)
@ -1048,7 +1065,7 @@ class FederationEventHandler(BaseHandler):
[insertion_event_id],
)
insertion_event = await self.store.get_event(
insertion_event = await self._store.get_event(
insertion_event_id, allow_none=True
)
if insertion_event is None:
@ -1066,7 +1083,7 @@ class FederationEventHandler(BaseHandler):
marker_event,
)
await self.store.insert_insertion_extremity(
await self._store.insert_insertion_extremity(
insertion_event_id, marker_event.room_id
)
@ -1077,25 +1094,25 @@ class FederationEventHandler(BaseHandler):
)
async def _get_events_and_persist(
self, destination: str, room_id: str, events: Iterable[str]
self, destination: str, room_id: str, event_ids: Collection[str]
) -> None:
"""Fetch the given events from a server, and persist them as outliers.
This function *does not* recursively get missing auth events of the
newly fetched events. Callers must include in the `events` argument
newly fetched events. Callers must include in the `event_ids` argument
any missing events from the auth chain.
Logs a warning if we can't find the given event.
"""
room_version = await self.store.get_room_version(room_id)
room_version = await self._store.get_room_version(room_id)
event_map: Dict[str, EventBase] = {}
async def get_event(event_id: str):
with nested_logging_context(event_id):
try:
event = await self.federation_client.get_pdu(
event = await self._federation_client.get_pdu(
[destination],
event_id,
room_version,
@ -1119,28 +1136,78 @@ class FederationEventHandler(BaseHandler):
e,
)
await concurrently_execute(get_event, events, 5)
await concurrently_execute(get_event, event_ids, 5)
logger.info("Fetched %i events of %i requested", len(event_map), len(event_ids))
# Make a map of auth events for each event. We do this after fetching
# all the events as some of the events' auth events will be in the list
# of requested events.
# we now need to auth the events in an order which ensures that each event's
# auth_events are authed before the event itself.
#
# XXX: it might be possible to kick this process off in parallel with fetching
# the events.
while event_map:
# build a list of events whose auth events are not in the queue.
roots = tuple(
ev
for ev in event_map.values()
if not any(aid in event_map for aid in ev.auth_event_ids())
)
auth_events = [
aid
for event in event_map.values()
for aid in event.auth_event_ids()
if aid not in event_map
]
persisted_events = await self.store.get_events(
if not roots:
# if *none* of the remaining events are ready, that means
# we have a loop. This either means a bug in our logic, or that
# somebody has managed to create a loop (which requires finding a
# hash collision in room v2 and later).
logger.warning(
"Loop found in auth events while fetching missing state/auth "
"events: %s",
shortstr(event_map.keys()),
)
return
logger.info(
"Persisting %i of %i remaining events", len(roots), len(event_map)
)
await self._auth_and_persist_fetched_events(destination, room_id, roots)
for ev in roots:
del event_map[ev.event_id]
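
(The while loop above is a batched topological sort over the auth DAG; a stand-alone sketch with plain dicts and hypothetical event IDs:)

event_auth = {                    # event_id -> its auth_event_ids
    "$create": [],
    "$power_levels": ["$create"],
    "$message": ["$create", "$power_levels"],
}

pending = dict(event_auth)
while pending:
    # events whose auth events are all already persisted (i.e. not pending)
    roots = [
        e for e, auth in pending.items() if not any(a in pending for a in auth)
    ]
    if not roots:
        raise RuntimeError("loop in auth events")  # cf. the warning above
    print("persist batch:", roots)
    for e in roots:
        del pending[e]
# persist batch: ['$create'], then ['$power_levels'], then ['$message']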
async def _auth_and_persist_fetched_events(
self, origin: str, room_id: str, fetched_events: Collection[EventBase]
) -> None:
"""Persist the events fetched by _get_events_and_persist.
The events should not depend on one another, e.g. this should be used to persist
a bunch of outliers, but not a chunk of individual events that depend
on each other for state calculations.
We also assume that all of the auth events for all of the events have already
been persisted.
Notifies about the events where appropriate.
Args:
origin: where the events came from
room_id: the room that the events are meant to be in (though this has
not yet been checked)
fetched_events: the fetched events to persist
"""
# get all the auth events for all the events in this batch. By now, they should
# have been persisted.
auth_events = {
aid for event in fetched_events for aid in event.auth_event_ids()
}
persisted_events = await self._store.get_events(
auth_events,
allow_rejected=True,
)
event_infos = []
for event in event_map.values():
for event in fetched_events:
auth = {}
for auth_event_id in event.auth_event_ids():
ae = persisted_events.get(auth_event_id) or event_map.get(auth_event_id)
ae = persisted_events.get(auth_event_id)
if ae:
auth[(ae.type, ae.state_key)] = ae
else:
@ -1148,34 +1215,13 @@ class FederationEventHandler(BaseHandler):
event_infos.append(_NewEventInfo(event, auth))
if event_infos:
await self._auth_and_persist_events(
destination,
room_id,
event_infos,
)
async def _auth_and_persist_events(
self,
origin: str,
room_id: str,
event_infos: Collection[_NewEventInfo],
) -> None:
"""Creates the appropriate contexts and persists events. The events
should not depend on one another, e.g. this should be used to persist
a bunch of outliers, but not a chunk of individual events that depend
on each other for state calculations.
Notifies about the events where appropriate.
"""
if not event_infos:
return
async def prep(ev_info: _NewEventInfo):
event = ev_info.event
with nested_logging_context(suffix=event.event_id):
res = await self.state_handler.compute_event_context(event)
res = await self._state_handler.compute_event_context(event)
res = await self._check_event_auth(
origin,
event,
@ -1199,49 +1245,6 @@ class FederationEventHandler(BaseHandler):
],
)
async def _auth_and_persist_event(
self,
origin: str,
event: EventBase,
context: EventContext,
state: Optional[Iterable[EventBase]] = None,
claimed_auth_event_map: Optional[StateMap[EventBase]] = None,
backfilled: bool = False,
) -> None:
"""
Process an event by performing auth checks and then persisting to the database.
Args:
origin: The host the event originates from.
event: The event itself.
context:
The event context.
state:
The state events used to check the event for soft-fail. If this is
not provided the current state events will be used.
claimed_auth_event_map:
A map of (type, state_key) => event for the event's claimed auth_events.
Possibly incomplete, and possibly including events that are not yet
persisted, or authed, or in the right room.
Only populated where we may not already have persisted these events -
for example, when populating outliers.
backfilled: True if the event was backfilled.
"""
context = await self._check_event_auth(
origin,
event,
context,
state=state,
claimed_auth_event_map=claimed_auth_event_map,
backfilled=backfilled,
)
await self._run_push_actions_and_persist_event(event, context, backfilled)
async def _check_event_auth(
self,
origin: str,
@ -1269,16 +1272,17 @@ class FederationEventHandler(BaseHandler):
Possibly incomplete, and possibly including events that are not yet
persisted, or authed, or in the right room.
Only populated where we may not already have persisted these events -
for example, when populating outliers, or the state for a backwards
extremity.
Only populated when populating outliers.
backfilled: True if the event was backfilled.
Returns:
The updated context object.
"""
room_version = await self.store.get_room_version_id(event.room_id)
# claimed_auth_event_map should be given iff the event is an outlier
assert bool(claimed_auth_event_map) == event.internal_metadata.outlier
room_version = await self._store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
if claimed_auth_event_map:
@ -1291,7 +1295,7 @@ class FederationEventHandler(BaseHandler):
auth_events_ids = self._event_auth_handler.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events_x = await self.store.get_events(auth_events_ids)
auth_events_x = await self._store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_x.values()}
try:
@ -1321,19 +1325,29 @@ class FederationEventHandler(BaseHandler):
if not context.rejected:
await self._check_for_soft_fail(event, state, backfilled, origin=origin)
if event.type == EventTypes.GuestAccess and not context.rejected:
await self.maybe_kick_guest_users(event)
await self._maybe_kick_guest_users(event)
# If we are going to send this event over federation we precalculate
# the joined hosts.
if event.internal_metadata.get_send_on_behalf_of():
await self.event_creation_handler.cache_joined_hosts_for_event(
await self._event_creation_handler.cache_joined_hosts_for_event(
event, context
)
return context
async def _maybe_kick_guest_users(self, event: EventBase) -> None:
if event.type != EventTypes.GuestAccess:
return
guest_access = event.content.get(EventContentFields.GUEST_ACCESS)
if guest_access == GuestAccess.CAN_JOIN:
return
current_state_map = await self._state_handler.get_current_state(event.room_id)
current_state = list(current_state_map.values())
await self._get_room_member_handler().kick_guest_users(current_state)
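
(A hedged sketch of the guard in _maybe_kick_guest_users, assuming the named constants are the literal strings "m.room.guest_access", "guest_access" and "can_join":)

def triggers_guest_kick(event_type: str, content: dict) -> bool:
    # EventTypes.GuestAccess / EventContentFields.GUEST_ACCESS /
    # GuestAccess.CAN_JOIN are assumed to be the literals below.
    if event_type != "m.room.guest_access":
        return False
    return content.get("guest_access") != "can_join"

print(triggers_guest_kick("m.room.guest_access", {"guest_access": "forbidden"}))  # True
print(triggers_guest_kick("m.room.guest_access", {"guest_access": "can_join"}))   # False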
async def _check_for_soft_fail(
self,
event: EventBase,
@ -1356,7 +1370,7 @@ class FederationEventHandler(BaseHandler):
if backfilled or event.internal_metadata.is_outlier():
return
extrem_ids_list = await self.store.get_latest_event_ids_in_room(event.room_id)
extrem_ids_list = await self._store.get_latest_event_ids_in_room(event.room_id)
extrem_ids = set(extrem_ids_list)
prev_event_ids = set(event.prev_event_ids())
@ -1365,7 +1379,7 @@ class FederationEventHandler(BaseHandler):
# state at the event, so no point rechecking auth for soft fail.
return
room_version = await self.store.get_room_version_id(event.room_id)
room_version = await self._store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
# Calculate the "current state".
@ -1382,19 +1396,19 @@ class FederationEventHandler(BaseHandler):
# given state at the event. This should correctly handle cases
# like bans, especially with state res v2.
state_sets_d = await self.state_store.get_state_groups(
state_sets_d = await self._state_store.get_state_groups(
event.room_id, extrem_ids
)
state_sets: List[Iterable[EventBase]] = list(state_sets_d.values())
state_sets.append(state)
current_states = await self.state_handler.resolve_events(
current_states = await self._state_handler.resolve_events(
room_version, state_sets, event
)
current_state_ids: StateMap[str] = {
k: e.event_id for k, e in current_states.items()
}
else:
current_state_ids = await self.state_handler.get_current_state_ids(
current_state_ids = await self._state_handler.get_current_state_ids(
event.room_id, latest_event_ids=extrem_ids
)
@ -1410,7 +1424,7 @@ class FederationEventHandler(BaseHandler):
e for k, e in current_state_ids.items() if k in auth_types
]
auth_events_map = await self.store.get_events(current_state_ids_list)
auth_events_map = await self._store.get_events(current_state_ids_list)
current_auth_events = {
(e.type, e.state_key): e for e in auth_events_map.values()
}
@ -1481,7 +1495,9 @@ class FederationEventHandler(BaseHandler):
#
# we start by checking if they are in the store, and then try calling /event_auth/.
if missing_auth:
have_events = await self.store.have_seen_events(event.room_id, missing_auth)
have_events = await self._store.have_seen_events(
event.room_id, missing_auth
)
logger.debug("Events %s are in the store", have_events)
missing_auth.difference_update(have_events)
@ -1490,7 +1506,7 @@ class FederationEventHandler(BaseHandler):
logger.info("auth_events contains unknown events: %s", missing_auth)
try:
try:
remote_auth_chain = await self.federation_client.get_event_auth(
remote_auth_chain = await self._federation_client.get_event_auth(
origin, event.room_id, event.event_id
)
except RequestSendFailed as e1:
@ -1499,43 +1515,49 @@ class FederationEventHandler(BaseHandler):
logger.info("Failed to get event auth from remote: %s", e1)
return context, auth_events
seen_remotes = await self.store.have_seen_events(
seen_remotes = await self._store.have_seen_events(
event.room_id, [e.event_id for e in remote_auth_chain]
)
for e in remote_auth_chain:
if e.event_id in seen_remotes:
for auth_event in remote_auth_chain:
if auth_event.event_id in seen_remotes:
continue
if e.event_id == event.event_id:
if auth_event.event_id == event.event_id:
continue
try:
auth_ids = e.auth_event_ids()
auth_ids = auth_event.auth_event_ids()
auth = {
(e.type, e.state_key): e
for e in remote_auth_chain
if e.event_id in auth_ids or e.type == EventTypes.Create
}
e.internal_metadata.outlier = True
auth_event.internal_metadata.outlier = True
logger.debug(
"_check_event_auth %s missing_auth: %s",
event.event_id,
e.event_id,
auth_event.event_id,
)
missing_auth_event_context = (
await self.state_handler.compute_event_context(e)
await self._state_handler.compute_event_context(auth_event)
)
await self._auth_and_persist_event(
missing_auth_event_context = await self._check_event_auth(
origin,
e,
auth_event,
missing_auth_event_context,
claimed_auth_event_map=auth,
)
await self.persist_events_and_notify(
event.room_id, [(auth_event, missing_auth_event_context)]
)
if e.event_id in event_auth_events:
auth_events[(e.type, e.state_key)] = e
if auth_event.event_id in event_auth_events:
auth_events[
(auth_event.type, auth_event.state_key)
] = auth_event
except AuthError:
pass
@ -1566,7 +1588,7 @@ class FederationEventHandler(BaseHandler):
# XXX: currently this checks for redactions but I'm not convinced that is
# necessary?
different_events = await self.store.get_events_as_list(different_auth)
different_events = await self._store.get_events_as_list(different_auth)
for d in different_events:
if d.room_id != event.room_id:
@ -1592,8 +1614,8 @@ class FederationEventHandler(BaseHandler):
remote_auth_events.update({(d.type, d.state_key): d for d in different_events})
remote_state = remote_auth_events.values()
room_version = await self.store.get_room_version_id(event.room_id)
new_state = await self.state_handler.resolve_events(
room_version = await self._store.get_room_version_id(event.room_id)
new_state = await self._state_handler.resolve_events(
room_version, (local_state, remote_state), event
)
@ -1651,7 +1673,7 @@ class FederationEventHandler(BaseHandler):
# create a new state group as a delta from the existing one.
prev_group = context.state_group
state_group = await self.state_store.store_state_group(
state_group = await self._state_store.store_state_group(
event.event_id,
event.room_id,
prev_group=prev_group,
@ -1678,14 +1700,17 @@ class FederationEventHandler(BaseHandler):
context: The event context.
backfilled: True if the event was backfilled.
"""
# this method should not be called on outliers (those code paths call
# persist_events_and_notify directly.)
assert not event.internal_metadata.outlier
try:
if (
not event.internal_metadata.is_outlier()
and not backfilled
not backfilled
and not context.rejected
and (await self.store.get_min_depth(event.room_id)) <= event.depth
and (await self._store.get_min_depth(event.room_id)) <= event.depth
):
await self.action_generator.handle_push_actions_for_event(
await self._action_generator.handle_push_actions_for_event(
event, context
)
@ -1694,7 +1719,7 @@ class FederationEventHandler(BaseHandler):
)
except Exception:
run_in_background(
self.store.remove_push_actions_from_staging, event.event_id
self._store.remove_push_actions_from_staging, event.event_id
)
raise
@ -1719,27 +1744,27 @@ class FederationEventHandler(BaseHandler):
The stream ID after which all events have been persisted.
"""
if not event_and_contexts:
return self.store.get_current_events_token()
return self._store.get_current_events_token()
instance = self.config.worker.events_shard_config.get_instance(room_id)
instance = self._config.worker.events_shard_config.get_instance(room_id)
if instance != self._instance_name:
# Limit the number of events sent over replication. We choose 200
# here as that is what we default to in `max_request_body_size(..)`
for batch in batch_iter(event_and_contexts, 200):
result = await self._send_events(
instance_name=instance,
store=self.store,
store=self._store,
room_id=room_id,
event_and_contexts=batch,
backfilled=backfilled,
)
return result["max_stream_id"]
else:
assert self.storage.persistence
assert self._storage.persistence
# Note that this returns the events that were persisted, which may not be
# the same as were passed in if some were deduplicated due to transaction IDs.
events, max_stream_token = await self.storage.persistence.persist_events(
events, max_stream_token = await self._storage.persistence.persist_events(
event_and_contexts, backfilled=backfilled
)
@ -1773,7 +1798,7 @@ class FederationEventHandler(BaseHandler):
# users
if event.internal_metadata.is_outlier():
if event.membership != Membership.INVITE:
if not self.is_mine_id(target_user_id):
if not self._is_mine_id(target_user_id):
return
target_user = UserID.from_string(target_user_id)
@ -1787,7 +1812,7 @@ class FederationEventHandler(BaseHandler):
event_pos = PersistedEventPosition(
self._instance_name, event.internal_metadata.stream_ordering
)
self.notifier.on_new_room_event(
self._notifier.on_new_room_event(
event, event_pos, max_stream_token, extra_users=extra_users
)
@ -1822,4 +1847,4 @@ class FederationEventHandler(BaseHandler):
raise SynapseError(HTTPStatus.BAD_REQUEST, "Too many auth_events")
async def get_min_depth_for_context(self, context: str) -> int:
return await self.store.get_min_depth(context)
return await self._store.get_min_depth(context)

View file

@ -540,13 +540,13 @@ class IdentityHandler(BaseHandler):
# It is already checked that public_baseurl is configured since this code
# should only be used if account_threepid_delegate_msisdn is true.
assert self.hs.config.public_baseurl
assert self.hs.config.server.public_baseurl
# we need to tell the client to send the token back to us, since it doesn't
# otherwise know where to send it, so add submit_url response parameter
# (see also MSC2078)
data["submit_url"] = (
self.hs.config.public_baseurl
self.hs.config.server.public_baseurl
+ "_matrix/client/unstable/add_threepid/msisdn/submit_token"
)
return data

View file

@ -27,6 +27,7 @@ from synapse import event_auth
from synapse.api.constants import (
EventContentFields,
EventTypes,
GuestAccess,
Membership,
RelationTypes,
UserTypes,
@ -83,7 +84,7 @@ class MessageHandler:
# scheduled.
self._scheduled_expiry: Optional[IDelayedCall] = None
if not hs.config.worker_app:
if not hs.config.worker.worker_app:
run_as_background_process(
"_schedule_next_expiry", self._schedule_next_expiry
)
@ -426,7 +427,7 @@ class EventCreationHandler:
self.send_event = ReplicationSendEventRestServlet.make_client(hs)
# This is only used to get at ratelimit function, and maybe_kick_guest_users
# This is only used to get at the ratelimit function
self.base_handler = BaseHandler(hs)
# We arbitrarily limit concurrent event creation for a room to 5.
@ -462,7 +463,7 @@ class EventCreationHandler:
self._dummy_events_threshold = hs.config.dummy_events_threshold
if (
self.config.run_background_tasks
self.config.worker.run_background_tasks
and self.config.cleanup_extremities_with_dummy_events
):
self.clock.looping_call(
@ -1308,7 +1309,7 @@ class EventCreationHandler:
requester, is_admin_redaction=is_admin_redaction
)
await self.base_handler.maybe_kick_guest_users(event, context)
await self._maybe_kick_guest_users(event, context)
validation_override = event.sender in self.config.meow.validation_override
if event.type == EventTypes.CanonicalAlias and not validation_override:
@ -1396,6 +1397,9 @@ class EventCreationHandler:
allow_none=True,
)
room_version = await self.store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
# we can make some additional checks now if we have the original event.
if original_event:
if original_event.type == EventTypes.Create:
@ -1407,6 +1411,28 @@ class EventCreationHandler:
if original_event.type == EventTypes.ServerACL:
raise AuthError(403, "Redacting server ACL events is not permitted")
# Add a little safety stop-gap to prevent people from trying to
# redact MSC2716 related events when they're in a room version
# which does not support it yet. We allow people to use MSC2716
# events in existing room versions but only from the room
# creator since it does not require any changes to the auth
# rules and, in effect, the redaction algorithm. In the
# supported room version, we add the `historical` power level to
# auth the MSC2716 related events and adjust the redaction
# algorithm to keep the `historical` field around (redacting an
# event should only strip fields which don't affect the
# structural protocol level).
is_msc2716_event = (
original_event.type == EventTypes.MSC2716_INSERTION
or original_event.type == EventTypes.MSC2716_CHUNK
or original_event.type == EventTypes.MSC2716_MARKER
)
if not room_version_obj.msc2716_historical and is_msc2716_event:
raise AuthError(
403,
"Redacting MSC2716 events is not supported in this room version",
)
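
(Distilled, the guard is a type check plus a room-version flag. The unstable type strings below are an assumption based on the MSC2716 prefix, standing in for EventTypes.MSC2716_{INSERTION,CHUNK,MARKER}:)

MSC2716_EVENT_TYPES = {
    "org.matrix.msc2716.insertion",  # assumed literal values
    "org.matrix.msc2716.chunk",
    "org.matrix.msc2716.marker",
}

def redaction_permitted(event_type: str, msc2716_historical: bool) -> bool:
    if event_type in MSC2716_EVENT_TYPES and not msc2716_historical:
        return False  # the handler above raises a 403 AuthError here
    return True

print(redaction_permitted("org.matrix.msc2716.marker", False))  # False
print(redaction_permitted("m.room.message", False))             # True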
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self._event_auth_handler.compute_auth_events(
event, prev_state_ids, for_verification=True
@ -1414,9 +1440,6 @@ class EventCreationHandler:
auth_events_map = await self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_map.values()}
room_version = await self.store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
if event_auth.check_redaction(
room_version_obj, event, auth_events=auth_events
):
@ -1474,6 +1497,28 @@ class EventCreationHandler:
return event
async def _maybe_kick_guest_users(
self, event: EventBase, context: EventContext
) -> None:
if event.type != EventTypes.GuestAccess:
return
guest_access = event.content.get(EventContentFields.GUEST_ACCESS)
if guest_access == GuestAccess.CAN_JOIN:
return
current_state_ids = await context.get_current_state_ids()
# since this is a client-generated event, it cannot be an outlier and we must
# therefore have the state ids.
assert current_state_ids is not None
current_state_dict = await self.store.get_events(
list(current_state_ids.values())
)
current_state = list(current_state_dict.values())
logger.info("maybe_kick_guest_users %r", current_state)
await self.hs.get_room_member_handler().kick_guest_users(current_state)
async def _bump_active_time(self, user: UserID) -> None:
try:
presence = self.hs.get_presence_handler()

View file

@ -324,7 +324,7 @@ class OidcProvider:
self._allow_existing_users = provider.allow_existing_users
self._http_client = hs.get_proxied_http_client()
self._server_name: str = hs.config.server_name
self._server_name: str = hs.config.server.server_name
# identifier for the external_ids table
self.idp_id = provider.idp_id
@ -338,9 +338,6 @@ class OidcProvider:
# optional brand identifier for this auth provider
self.idp_brand = provider.idp_brand
# Optional brand identifier for the unstable API (see MSC2858).
self.unstable_idp_brand = provider.unstable_idp_brand
self._sso_handler = hs.get_sso_handler()
self._sso_handler.register_identity_provider(self)

View file

@ -91,7 +91,7 @@ class PaginationHandler:
self._retention_allowed_lifetime_min = hs.config.retention_allowed_lifetime_min
self._retention_allowed_lifetime_max = hs.config.retention_allowed_lifetime_max
if hs.config.run_background_tasks and hs.config.retention_enabled:
if hs.config.worker.run_background_tasks and hs.config.retention_enabled:
# Run the purge jobs described in the configuration file.
for job in hs.config.retention_purge_jobs:
logger.info("Setting up purge job with config: %s", job)

View file

@ -28,6 +28,7 @@ from bisect import bisect
from contextlib import contextmanager
from typing import (
TYPE_CHECKING,
Any,
Callable,
Collection,
Dict,
@ -615,7 +616,7 @@ class PresenceHandler(BasePresenceHandler):
super().__init__(hs)
self.hs = hs
self.server_name = hs.hostname
self.wheel_timer = WheelTimer()
self.wheel_timer: WheelTimer[str] = WheelTimer()
self.notifier = hs.get_notifier()
self._presence_enabled = hs.config.use_presence
@ -924,7 +925,7 @@ class PresenceHandler(BasePresenceHandler):
prev_state = await self.current_state_for_user(user_id)
new_fields = {"last_active_ts": self.clock.time_msec()}
new_fields: Dict[str, Any] = {"last_active_ts": self.clock.time_msec()}
if prev_state.state == PresenceState.UNAVAILABLE:
new_fields["state"] = PresenceState.ONLINE

View file

@ -63,7 +63,7 @@ class ProfileHandler(BaseHandler):
self.user_directory_handler = hs.get_user_directory_handler()
if hs.config.run_background_tasks:
if hs.config.worker.run_background_tasks:
self.clock.looping_call(
self._update_remote_profile_cache, self.PROFILE_UPDATE_MS
)

View file

@ -28,7 +28,7 @@ logger = logging.getLogger(__name__)
class ReadMarkerHandler(BaseHandler):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.server_name = hs.config.server_name
self.server_name = hs.config.server.server_name
self.store = hs.get_datastore()
self.account_data_handler = hs.get_account_data_handler()
self.read_marker_linearizer = Linearizer(name="read_marker")

View file

@ -29,7 +29,7 @@ class ReceiptsHandler(BaseHandler):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.server_name = hs.config.server_name
self.server_name = hs.config.server.server_name
self.store = hs.get_datastore()
self.event_auth_handler = hs.get_event_auth_handler()

View file

@ -21,7 +21,13 @@ from prometheus_client import Counter
from typing_extensions import TypedDict
from synapse import types
from synapse.api.constants import MAX_USERID_LENGTH, EventTypes, JoinRules, LoginType
from synapse.api.constants import (
MAX_USERID_LENGTH,
EventContentFields,
EventTypes,
JoinRules,
LoginType,
)
from synapse.api.errors import AuthError, Codes, ConsentNotGivenError, SynapseError
from synapse.appservice import ApplicationService
from synapse.config.server import is_threepid_reserved
@ -96,7 +102,7 @@ class RegistrationHandler(BaseHandler):
self.spam_checker = hs.get_spam_checker()
if hs.config.worker_app:
if hs.config.worker.worker_app:
self._register_client = ReplicationRegisterServlet.make_client(hs)
self._register_device_client = RegisterDeviceReplicationServlet.make_client(
hs
@ -412,7 +418,7 @@ class RegistrationHandler(BaseHandler):
# Choose whether to federate the new room.
if not self.hs.config.registration.autocreate_auto_join_rooms_federated:
stub_config["creation_content"] = {"m.federate": False}
stub_config["creation_content"] = {EventContentFields.FEDERATE: False}
for r in self.hs.config.registration.auto_join_rooms:
logger.info("Auto-joining %s to %s", user_id, r)
@ -697,7 +703,7 @@ class RegistrationHandler(BaseHandler):
address: the IP address used to perform the registration.
shadow_banned: Whether to shadow-ban the user
"""
if self.hs.config.worker_app:
if self.hs.config.worker.worker_app:
await self._register_client(
user_id=user_id,
password_hash=password_hash,
@ -787,7 +793,7 @@ class RegistrationHandler(BaseHandler):
Does the bits that need doing on the main process. Not for use outside this
class and RegisterDeviceReplicationServlet.
"""
assert not self.hs.config.worker_app
assert not self.hs.config.worker.worker_app
valid_until_ms = None
if self.session_lifetime is not None:
if is_guest:
@ -844,7 +850,7 @@ class RegistrationHandler(BaseHandler):
"""
# TODO: 3pid registration can actually happen on the workers. Consider
# refactoring it.
if self.hs.config.worker_app:
if self.hs.config.worker.worker_app:
await self._post_registration_client(
user_id=user_id, auth_result=auth_result, access_token=access_token
)

View file

@ -25,12 +25,15 @@ from collections import OrderedDict
from typing import TYPE_CHECKING, Any, Awaitable, Dict, List, Optional, Tuple
from synapse.api.constants import (
EventContentFields,
EventTypes,
GuestAccess,
HistoryVisibility,
JoinRules,
Membership,
RoomCreationPreset,
RoomEncryptionAlgorithms,
RoomTypes,
)
from synapse.api.errors import (
AuthError,
@ -388,14 +391,14 @@ class RoomCreationHandler(BaseHandler):
old_room_create_event = await self.store.get_create_event_for_room(old_room_id)
# Check if the create event specified a non-federatable room
if not old_room_create_event.content.get("m.federate", True):
if not old_room_create_event.content.get(EventContentFields.FEDERATE, True):
# If so, mark the new room as non-federatable as well
creation_content["m.federate"] = False
creation_content[EventContentFields.FEDERATE] = False
initial_state = {}
# Replicate relevant room events
types_to_copy = (
types_to_copy: List[Tuple[str, Optional[str]]] = [
(EventTypes.JoinRules, ""),
(EventTypes.Name, ""),
(EventTypes.Topic, ""),
@ -406,7 +409,16 @@ class RoomCreationHandler(BaseHandler):
(EventTypes.ServerACL, ""),
(EventTypes.RelatedGroups, ""),
(EventTypes.PowerLevels, ""),
)
]
# If the old room was a space, copy over the room type and the rooms in
# the space.
if (
old_room_create_event.content.get(EventContentFields.ROOM_TYPE)
== RoomTypes.SPACE
):
creation_content[EventContentFields.ROOM_TYPE] = RoomTypes.SPACE
types_to_copy.append((EventTypes.SpaceChild, None))
old_room_state_ids = await self.store.get_filtered_current_state_ids(
old_room_id, StateFilter.from_types(types_to_copy)
@ -417,6 +429,11 @@ class RoomCreationHandler(BaseHandler):
for k, old_event_id in old_room_state_ids.items():
old_event = old_room_state_events.get(old_event_id)
if old_event:
# If the event is a space child event with empty content, it was
# removed from the space and should be ignored.
if k[0] == EventTypes.SpaceChild and not old_event.content:
continue
initial_state[k] = old_event.content
# deep-copy the power-levels event before we start modifying it
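
(In essence the upgrade path now special-cases spaces; a toy version assuming the named constants are the literals "type", "m.space" and "m.space.child":)

old_create_content = {"room_version": "9", "type": "m.space"}  # hypothetical
creation_content: dict = {}
types_to_copy = [("m.room.join_rules", ""), ("m.room.name", "")]

if old_create_content.get("type") == "m.space":
    creation_content["type"] = "m.space"
    # a state_key of None means "copy every m.space.child event"
    types_to_copy.append(("m.space.child", None))

print(creation_content)   # {'type': 'm.space'}
print(types_to_copy[-1])  # ('m.space.child', None)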
@ -919,7 +936,12 @@ class RoomCreationHandler(BaseHandler):
)
return last_stream_id
config = self._presets_dict[preset_config]
try:
config = self._presets_dict[preset_config]
except KeyError:
raise SynapseError(
400, f"'{preset_config}' is not a valid preset", errcode=Codes.BAD_JSON
)
creation_content.update({"creator": creator_id})
await send(etype=EventTypes.Create, content=creation_content)
@ -998,7 +1020,8 @@ class RoomCreationHandler(BaseHandler):
if config["guest_can_join"]:
if (EventTypes.GuestAccess, "") not in initial_state:
last_sent_stream_id = await send(
etype=EventTypes.GuestAccess, content={"guest_access": "can_join"}
etype=EventTypes.GuestAccess,
content={EventContentFields.GUEST_ACCESS: GuestAccess.CAN_JOIN},
)
for (etype, state_key), content in initial_state.items():

View file

@ -19,7 +19,13 @@ from typing import TYPE_CHECKING, Optional, Tuple
import msgpack
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.constants import EventTypes, HistoryVisibility, JoinRules
from synapse.api.constants import (
EventContentFields,
EventTypes,
GuestAccess,
HistoryVisibility,
JoinRules,
)
from synapse.api.errors import (
Codes,
HttpResponseException,
@ -307,7 +313,9 @@ class RoomListHandler(BaseHandler):
# Return whether this room is open to federation users or not
create_event = current_state[EventTypes.Create, ""]
result["m.federate"] = create_event.content.get("m.federate", True)
result["m.federate"] = create_event.content.get(
EventContentFields.FEDERATE, True
)
name_event = current_state.get((EventTypes.Name, ""))
if name_event:
@ -336,8 +344,8 @@ class RoomListHandler(BaseHandler):
guest_event = current_state.get((EventTypes.GuestAccess, ""))
guest = None
if guest_event:
guest = guest_event.content.get("guest_access", None)
result["guest_can_join"] = guest == "can_join"
guest = guest_event.content.get(EventContentFields.GUEST_ACCESS)
result["guest_can_join"] = guest == GuestAccess.CAN_JOIN
avatar_event = current_state.get(("m.room.avatar", ""))
if avatar_event:

View file

@ -23,6 +23,7 @@ from synapse.api.constants import (
AccountDataTypes,
EventContentFields,
EventTypes,
GuestAccess,
Membership,
)
from synapse.api.errors import (
@ -44,6 +45,7 @@ from synapse.types import (
RoomID,
StateMap,
UserID,
create_requester,
get_domain_from_id,
)
from synapse.util.async_helpers import Linearizer
@ -70,6 +72,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self.auth = hs.get_auth()
self.state_handler = hs.get_state_handler()
self.config = hs.config
self._server_name = hs.hostname
self.federation_handler = hs.get_federation_handler()
self.directory_handler = hs.get_directory_handler()
@ -115,9 +118,8 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
burst_count=hs.config.ratelimiting.rc_invites_per_user.burst_count,
)
# This is only used to get at ratelimit function, and
# maybe_kick_guest_users. It's fine there are multiple of these as
# it doesn't store state.
# This is only used to get at the ratelimit function. It's fine there are
# multiple of these as it doesn't store state.
self.base_handler = BaseHandler(hs)
@abc.abstractmethod
@ -1095,10 +1097,62 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
return bool(
guest_access
and guest_access.content
and "guest_access" in guest_access.content
and guest_access.content["guest_access"] == "can_join"
and guest_access.content.get(EventContentFields.GUEST_ACCESS)
== GuestAccess.CAN_JOIN
)
async def kick_guest_users(self, current_state: Iterable[EventBase]) -> None:
"""Kick any local guest users from the room.
This is called when the room state changes from guests allowed to not-allowed.
Args:
current_state: the current state of the room. We will iterate this to look
for guest users to kick.
"""
for member_event in current_state:
try:
if member_event.type != EventTypes.Member:
continue
if not self.hs.is_mine_id(member_event.state_key):
continue
if member_event.content["membership"] not in {
Membership.JOIN,
Membership.INVITE,
}:
continue
if (
"kind" not in member_event.content
or member_event.content["kind"] != "guest"
):
continue
# We make the user choose to leave, rather than have the
# event-sender kick them. This is partially because we don't
# need to worry about power levels, and partially because guest
# users are a concept which doesn't hugely work over federation,
# and having homeservers have their own users leave keeps more
# of that decision-making and control local to the guest-having
# homeserver.
target_user = UserID.from_string(member_event.state_key)
requester = create_requester(
target_user, is_guest=True, authenticated_entity=self._server_name
)
handler = self.hs.get_room_member_handler()
await handler.update_membership(
requester,
target_user,
member_event.room_id,
"leave",
ratelimit=False,
require_consent=False,
)
except Exception as e:
logger.exception("Error kicking guest user: %s" % (e,))
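
(The membership filter reduces to three checks: local user, joined or invited, and guest. A runnable illustration with hypothetical member events:)

members = [
    {"state_key": "@guest:local", "membership": "join", "kind": "guest"},
    {"state_key": "@alice:local", "membership": "join"},
    {"state_key": "@guest:remote", "membership": "join", "kind": "guest"},
    {"state_key": "@ghost:local", "membership": "leave", "kind": "guest"},
]

def is_mine(user_id: str) -> bool:      # stand-in for hs.is_mine_id
    return user_id.endswith(":local")

to_kick = [
    m["state_key"]
    for m in members
    if is_mine(m["state_key"])
    and m["membership"] in {"join", "invite"}
    and m.get("kind") == "guest"
]
print(to_kick)  # ['@guest:local']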
async def lookup_room_alias(
self, room_alias: RoomAlias
) -> Tuple[RoomID, List[str]]:
@ -1352,7 +1406,6 @@ class RoomMemberMasterHandler(RoomMemberHandler):
self.distributor = hs.get_distributor()
self.distributor.declare("user_left_room")
self._server_name = hs.hostname
async def _is_remote_room_too_complex(
self, room_id: str, remote_room_hosts: List[str]

View file

@ -28,9 +28,15 @@ from synapse.api.constants import (
Membership,
RoomTypes,
)
from synapse.api.errors import AuthError, Codes, NotFoundError, StoreError, SynapseError
from synapse.api.errors import (
AuthError,
Codes,
NotFoundError,
StoreError,
SynapseError,
UnsupportedRoomVersionError,
)
from synapse.events import EventBase
from synapse.events.utils import format_event_for_client_v2
from synapse.types import JsonDict
from synapse.util.caches.response_cache import ResponseCache
@ -82,7 +88,6 @@ class RoomSummaryHandler:
_PAGINATION_SESSION_VALIDITY_PERIOD_MS = 5 * 60 * 1000
def __init__(self, hs: "HomeServer"):
self._clock = hs.get_clock()
self._event_auth_handler = hs.get_event_auth_handler()
self._store = hs.get_datastore()
self._event_serializer = hs.get_event_client_serializer()
@ -641,18 +646,18 @@ class RoomSummaryHandler:
if max_children is None or max_children > MAX_ROOMS_PER_SPACE:
max_children = MAX_ROOMS_PER_SPACE
now = self._clock.time_msec()
events_result: List[JsonDict] = []
for edge_event in itertools.islice(child_events, max_children):
events_result.append(
await self._event_serializer.serialize_event(
edge_event,
time_now=now,
event_format=format_event_for_client_v2,
)
)
return _RoomEntry(room_id, room_entry, events_result)
stripped_events: List[JsonDict] = [
{
"type": e.type,
"state_key": e.state_key,
"content": e.content,
"room_id": e.room_id,
"sender": e.sender,
"origin_server_ts": e.origin_server_ts,
}
for e in itertools.islice(child_events, max_children)
]
return _RoomEntry(room_id, room_entry, stripped_events)
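
(Stripping to those six keys can be seen in isolation; the event below is a hypothetical m.space.child:)

full_event = {
    "type": "m.space.child",
    "state_key": "!child:example.org",
    "content": {"via": ["example.org"]},
    "room_id": "!space:example.org",
    "sender": "@alice:example.org",
    "origin_server_ts": 1631000000000,
    "event_id": "$not_included",   # dropped by the stripped format
    "signatures": {},              # dropped too
}

KEEP = ("type", "state_key", "content", "room_id", "sender", "origin_server_ts")
stripped = {k: full_event[k] for k in KEEP}
print(sorted(stripped))  # only the six stripped-state keys survive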
async def _summarize_remote_room(
self,
@ -814,7 +819,12 @@ class RoomSummaryHandler:
logger.info("room %s is unknown, omitting from summary", room_id)
return False
room_version = await self._store.get_room_version(room_id)
try:
room_version = await self._store.get_room_version(room_id)
except UnsupportedRoomVersionError:
# If a room with an unsupported room version is encountered, ignore
# it to avoid breaking the entire summary response.
return False
# Include the room if it has join rules of public or knock.
join_rules_event_id = state_ids.get((EventTypes.JoinRules, ""))
@ -1139,25 +1149,26 @@ def _is_suggested_child_event(edge_event: EventBase) -> bool:
_INVALID_ORDER_CHARS_RE = re.compile(r"[^\x20-\x7E]")
def _child_events_comparison_key(child: EventBase) -> Tuple[bool, Optional[str], str]:
def _child_events_comparison_key(
child: EventBase,
) -> Tuple[bool, Optional[str], int, str]:
"""
Generate a value for comparing two child events for ordering.
The rules for ordering are supposed to be:
The rules for ordering are:
1. The 'order' key, if it is valid.
2. The 'origin_server_ts' of the 'm.room.create' event.
2. The 'origin_server_ts' of the 'm.space.child' event.
3. The 'room_id'.
But we skip step 2 since we may not have any state from the room.
Args:
child: The event for generating a comparison key.
Returns:
The comparison key as a tuple of:
False if the ordering is valid.
The ordering field.
The 'order' field or None if it is not given or invalid.
The 'origin_server_ts' field.
The room ID.
"""
order = child.content.get("order")
@ -1168,4 +1179,4 @@ def _child_events_comparison_key(child: EventBase) -> Tuple[bool, Optional[str],
order = None
# Items without an order come last.
return (order is None, order, child.room_id)
return (order is None, order, child.origin_server_ts, child.room_id)
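
(Because Python compares tuples lexicographically, the key sorts exactly as documented: valid 'order' first, then origin_server_ts, then room ID, with order-less children last. A quick demonstration with made-up children:)

def key(order, ts, room_id):
    # mirrors: (order is None, order, child.origin_server_ts, child.room_id)
    return (order is None, order, ts, room_id)

children = [
    ("no-order-late", key(None, 2000, "!b:example.org")),
    ("ordered", key("aaa", 9999, "!c:example.org")),
    ("no-order-early", key(None, 1000, "!a:example.org")),
]
print([name for name, k in sorted(children, key=lambda c: c[1])])
# ['ordered', 'no-order-early', 'no-order-late']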

View file

@ -80,7 +80,6 @@ class SamlHandler(BaseHandler):
# the SsoIdentityProvider protocol type.
self.idp_icon = None
self.idp_brand = None
self.unstable_idp_brand = None
# a map from saml session id to Saml2SessionData object
self._outstanding_requests_dict: Dict[str, Saml2SessionData] = {}

View file

@ -104,11 +104,6 @@ class SsoIdentityProvider(Protocol):
"""Optional branding identifier"""
return None
@property
def unstable_idp_brand(self) -> Optional[str]:
"""Optional brand identifier for the unstable API (see MSC2858)."""
return None
@abc.abstractmethod
async def handle_redirect_request(
self,
@ -449,14 +444,16 @@ class SsoHandler:
if not user_id:
attributes = await self._call_attribute_mapper(sso_to_matrix_id_mapper)
if attributes.localpart is None:
# the mapper doesn't return a username. bail out with a redirect to
# the username picker.
await self._redirect_to_username_picker(
next_step_url = self._get_url_for_next_new_user_step(
attributes=attributes
)
if next_step_url:
await self._redirect_to_next_new_user_step(
auth_provider_id,
remote_user_id,
attributes,
client_redirect_url,
next_step_url,
extra_login_attributes,
)
@ -535,18 +532,53 @@ class SsoHandler:
)
return attributes
async def _redirect_to_username_picker(
def _get_url_for_next_new_user_step(
self,
attributes: Optional[UserAttributes] = None,
session: Optional[UsernameMappingSession] = None,
) -> bytes:
"""Returns the URL to redirect to for the next step of new user registration
Given attributes from the user mapping provider or a UsernameMappingSession,
returns the URL to redirect to for the next step of the registration flow.
Args:
attributes: the user attributes returned by the user mapping provider,
from before a UsernameMappingSession has begun.
session: an active UsernameMappingSession, possibly with some of its
attributes chosen by the user.
Returns:
The URL to redirect to, or an empty value if no redirect is necessary
"""
# Must provide either attributes or session, not both
assert (attributes is not None) != (session is not None)
if (attributes and attributes.localpart is None) or (
session and session.chosen_localpart is None
):
return b"/_synapse/client/pick_username/account_details"
elif self._consent_at_registration and not (
session and session.terms_accepted_version
):
return b"/_synapse/client/new_user_consent"
else:
return b"/_synapse/client/sso_register" if session else b""
async def _redirect_to_next_new_user_step(
self,
auth_provider_id: str,
remote_user_id: str,
attributes: UserAttributes,
client_redirect_url: str,
next_step_url: bytes,
extra_login_attributes: Optional[JsonDict],
) -> NoReturn:
"""Creates a UsernameMappingSession and redirects the browser
Called if the user mapping provider doesn't return a localpart for a new user.
Raises a RedirectException which redirects the browser to the username picker.
Called if the user mapping provider doesn't return complete information for a new user.
Raises a RedirectException which redirects the browser to a specified URL.
Args:
auth_provider_id: A unique identifier for this SSO provider, e.g.
@ -559,12 +591,15 @@ class SsoHandler:
client_redirect_url: The redirect URL passed in by the client, which we
will eventually redirect back to.
next_step_url: The URL to redirect to for the next step of the new user flow.
extra_login_attributes: An optional dictionary of extra
attributes to be provided to the client in the login response.
Raises:
RedirectException
"""
# TODO: If needed, allow using/looking up an existing session here.
session_id = random_string(16)
now = self._clock.time_msec()
session = UsernameMappingSession(
@ -575,13 +610,18 @@ class SsoHandler:
client_redirect_url=client_redirect_url,
expiry_time_ms=now + self._MAPPING_SESSION_VALIDITY_PERIOD_MS,
extra_login_attributes=extra_login_attributes,
# Treat the localpart returned by the user mapping provider as though
# it was chosen by the user. If it's None, it must be chosen eventually.
chosen_localpart=attributes.localpart,
# TODO: Consider letting the user mapping provider specify defaults for
# other user-chosen attributes.
)
self._username_mapping_sessions[session_id] = session
logger.info("Recorded registration session id %s", session_id)
# Set the cookie and redirect to the username picker
e = RedirectException(b"/_synapse/client/pick_username/account_details")
# Set the cookie and redirect to the next step
e = RedirectException(next_step_url)
e.cookies.append(
b"%s=%s; path=/"
% (USERNAME_MAPPING_SESSION_COOKIE_NAME, session_id.encode("ascii"))
@ -810,16 +850,9 @@ class SsoHandler:
)
session.emails_to_use = filtered_emails
# we may now need to collect consent from the user, in which case, redirect
# to the consent-extraction-unit
if self._consent_at_registration:
redirect_url = b"/_synapse/client/new_user_consent"
# otherwise, redirect to the completion page
else:
redirect_url = b"/_synapse/client/sso_register"
respond_with_redirect(request, redirect_url)
respond_with_redirect(
request, self._get_url_for_next_new_user_step(session=session)
)
async def handle_terms_accepted(
self, request: Request, session_id: str, terms_version: str
@ -847,8 +880,9 @@ class SsoHandler:
session.terms_accepted_version = terms_version
# we're done; now we can register the user
respond_with_redirect(request, b"/_synapse/client/sso_register")
respond_with_redirect(
request, self._get_url_for_next_new_user_step(session=session)
)
async def register_sso_user(self, request: Request, session_id: str) -> None:
"""Called once we have all the info we need to register a new user.

View file

@ -13,6 +13,7 @@
# limitations under the License.
import logging
from enum import Enum, auto
from typing import TYPE_CHECKING, Optional
if TYPE_CHECKING:
@ -21,6 +22,12 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
class MatchChange(Enum):
no_change = auto()
now_true = auto()
now_false = auto()
class StateDeltasHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
@ -31,18 +38,12 @@ class StateDeltasHandler:
event_id: Optional[str],
key_name: str,
public_value: str,
) -> Optional[bool]:
) -> MatchChange:
"""Given two events check if the `key_name` field in content changed
from not matching `public_value` to doing so.
For example, check if `history_visibility` (`key_name`) changed from
`shared` to `world_readable` (`public_value`).
Returns:
None if the field in the events either both match `public_value`
or if neither do, i.e. there has been no change.
True if it didn't match `public_value` but now does
False if it did match `public_value` but now doesn't
"""
prev_event = None
event = None
@ -54,7 +55,7 @@ class StateDeltasHandler:
if not event and not prev_event:
logger.debug("Neither event exists: %r %r", prev_event_id, event_id)
return None
return MatchChange.no_change
prev_value = None
value = None
@ -68,8 +69,8 @@ class StateDeltasHandler:
logger.debug("prev_value: %r -> value: %r", prev_value, value)
if value == public_value and prev_value != public_value:
return True
return MatchChange.now_true
elif value != public_value and prev_value == public_value:
return False
return MatchChange.now_false
else:
return None
return MatchChange.no_change
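Returning `MatchChange` instead of `Optional[bool]` removes a footgun: with the old API, `if change:` treated `None` (no change) and `False` (now false) the same way. A self-contained sketch of the tri-state comparison:

```python
from enum import Enum, auto
from typing import Optional


class MatchChange(Enum):
    no_change = auto()
    now_true = auto()
    now_false = auto()


def key_change(
    prev_value: Optional[str], value: Optional[str], public_value: str
) -> MatchChange:
    """Did `value` start or stop matching `public_value`?"""
    if value == public_value and prev_value != public_value:
        return MatchChange.now_true
    if value != public_value and prev_value == public_value:
        return MatchChange.now_false
    return MatchChange.no_change


print(key_change("shared", "world_readable", "world_readable"))  # now_true
print(key_change("world_readable", "shared", "world_readable"))  # now_false
print(key_change(None, None, "world_readable"))                  # no_change
```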

View file

@ -18,7 +18,7 @@ from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Tuple
from typing_extensions import Counter as CounterType
from synapse.api.constants import EventTypes, Membership
from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.metrics import event_processing_positions
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import JsonDict
@ -54,7 +54,7 @@ class StatsHandler:
# Guard to ensure we only process deltas one at a time
self._is_processing = False
if self.stats_enabled and hs.config.run_background_tasks:
if self.stats_enabled and hs.config.worker.run_background_tasks:
self.notifier.add_replication_callback(self.notify_new_event)
# We kick this off so that we don't have to wait for a change before
@ -254,7 +254,7 @@ class StatsHandler:
elif typ == EventTypes.Create:
room_state["is_federatable"] = (
event_content.get("m.federate", True) is True
event_content.get(EventContentFields.FEDERATE, True) is True
)
elif typ == EventTypes.JoinRules:
room_state["join_rules"] = event_content.get("join_rule")
@ -273,7 +273,9 @@ class StatsHandler:
elif typ == EventTypes.CanonicalAlias:
room_state["canonical_alias"] = event_content.get("alias")
elif typ == EventTypes.GuestAccess:
room_state["guest_access"] = event_content.get("guest_access")
room_state["guest_access"] = event_content.get(
EventContentFields.GUEST_ACCESS
)
for room_id, state in room_to_state_updates.items():
logger.debug("Updating room_stats_state for %s: %s", room_id, state)

View file

@ -505,10 +505,13 @@ class SyncHandler:
else:
limited = False
log_kv({"limited": limited})
if potential_recents:
recents = sync_config.filter_collection.filter_room_timeline(
potential_recents
)
log_kv({"recents_after_sync_filtering": len(recents)})
# We check if there are any state events, if there are then we pass
# all current state events to the filter_events function. This is to
@ -526,6 +529,7 @@ class SyncHandler:
recents,
always_include_ids=current_state_ids,
)
log_kv({"recents_after_visibility_filtering": len(recents)})
else:
recents = []
@ -566,10 +570,15 @@ class SyncHandler:
events, end_key = await self.store.get_recent_events_for_room(
room_id, limit=load_limit + 1, end_token=end_key
)
log_kv({"loaded_recents": len(events)})
loaded_recents = sync_config.filter_collection.filter_room_timeline(
events
)
log_kv({"loaded_recents_after_sync_filtering": len(loaded_recents)})
# We check if there are any state events, if there are then we pass
# all current state events to the filter_events function. This is to
# ensure that we always include current state in the timeline
@ -586,6 +595,9 @@ class SyncHandler:
loaded_recents,
always_include_ids=current_state_ids,
)
log_kv({"loaded_recents_after_client_filtering": len(loaded_recents)})
loaded_recents.extend(recents)
recents = loaded_recents
@ -1115,6 +1127,8 @@ class SyncHandler:
logger.debug("Fetching group data")
await self._generate_sync_entry_for_groups(sync_result_builder)
num_events = 0
# debug for https://github.com/matrix-org/synapse/issues/4422
for joined_room in sync_result_builder.joined:
room_id = joined_room.room_id
@ -1122,6 +1136,14 @@ class SyncHandler:
issue4422_logger.debug(
"Sync result for newly joined room %s: %r", room_id, joined_room
)
num_events += len(joined_room.timeline.events)
log_kv(
{
"joined_rooms_in_result": len(sync_result_builder.joined),
"events_in_result": num_events,
}
)
logger.debug("Sync response calculation complete")
return SyncResult(
@ -1466,6 +1488,7 @@ class SyncHandler:
if not sync_result_builder.full_state:
if since_token and not ephemeral_by_room and not account_data_by_room:
have_changed = await self._have_rooms_changed(sync_result_builder)
log_kv({"rooms_have_changed": have_changed})
if not have_changed:
tags_by_room = await self.store.get_updated_tags(
user_id, since_token.account_data_key
@ -1500,25 +1523,30 @@ class SyncHandler:
tags_by_room = await self.store.get_tags_for_user(user_id)
log_kv({"rooms_changed": len(room_changes.room_entries)})
room_entries = room_changes.room_entries
invited = room_changes.invited
knocked = room_changes.knocked
newly_joined_rooms = room_changes.newly_joined_rooms
newly_left_rooms = room_changes.newly_left_rooms
async def handle_room_entries(room_entry):
logger.debug("Generating room entry for %s", room_entry.room_id)
res = await self._generate_room_entry(
sync_result_builder,
ignored_users,
room_entry,
ephemeral=ephemeral_by_room.get(room_entry.room_id, []),
tags=tags_by_room.get(room_entry.room_id),
account_data=account_data_by_room.get(room_entry.room_id, {}),
always_include=sync_result_builder.full_state,
)
logger.debug("Generated room entry for %s", room_entry.room_id)
return res
async def handle_room_entries(room_entry: "RoomSyncResultBuilder"):
with start_active_span("generate_room_entry"):
set_tag("room_id", room_entry.room_id)
log_kv({"events": len(room_entry.events or [])})
logger.debug("Generating room entry for %s", room_entry.room_id)
res = await self._generate_room_entry(
sync_result_builder,
ignored_users,
room_entry,
ephemeral=ephemeral_by_room.get(room_entry.room_id, []),
tags=tags_by_room.get(room_entry.room_id),
account_data=account_data_by_room.get(room_entry.room_id, {}),
always_include=sync_result_builder.full_state,
)
logger.debug("Generated room entry for %s", room_entry.room_id)
return res
await concurrently_execute(handle_room_entries, room_entries, 10)
@ -1931,6 +1959,12 @@ class SyncHandler:
room_id = room_builder.room_id
since_token = room_builder.since_token
upto_token = room_builder.upto_token
log_kv(
{
"since_token": since_token,
"upto_token": upto_token,
}
)
batch = await self._load_filtered_recents(
room_id,
@ -1940,6 +1974,13 @@ class SyncHandler:
potential_recents=events,
newly_joined_room=newly_joined,
)
log_kv(
{
"batch_events": len(batch.events),
"prev_batch": batch.prev_batch,
"batch_limited": batch.limited,
}
)
# Note: `batch` can be both empty and limited here in the case where
# `_load_filtered_recents` can't find any events the user should see
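The sync changes above are pure instrumentation: wrap the per-room work in a span, tag it, and log counters at each filtering stage. A sketch of the pattern, with no-op stand-ins for the `synapse.logging.opentracing` helpers:

```python
from contextlib import contextmanager
from typing import Any, Dict, Iterator


# No-op stand-ins for the helpers in synapse.logging.opentracing.
def log_kv(kv: Dict[str, Any]) -> None:
    print("log_kv:", kv)


def set_tag(key: str, value: Any) -> None:
    print("set_tag:", key, value)


@contextmanager
def start_active_span(name: str) -> Iterator[None]:
    print("span start:", name)
    try:
        yield
    finally:
        print("span end:", name)


def generate_room_entry(room_id: str, events: list) -> None:
    with start_active_span("generate_room_entry"):
        set_tag("room_id", room_id)
        log_kv({"events": len(events)})
        # ... build the sync entry for the room ...


generate_room_entry("!room:example.org", ["$e1", "$e2"])
```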

View file

@ -53,7 +53,7 @@ class FollowerTypingHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.server_name = hs.config.server_name
self.server_name = hs.config.server.server_name
self.clock = hs.get_clock()
self.is_mine_id = hs.is_mine_id
@ -73,7 +73,7 @@ class FollowerTypingHandler:
self._room_typing: Dict[str, Set[str]] = {}
self._member_last_federation_poke: Dict[RoomMember, int] = {}
self.wheel_timer = WheelTimer(bucket_size=5000)
self.wheel_timer: WheelTimer[RoomMember] = WheelTimer(bucket_size=5000)
self._latest_room_serial = 0
self.clock.looping_call(self._handle_timeouts, 5000)

View file

@ -17,7 +17,7 @@ from typing import TYPE_CHECKING, Any, Dict, List, Optional
import synapse.metrics
from synapse.api.constants import EventTypes, HistoryVisibility, JoinRules, Membership
from synapse.handlers.state_deltas import StateDeltasHandler
from synapse.handlers.state_deltas import MatchChange, StateDeltasHandler
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.roommember import ProfileInfo
from synapse.types import JsonDict
@ -30,14 +30,26 @@ logger = logging.getLogger(__name__)
class UserDirectoryHandler(StateDeltasHandler):
"""Handles querying of and keeping updated the user_directory.
"""Handles queries and updates for the user_directory.
N.B.: ASSUMES IT IS THE ONLY THING THAT MODIFIES THE USER DIRECTORY
The user directory is filled with users who this server can see are joined to a
world_readable or publicly joinable room. We keep a database table up to date
by streaming changes of the current state and recalculating whether users should
be in the directory or not when necessary.
When a local user searches the user_directory, we report two kinds of users:
- users this server can see are joined to a world_readable or publicly
joinable room, and
- users belonging to a private room shared by that local user.
The two cases are tracked separately in the `users_in_public_rooms` and
`users_who_share_private_rooms` tables. Both kinds of users have their
username and avatar tracked in a `user_directory` table.
This handler has three responsibilities:
1. Forwarding requests to `/user_directory/search` to the UserDirectoryStore.
2. Providing hooks for the application to call when local users are added,
removed, or have their profile changed.
3. Listening for room state changes that indicate remote users have
joined or left a room, or that their profile has changed.
"""
def __init__(self, hs: "HomeServer"):
@ -130,7 +142,7 @@ class UserDirectoryHandler(StateDeltasHandler):
user_id, profile.display_name, profile.avatar_url
)
async def handle_user_deactivated(self, user_id: str) -> None:
async def handle_local_user_deactivated(self, user_id: str) -> None:
"""Called when a user ID is deactivated"""
# FIXME(#3714): We should probably do this in the same worker as all
# the other changes.
@ -196,7 +208,7 @@ class UserDirectoryHandler(StateDeltasHandler):
public_value=Membership.JOIN,
)
if change is False:
if change is MatchChange.now_false:
# Need to check if the server left the room entirely, if so
# we might need to remove all the users in that room
is_in_room = await self.store.is_host_joined(
@ -219,14 +231,14 @@ class UserDirectoryHandler(StateDeltasHandler):
is_support = await self.store.is_support_user(state_key)
if not is_support:
if change is None:
if change is MatchChange.no_change:
# Handle any profile changes
await self._handle_profile_change(
state_key, room_id, prev_event_id, event_id
)
continue
if change: # The user joined
if change is MatchChange.now_true: # The user joined
event = await self.store.get_event(event_id, allow_none=True)
# It isn't expected for this event to not exist, but we
# don't want the entire background process to break.
@ -263,14 +275,14 @@ class UserDirectoryHandler(StateDeltasHandler):
logger.debug("Handling change for %s: %s", typ, room_id)
if typ == EventTypes.RoomHistoryVisibility:
change = await self._get_key_change(
publicness = await self._get_key_change(
prev_event_id,
event_id,
key_name="history_visibility",
public_value=HistoryVisibility.WORLD_READABLE,
)
elif typ == EventTypes.JoinRules:
change = await self._get_key_change(
publicness = await self._get_key_change(
prev_event_id,
event_id,
key_name="join_rule",
@ -278,9 +290,7 @@ class UserDirectoryHandler(StateDeltasHandler):
)
else:
raise Exception("Invalid event type")
# If change is None, no change. True => become world_readable/public,
# False => was world_readable/public
if change is None:
if publicness is MatchChange.no_change:
logger.debug("No change")
return
@ -290,13 +300,13 @@ class UserDirectoryHandler(StateDeltasHandler):
room_id
)
logger.debug("Change: %r, is_public: %r", change, is_public)
logger.debug("Change: %r, publicness: %r", publicness, is_public)
if change and not is_public:
if publicness is MatchChange.now_true and not is_public:
# If we became world readable but room isn't currently public then
# we ignore the change
return
elif not change and is_public:
elif publicness is MatchChange.now_false and is_public:
# If we stopped being world readable but are still public,
# ignore the change
return

View file

@ -572,6 +572,25 @@ def parse_string_from_args(
return strings[0]
@overload
def parse_json_value_from_request(request: Request) -> JsonDict:
...
@overload
def parse_json_value_from_request(
request: Request, allow_empty_body: Literal[False]
) -> JsonDict:
...
@overload
def parse_json_value_from_request(
request: Request, allow_empty_body: bool = False
) -> Optional[JsonDict]:
...
def parse_json_value_from_request(
request: Request, allow_empty_body: bool = False
) -> Optional[JsonDict]:
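The `@overload` stack lets mypy give callers a precise return type: omit `allow_empty_body` (or pass a literal `False`) and you get a plain `JsonDict`; pass a runtime-variable bool and you must handle `None`. A minimal sketch of the same trick with a hypothetical `parse_json` helper (uses `typing.Literal`, available on Python 3.8+; older versions need `typing_extensions`):

```python
import json
from typing import Literal, Optional, overload


@overload
def parse_json(body: bytes) -> dict: ...
@overload
def parse_json(body: bytes, allow_empty_body: Literal[False]) -> dict: ...
@overload
def parse_json(body: bytes, allow_empty_body: bool = False) -> Optional[dict]: ...


def parse_json(body: bytes, allow_empty_body: bool = False) -> Optional[dict]:
    if not body:
        if allow_empty_body:
            return None
        raise ValueError("empty body")
    return json.loads(body)


d = parse_json(b'{"a": 1}')  # mypy infers dict, with no Optional to narrow
print(d["a"])
```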

View file

@ -384,7 +384,7 @@ class SynapseRequest(Request):
# authenticated (e.g. and admin is puppetting a user) then we log both.
requester, authenticated_entity = self.get_authenticated_entity()
if authenticated_entity:
requester = f"{authenticated_entity}.{requester}"
requester = f"{authenticated_entity}|{requester}"
self.site.access_logger.log(
log_level,

View file

@ -236,8 +236,17 @@ except ImportError:
try:
from rust_python_jaeger_reporter import Reporter
# jaeger-client 4.7.0 requires that reporters inherit from BaseReporter, which
# didn't exist before that version.
try:
from jaeger_client.reporter import BaseReporter
except ImportError:
class BaseReporter: # type: ignore[no-redef]
pass
@attr.s(slots=True, frozen=True)
class _WrappedRustReporter:
class _WrappedRustReporter(BaseReporter):
"""Wrap the reporter to ensure `report_span` never throws."""
_reporter = attr.ib(type=Reporter, default=attr.Factory(Reporter))
@ -374,7 +383,7 @@ def init_tracer(hs: "HomeServer"):
config = JaegerConfig(
config=hs.config.jaeger_config,
service_name=f"{hs.config.server_name} {hs.get_instance_name()}",
service_name=f"{hs.config.server.server_name} {hs.get_instance_name()}",
scope_manager=LogContextScopeManager(hs.config),
metrics_factory=PrometheusMetricsFactory(),
)
@ -382,6 +391,7 @@ def init_tracer(hs: "HomeServer"):
# If we have the rust jaeger reporter available let's use that.
if RustReporter:
logger.info("Using rust_python_jaeger_reporter library")
assert config.sampler is not None
tracer = config.create_tracer(RustReporter(), config.sampler)
opentracing.set_global_tracer(tracer)
else:

View file

@ -178,7 +178,7 @@ class ModuleApi:
@property
def public_baseurl(self) -> str:
"""The configured public base URL for this homeserver."""
return self._hs.config.public_baseurl
return self._hs.config.server.public_baseurl
@property
def email_app_name(self) -> str:
@ -640,7 +640,7 @@ class ModuleApi:
if desc is None:
desc = f.__name__
if self._hs.config.run_background_tasks or run_on_all_instances:
if self._hs.config.worker.run_background_tasks or run_on_all_instances:
self._clock.looping_call(
run_as_background_process,
msec,

View file

@ -130,7 +130,7 @@ class Mailer:
"""
params = {"token": token, "client_secret": client_secret, "sid": sid}
link = (
self.hs.config.public_baseurl
self.hs.config.server.public_baseurl
+ "_synapse/client/password_reset/email/submit_token?%s"
% urllib.parse.urlencode(params)
)
@ -140,7 +140,7 @@ class Mailer:
await self.send_email(
email_address,
self.email_subjects.password_reset
% {"server_name": self.hs.config.server_name},
% {"server_name": self.hs.config.server.server_name},
template_vars,
)
@ -160,7 +160,7 @@ class Mailer:
"""
params = {"token": token, "client_secret": client_secret, "sid": sid}
link = (
self.hs.config.public_baseurl
self.hs.config.server.public_baseurl
+ "_matrix/client/unstable/registration/email/submit_token?%s"
% urllib.parse.urlencode(params)
)
@ -170,7 +170,7 @@ class Mailer:
await self.send_email(
email_address,
self.email_subjects.email_validation
% {"server_name": self.hs.config.server_name},
% {"server_name": self.hs.config.server.server_name},
template_vars,
)
@ -191,7 +191,7 @@ class Mailer:
"""
params = {"token": token, "client_secret": client_secret, "sid": sid}
link = (
self.hs.config.public_baseurl
self.hs.config.server.public_baseurl
+ "_matrix/client/unstable/add_threepid/email/submit_token?%s"
% urllib.parse.urlencode(params)
)
@ -201,7 +201,7 @@ class Mailer:
await self.send_email(
email_address,
self.email_subjects.email_validation
% {"server_name": self.hs.config.server_name},
% {"server_name": self.hs.config.server.server_name},
template_vars,
)
@ -258,7 +258,7 @@ class Mailer:
# actually sort our so-called rooms_in_order list, most recent room first
rooms_in_order.sort(key=lambda r: -(notifs_by_room[r][-1]["received_ts"] or 0))
rooms = []
rooms: List[Dict[str, Any]] = []
for r in rooms_in_order:
roomvars = await self._get_room_vars(
@ -362,6 +362,7 @@ class Mailer:
"notifs": [],
"invite": is_invite,
"link": self._make_room_link(room_id),
"avatar_url": await self._get_room_avatar(room_state_ids),
}
if not is_invite:
@ -393,6 +394,27 @@ class Mailer:
return room_vars
async def _get_room_avatar(
self,
room_state_ids: StateMap[str],
) -> Optional[str]:
"""
Retrieve the avatar url for this room---if it exists.
Args:
room_state_ids: The event IDs of the current room state.
Returns:
room's avatar url if it's present and a string; otherwise None.
"""
event_id = room_state_ids.get((EventTypes.RoomAvatar, ""))
if event_id:
ev = await self.store.get_event(event_id)
url = ev.content.get("url")
if isinstance(url, str):
return url
return None
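`_get_room_avatar` is a plain state-map lookup: current state is keyed by `(event_type, state_key)`, and the avatar lives at `("m.room.avatar", "")`. A runnable sketch with dicts standing in for the state map and event store:

```python
import asyncio
from typing import Dict, Optional, Tuple

StateMap = Dict[Tuple[str, str], str]  # (event_type, state_key) -> event_id


async def get_room_avatar(
    state_ids: StateMap, events: Dict[str, dict]
) -> Optional[str]:
    event_id = state_ids.get(("m.room.avatar", ""))
    if event_id:
        url = events[event_id].get("content", {}).get("url")
        # Reject anything that isn't a plain string: remote servers can
        # put arbitrary JSON in event content.
        if isinstance(url, str):
            return url
    return None


state = {("m.room.avatar", ""): "$ev1"}
events = {"$ev1": {"content": {"url": "mxc://example.org/abc"}}}
print(asyncio.run(get_room_avatar(state, events)))  # mxc://example.org/abc
```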
async def _get_notif_vars(
self,
notif: Dict[str, Any],
@ -830,7 +852,7 @@ class Mailer:
# XXX: make r0 once API is stable
return "%s_matrix/client/unstable/pushers/remove?%s" % (
self.hs.config.public_baseurl,
self.hs.config.server.public_baseurl,
urllib.parse.urlencode(params),
)

View file

@ -73,7 +73,7 @@ class DirectTcpReplicationClientFactory(ReconnectingClientFactory):
):
self.client_name = client_name
self.command_handler = command_handler
self.server_name = hs.config.server_name
self.server_name = hs.config.server.server_name
self.hs = hs
self._clock = hs.get_clock() # As self.clock is defined in super class

View file

@ -168,7 +168,7 @@ class ReplicationCommandHandler:
continue
# Only add any other streams if we're on master.
if hs.config.worker_app is not None:
if hs.config.worker.worker_app is not None:
continue
if stream.NAME == FederationStream.NAME and hs.config.send_federation:
@ -222,7 +222,7 @@ class ReplicationCommandHandler:
},
)
self._is_master = hs.config.worker_app is None
self._is_master = hs.config.worker.worker_app is None
self._federation_sender = None
if self._is_master and not hs.config.send_federation:

View file

@ -40,7 +40,7 @@ class ReplicationStreamProtocolFactory(Factory):
def __init__(self, hs):
self.command_handler = hs.get_tcp_replication()
self.clock = hs.get_clock()
self.server_name = hs.config.server_name
self.server_name = hs.config.server.server_name
# If we've created a `ReplicationStreamProtocolFactory` then we're
# almost certainly registering a replication listener, so let's ensure

View file

@ -42,7 +42,7 @@ class FederationStream(Stream):
ROW_TYPE = FederationStreamRow
def __init__(self, hs: "HomeServer"):
if hs.config.worker_app is None:
if hs.config.worker.worker_app is None:
# master process: get updates from the FederationRemoteSendQueue.
# (if the master is configured to send federation itself, federation_sender
# will be a real FederationSender, which has stubs for current_token and

View file

@ -0,0 +1,17 @@
[
{
"provider_name": "Twitter",
"provider_url": "http://www.twitter.com/",
"endpoints": [
{
"schemes": [
"https://twitter.com/*/status/*",
"https://*.twitter.com/*/status/*",
"https://twitter.com/*/moments/*",
"https://*.twitter.com/*/moments/*"
],
"url": "https://publish.twitter.com/oembed"
}
]
}
]
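Each provider entry maps glob-style URL schemes to an oEmbed endpoint: a URL matching any scheme is looked up via the endpoint's `url`. A sketch of one plausible way to do the matching; Synapse's actual glob-to-regex translation may differ:

```python
import re
from urllib.parse import urlencode


def glob_to_regex(glob: str) -> "re.Pattern[str]":
    # Escape everything, then turn each escaped "*" back into a wildcard.
    return re.compile("^" + re.escape(glob).replace(r"\*", ".*") + "$")


schemes = [
    "https://twitter.com/*/status/*",
    "https://*.twitter.com/*/status/*",
]
patterns = [glob_to_regex(s) for s in schemes]

url = "https://twitter.com/matrixdotorg/status/1234"
if any(p.match(url) for p in patterns):
    print("https://publish.twitter.com/oembed?" + urlencode({"url": url}))
```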

View file

@ -247,7 +247,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
RegistrationTokenRestServlet(hs).register(http_server)
# Some servlets only get registered for the main process.
if hs.config.worker_app is None:
if hs.config.worker.worker_app is None:
SendServerNoticeServlet(hs).register(http_server)

View file

@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, Optional, Tuple
from typing import TYPE_CHECKING, Awaitable, Optional, Tuple
from synapse.api.constants import EventTypes
from synapse.api.errors import NotFoundError, SynapseError
@ -101,7 +101,9 @@ class SendServerNoticeServlet(RestServlet):
return 200, {"event_id": event.event_id}
def on_PUT(self, request: SynapseRequest, txn_id: str) -> Tuple[int, JsonDict]:
def on_PUT(
self, request: SynapseRequest, txn_id: str
) -> Awaitable[Tuple[int, JsonDict]]:
return self.txns.fetch_or_execute_request(
request, self.on_POST, request, txn_id
)

View file

@ -16,7 +16,7 @@
"""
import logging
import re
from typing import Iterable, Pattern
from typing import Any, Awaitable, Callable, Iterable, Pattern, Tuple, TypeVar, cast
from synapse.api.errors import InteractiveAuthIncompleteError
from synapse.api.urls import CLIENT_API_PREFIX
@ -76,7 +76,10 @@ def set_timeline_upper_limit(filter_json: JsonDict, filter_timeline_limit: int)
)
def interactive_auth_handler(orig):
C = TypeVar("C", bound=Callable[..., Awaitable[Tuple[int, JsonDict]]])
def interactive_auth_handler(orig: C) -> C:
"""Wraps an on_POST method to handle InteractiveAuthIncompleteErrors
Takes an on_POST method which returns an Awaitable (errcode, body) response

@ -91,10 +94,10 @@ def interactive_auth_handler(orig):
await self.auth_handler.check_auth
"""
async def wrapped(*args, **kwargs):
async def wrapped(*args: Any, **kwargs: Any) -> Tuple[int, JsonDict]:
try:
return await orig(*args, **kwargs)
except InteractiveAuthIncompleteError as e:
return 401, e.result
return wrapped
return cast(C, wrapped)
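The decorator typing follows a standard recipe: bind a `TypeVar` to the callable's full type and `cast` the wrapper back to it, so the decorated method keeps its exact signature for the type checker. A self-contained sketch with a stand-in exception in place of `InteractiveAuthIncompleteError`:

```python
import asyncio
from functools import wraps
from typing import Any, Awaitable, Callable, Tuple, TypeVar, cast

JsonDict = dict

C = TypeVar("C", bound=Callable[..., Awaitable[Tuple[int, JsonDict]]])


class AuthIncomplete(Exception):  # stand-in for InteractiveAuthIncompleteError
    def __init__(self, result: JsonDict):
        self.result = result


def interactive_auth_handler(orig: C) -> C:
    @wraps(orig)
    async def wrapped(*args: Any, **kwargs: Any) -> Tuple[int, JsonDict]:
        try:
            return await orig(*args, **kwargs)
        except AuthIncomplete as e:
            return 401, e.result

    # `wrapped` is only known as Callable[..., ...]; the cast tells the
    # type checker it still has orig's exact signature.
    return cast(C, wrapped)


class Servlet:
    @interactive_auth_handler
    async def on_POST(self, request: str) -> Tuple[int, JsonDict]:
        raise AuthIncomplete({"flows": []})


print(asyncio.run(Servlet().on_POST("req")))  # (401, {'flows': []})
```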

View file

@ -16,9 +16,11 @@
import logging
import random
from http import HTTPStatus
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Optional, Tuple
from urllib.parse import urlparse
from twisted.web.server import Request
from synapse.api.constants import LoginType
from synapse.api.errors import (
Codes,
@ -28,15 +30,17 @@ from synapse.api.errors import (
)
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.handlers.ui_auth import UIAuthSessionDataConstants
from synapse.http.server import finish_request, respond_with_html
from synapse.http.server import HttpServer, finish_request, respond_with_html
from synapse.http.servlet import (
RestServlet,
assert_params_in_dict,
parse_json_object_from_request,
parse_string,
)
from synapse.http.site import SynapseRequest
from synapse.metrics import threepid_send_requests
from synapse.push.mailer import Mailer
from synapse.types import JsonDict
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.stringutils import assert_valid_client_secret, random_string
from synapse.util.threepids import check_3pid_allowed, validate_email
@ -68,7 +72,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
template_text=self.config.email_password_reset_template_text,
)
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
if self.config.threepid_behaviour_email == ThreepidBehaviour.OFF:
if self.config.local_threepid_handling_disabled_due_to_email_config:
logger.warning(
@ -159,7 +163,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
class PasswordRestServlet(RestServlet):
PATTERNS = client_patterns("/account/password$")
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.hs = hs
self.auth = hs.get_auth()
@ -169,7 +173,7 @@ class PasswordRestServlet(RestServlet):
self._set_password_handler = hs.get_set_password_handler()
@interactive_auth_handler
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
body = parse_json_object_from_request(request)
# we do basic sanity checks here because the auth layer will store these
@ -190,6 +194,7 @@ class PasswordRestServlet(RestServlet):
#
# In the second case, we require a password to confirm their identity.
requester = None
if self.auth.has_access_token(request):
requester = await self.auth.get_user_by_req(request)
try:
@ -206,16 +211,15 @@ class PasswordRestServlet(RestServlet):
# If a password is available now, hash the provided password and
# store it for later.
if new_password:
password_hash = await self.auth_handler.hash(new_password)
new_password_hash = await self.auth_handler.hash(new_password)
await self.auth_handler.set_session_data(
e.session_id,
UIAuthSessionDataConstants.PASSWORD_HASH,
password_hash,
new_password_hash,
)
raise
user_id = requester.user.to_string()
else:
requester = None
try:
result, params, session_id = await self.auth_handler.check_ui_auth(
[[LoginType.EMAIL_IDENTITY]],
@ -230,11 +234,11 @@ class PasswordRestServlet(RestServlet):
# If a password is available now, hash the provided password and
# store it for later.
if new_password:
password_hash = await self.auth_handler.hash(new_password)
new_password_hash = await self.auth_handler.hash(new_password)
await self.auth_handler.set_session_data(
e.session_id,
UIAuthSessionDataConstants.PASSWORD_HASH,
password_hash,
new_password_hash,
)
raise
@ -264,7 +268,7 @@ class PasswordRestServlet(RestServlet):
# If we have a password in this request, prefer it. Otherwise, use the
# password hash from an earlier request.
if new_password:
password_hash = await self.auth_handler.hash(new_password)
password_hash: Optional[str] = await self.auth_handler.hash(new_password)
elif session_id is not None:
password_hash = await self.auth_handler.get_session_data(
session_id, UIAuthSessionDataConstants.PASSWORD_HASH, None
@ -288,7 +292,7 @@ class PasswordRestServlet(RestServlet):
class DeactivateAccountRestServlet(RestServlet):
PATTERNS = client_patterns("/account/deactivate$")
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.hs = hs
self.auth = hs.get_auth()
@ -296,7 +300,7 @@ class DeactivateAccountRestServlet(RestServlet):
self._deactivate_account_handler = hs.get_deactivate_account_handler()
@interactive_auth_handler
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
body = parse_json_object_from_request(request)
erase = body.get("erase", False)
if not isinstance(erase, bool):
@ -338,7 +342,7 @@ class DeactivateAccountRestServlet(RestServlet):
class EmailThreepidRequestTokenRestServlet(RestServlet):
PATTERNS = client_patterns("/account/3pid/email/requestToken$")
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.hs = hs
self.config = hs.config
@ -353,7 +357,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
template_text=self.config.email_add_threepid_template_text,
)
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
if self.config.threepid_behaviour_email == ThreepidBehaviour.OFF:
if self.config.local_threepid_handling_disabled_due_to_email_config:
logger.warning(
@ -449,7 +453,7 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
self.store = self.hs.get_datastore()
self.identity_handler = hs.get_identity_handler()
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
body = parse_json_object_from_request(request)
assert_params_in_dict(
body, ["client_secret", "country", "phone_number", "send_attempt"]
@ -525,11 +529,7 @@ class AddThreepidEmailSubmitTokenServlet(RestServlet):
"/add_threepid/email/submit_token$", releases=(), unstable=True
)
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): server
"""
def __init__(self, hs: "HomeServer"):
super().__init__()
self.config = hs.config
self.clock = hs.get_clock()
@ -539,7 +539,7 @@ class AddThreepidEmailSubmitTokenServlet(RestServlet):
self.config.email_add_threepid_template_failure_html
)
async def on_GET(self, request):
async def on_GET(self, request: Request) -> None:
if self.config.threepid_behaviour_email == ThreepidBehaviour.OFF:
if self.config.local_threepid_handling_disabled_due_to_email_config:
logger.warning(
@ -596,18 +596,14 @@ class AddThreepidMsisdnSubmitTokenServlet(RestServlet):
"/add_threepid/msisdn/submit_token$", releases=(), unstable=True
)
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): server
"""
def __init__(self, hs: "HomeServer"):
super().__init__()
self.config = hs.config
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.identity_handler = hs.get_identity_handler()
async def on_POST(self, request):
async def on_POST(self, request: Request) -> Tuple[int, JsonDict]:
if not self.config.account_threepid_delegate_msisdn:
raise SynapseError(
400,
@ -632,7 +628,7 @@ class AddThreepidMsisdnSubmitTokenServlet(RestServlet):
class ThreepidRestServlet(RestServlet):
PATTERNS = client_patterns("/account/3pid$")
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.hs = hs
self.identity_handler = hs.get_identity_handler()
@ -640,14 +636,14 @@ class ThreepidRestServlet(RestServlet):
self.auth_handler = hs.get_auth_handler()
self.datastore = self.hs.get_datastore()
async def on_GET(self, request):
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
threepids = await self.datastore.user_get_threepids(requester.user.to_string())
return 200, {"threepids": threepids}
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
if not self.hs.config.enable_3pid_changes:
raise SynapseError(
400, "3PID changes are disabled on this server", Codes.FORBIDDEN
@ -688,7 +684,7 @@ class ThreepidRestServlet(RestServlet):
class ThreepidAddRestServlet(RestServlet):
PATTERNS = client_patterns("/account/3pid/add$")
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.hs = hs
self.identity_handler = hs.get_identity_handler()
@ -696,7 +692,7 @@ class ThreepidAddRestServlet(RestServlet):
self.auth_handler = hs.get_auth_handler()
@interactive_auth_handler
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
if not self.hs.config.enable_3pid_changes:
raise SynapseError(
400, "3PID changes are disabled on this server", Codes.FORBIDDEN
@ -738,13 +734,13 @@ class ThreepidAddRestServlet(RestServlet):
class ThreepidBindRestServlet(RestServlet):
PATTERNS = client_patterns("/account/3pid/bind$")
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.hs = hs
self.identity_handler = hs.get_identity_handler()
self.auth = hs.get_auth()
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
body = parse_json_object_from_request(request)
assert_params_in_dict(body, ["id_server", "sid", "client_secret"])
@ -767,14 +763,14 @@ class ThreepidBindRestServlet(RestServlet):
class ThreepidUnbindRestServlet(RestServlet):
PATTERNS = client_patterns("/account/3pid/unbind$")
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.hs = hs
self.identity_handler = hs.get_identity_handler()
self.auth = hs.get_auth()
self.datastore = self.hs.get_datastore()
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
"""Unbind the given 3pid from a specific identity server, or identity servers that are
known to have this 3pid bound
"""
@ -798,13 +794,13 @@ class ThreepidUnbindRestServlet(RestServlet):
class ThreepidDeleteRestServlet(RestServlet):
PATTERNS = client_patterns("/account/3pid/delete$")
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.hs = hs
self.auth = hs.get_auth()
self.auth_handler = hs.get_auth_handler()
async def on_POST(self, request):
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
if not self.hs.config.enable_3pid_changes:
raise SynapseError(
400, "3PID changes are disabled on this server", Codes.FORBIDDEN
@ -835,7 +831,7 @@ class ThreepidDeleteRestServlet(RestServlet):
return 200, {"id_server_unbind_result": id_server_unbind_result}
def assert_valid_next_link(hs: "HomeServer", next_link: str):
def assert_valid_next_link(hs: "HomeServer", next_link: str) -> None:
"""
Raises a SynapseError if a given next_link value is invalid
@ -877,11 +873,11 @@ def assert_valid_next_link(hs: "HomeServer", next_link: str):
class WhoamiRestServlet(RestServlet):
PATTERNS = client_patterns("/account/whoami$")
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.auth = hs.get_auth()
async def on_GET(self, request):
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
response = {"user_id": requester.user.to_string()}
@ -894,7 +890,7 @@ class WhoamiRestServlet(RestServlet):
return 200, response
def register_servlets(hs, http_server):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
EmailPasswordRequestTokenRestServlet(hs).register(http_server)
PasswordRestServlet(hs).register(http_server)
DeactivateAccountRestServlet(hs).register(http_server)

View file

@ -13,12 +13,19 @@
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import AuthError, NotFoundError, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict
from ._base import client_patterns
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@ -32,13 +39,15 @@ class AccountDataServlet(RestServlet):
"/user/(?P<user_id>[^/]*)/account_data/(?P<account_data_type>[^/]*)"
)
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.handler = hs.get_account_data_handler()
async def on_PUT(self, request, user_id, account_data_type):
async def on_PUT(
self, request: SynapseRequest, user_id: str, account_data_type: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
if user_id != requester.user.to_string():
raise AuthError(403, "Cannot add account data for other users.")
@ -49,7 +58,9 @@ class AccountDataServlet(RestServlet):
return 200, {}
async def on_GET(self, request, user_id, account_data_type):
async def on_GET(
self, request: SynapseRequest, user_id: str, account_data_type: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
if user_id != requester.user.to_string():
raise AuthError(403, "Cannot get account data for other users.")
@ -76,13 +87,19 @@ class RoomAccountDataServlet(RestServlet):
"/account_data/(?P<account_data_type>[^/]*)"
)
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.handler = hs.get_account_data_handler()
async def on_PUT(self, request, user_id, room_id, account_data_type):
async def on_PUT(
self,
request: SynapseRequest,
user_id: str,
room_id: str,
account_data_type: str,
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
if user_id != requester.user.to_string():
raise AuthError(403, "Cannot add account data for other users.")
@ -102,7 +119,13 @@ class RoomAccountDataServlet(RestServlet):
return 200, {}
async def on_GET(self, request, user_id, room_id, account_data_type):
async def on_GET(
self,
request: SynapseRequest,
user_id: str,
room_id: str,
account_data_type: str,
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
if user_id != requester.user.to_string():
raise AuthError(403, "Cannot get account data for other users.")
@ -117,6 +140,6 @@ class RoomAccountDataServlet(RestServlet):
return 200, event
def register_servlets(hs, http_server):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
AccountDataServlet(hs).register(http_server)
RoomAccountDataServlet(hs).register(http_server)

View file

@ -68,7 +68,10 @@ class AuthRestServlet(RestServlet):
html = self.terms_template.render(
session=session,
terms_url="%s_matrix/consent?v=%s"
% (self.hs.config.public_baseurl, self.hs.config.user_consent_version),
% (
self.hs.config.server.public_baseurl,
self.hs.config.user_consent_version,
),
myurl="%s/r0/auth/%s/fallback/web"
% (CLIENT_API_PREFIX, LoginType.TERMS),
)
@ -135,7 +138,7 @@ class AuthRestServlet(RestServlet):
session=session,
terms_url="%s_matrix/consent?v=%s"
% (
self.hs.config.public_baseurl,
self.hs.config.server.public_baseurl,
self.hs.config.user_consent_version,
),
myurl="%s/r0/auth/%s/fallback/web"

View file

@ -15,7 +15,7 @@
import logging
from functools import wraps
from typing import TYPE_CHECKING, Optional, Tuple
from typing import TYPE_CHECKING, Any, Awaitable, Callable, Optional, Tuple
from twisted.web.server import Request
@ -43,14 +43,18 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
def _validate_group_id(f):
def _validate_group_id(
f: Callable[..., Awaitable[Tuple[int, JsonDict]]]
) -> Callable[..., Awaitable[Tuple[int, JsonDict]]]:
"""Wrapper to validate the form of the group ID.
Can be applied to any on_FOO methods that accepts a group ID as a URL parameter.
"""
@wraps(f)
def wrapper(self, request: Request, group_id: str, *args, **kwargs):
def wrapper(
self: RestServlet, request: Request, group_id: str, *args: Any, **kwargs: Any
) -> Awaitable[Tuple[int, JsonDict]]:
if not GroupID.is_valid(group_id):
raise SynapseError(400, "%s is not a legal group ID" % (group_id,))
@ -156,7 +160,7 @@ class GroupSummaryRoomsCatServlet(RestServlet):
group_id: str,
category_id: Optional[str],
room_id: str,
):
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
requester_user_id = requester.user.to_string()
@ -188,7 +192,7 @@ class GroupSummaryRoomsCatServlet(RestServlet):
@_validate_group_id
async def on_DELETE(
self, request: SynapseRequest, group_id: str, category_id: str, room_id: str
):
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
requester_user_id = requester.user.to_string()
@ -451,7 +455,7 @@ class GroupSummaryUsersRoleServlet(RestServlet):
@_validate_group_id
async def on_DELETE(
self, request: SynapseRequest, group_id: str, role_id: str, user_id: str
):
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
requester_user_id = requester.user.to_string()
@ -674,7 +678,7 @@ class GroupAdminRoomsConfigServlet(RestServlet):
@_validate_group_id
async def on_PUT(
self, request: SynapseRequest, group_id: str, room_id: str, config_key: str
):
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
requester_user_id = requester.user.to_string()
@ -706,7 +710,7 @@ class GroupAdminUsersInviteServlet(RestServlet):
@_validate_group_id
async def on_PUT(
self, request: SynapseRequest, group_id, user_id
self, request: SynapseRequest, group_id: str, user_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
requester_user_id = requester.user.to_string()
@ -738,7 +742,7 @@ class GroupAdminUsersKickServlet(RestServlet):
@_validate_group_id
async def on_PUT(
self, request: SynapseRequest, group_id, user_id
self, request: SynapseRequest, group_id: str, user_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
requester_user_id = requester.user.to_string()

View file

@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
from typing import TYPE_CHECKING, Awaitable, Dict, List, Optional, Tuple
from twisted.web.server import Request
@ -96,7 +96,9 @@ class KnockRoomAliasServlet(RestServlet):
return 200, {"room_id": room_id}
def on_PUT(self, request: Request, room_identifier: str, txn_id: str):
def on_PUT(
self, request: Request, room_identifier: str, txn_id: str
) -> Awaitable[Tuple[int, JsonDict]]:
set_tag("txn_id", txn_id)
return self.txns.fetch_or_execute_request(

View file

@ -79,7 +79,6 @@ class LoginRestServlet(RestServlet):
self.saml2_enabled = hs.config.saml2_enabled
self.cas_enabled = hs.config.cas_enabled
self.oidc_enabled = hs.config.oidc_enabled
self._msc2858_enabled = hs.config.experimental.msc2858_enabled
self._msc2918_enabled = hs.config.access_token_lifetime is not None
self.auth = hs.get_auth()
@ -94,14 +93,14 @@ class LoginRestServlet(RestServlet):
self._address_ratelimiter = Ratelimiter(
store=hs.get_datastore(),
clock=hs.get_clock(),
rate_hz=self.hs.config.rc_login_address.per_second,
burst_count=self.hs.config.rc_login_address.burst_count,
rate_hz=self.hs.config.ratelimiting.rc_login_address.per_second,
burst_count=self.hs.config.ratelimiting.rc_login_address.burst_count,
)
self._account_ratelimiter = Ratelimiter(
store=hs.get_datastore(),
clock=hs.get_clock(),
rate_hz=self.hs.config.rc_login_account.per_second,
burst_count=self.hs.config.rc_login_account.burst_count,
rate_hz=self.hs.config.ratelimiting.rc_login_account.per_second,
burst_count=self.hs.config.ratelimiting.rc_login_account.burst_count,
)
# ensure the CAS/SAML/OIDC handlers are loaded on this worker instance.
@ -111,7 +110,7 @@ class LoginRestServlet(RestServlet):
_load_sso_handlers(hs)
def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
flows = []
flows: List[JsonDict] = []
if self.jwt_enabled:
flows.append({"type": LoginRestServlet.JWT_TYPE})
flows.append({"type": LoginRestServlet.JWT_TYPE_DEPRECATED})
@ -122,25 +121,15 @@ class LoginRestServlet(RestServlet):
flows.append({"type": LoginRestServlet.CAS_TYPE})
if self.cas_enabled or self.saml2_enabled or self.oidc_enabled:
sso_flow: JsonDict = {
"type": LoginRestServlet.SSO_TYPE,
"identity_providers": [
_get_auth_flow_dict_for_idp(
idp,
)
for idp in self._sso_handler.get_identity_providers().values()
],
}
if self._msc2858_enabled:
# backwards-compatibility support for clients which don't
# support the stable API yet
sso_flow["org.matrix.msc2858.identity_providers"] = [
_get_auth_flow_dict_for_idp(idp, use_unstable_brands=True)
for idp in self._sso_handler.get_identity_providers().values()
]
flows.append(sso_flow)
flows.append(
{
"type": LoginRestServlet.SSO_TYPE,
"identity_providers": [
_get_auth_flow_dict_for_idp(idp)
for idp in self._sso_handler.get_identity_providers().values()
],
}
)
# While it's valid for us to advertise this login type generally,
# synapse currently only gives out these tokens as part of the
@ -433,9 +422,7 @@ class LoginRestServlet(RestServlet):
return result
def _get_auth_flow_dict_for_idp(
idp: SsoIdentityProvider, use_unstable_brands: bool = False
) -> JsonDict:
def _get_auth_flow_dict_for_idp(idp: SsoIdentityProvider) -> JsonDict:
"""Return an entry for the login flow dict
Returns an entry suitable for inclusion in "identity_providers" in the
@ -443,17 +430,12 @@ def _get_auth_flow_dict_for_idp(
Args:
idp: the identity provider to describe
use_unstable_brands: whether we should use brand identifiers suitable
for the unstable API
"""
e: JsonDict = {"id": idp.idp_id, "name": idp.idp_name}
if idp.idp_icon:
e["icon"] = idp.idp_icon
if idp.idp_brand:
e["brand"] = idp.idp_brand
# use the stable brand identifier if the unstable identifier isn't defined.
if use_unstable_brands and idp.unstable_idp_brand:
e["brand"] = idp.unstable_idp_brand
return e
@ -504,24 +486,7 @@ class SsoRedirectServlet(RestServlet):
# register themselves with the main SSOHandler.
_load_sso_handlers(hs)
self._sso_handler = hs.get_sso_handler()
self._msc2858_enabled = hs.config.experimental.msc2858_enabled
self._public_baseurl = hs.config.public_baseurl
def register(self, http_server: HttpServer) -> None:
super().register(http_server)
if self._msc2858_enabled:
# expose additional endpoint for MSC2858 support: backwards-compat support
# for clients which don't yet support the stable endpoints.
http_server.register_paths(
"GET",
client_patterns(
"/org.matrix.msc2858/login/sso/redirect/(?P<idp_id>[A-Za-z0-9_.~-]+)$",
releases=(),
unstable=True,
),
self.on_GET,
self.__class__.__name__,
)
self._public_baseurl = hs.config.server.public_baseurl
async def on_GET(
self, request: SynapseRequest, idp_id: Optional[str] = None
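With the unstable MSC2858 surface removed, identity providers are advertised only under the stable `m.login.sso` flow. An illustrative sketch of how such a flow entry is built; the provider values below are invented:

```python
def auth_flow_dict(idp: dict) -> dict:
    e = {"id": idp["id"], "name": idp["name"]}
    if idp.get("icon"):
        e["icon"] = idp["icon"]
    if idp.get("brand"):
        e["brand"] = idp["brand"]
    return e


idps = [{"id": "oidc-github", "name": "GitHub", "brand": "github"}]
flows = [
    {"type": "m.login.sso", "identity_providers": [auth_flow_dict(i) for i in idps]}
]
print(flows)
# [{'type': 'm.login.sso',
#   'identity_providers': [{'id': 'oidc-github', 'name': 'GitHub', 'brand': 'github'}]}]
```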
