Try to avoid an OOM by checking fewer extremities.
Generally this is a big rewrite of _maybe_backfill, to try and fix some of the TODOs and other problems in it. It's best reviewed commit-by-commit.
Multiple calls to `EventsWorkerStore._get_events_from_cache_or_db` can
reuse the same database fetch, which is initiated by the first call.
Ensure that cancelling the first call doesn't cancel the other calls
sharing the same database fetch.
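For illustration, here is a rough sketch (plain asyncio rather than Synapse's Twisted machinery, with invented names) of the property being protected: each caller shields the shared in-flight fetch, so cancelling one caller doesn't tear down work that other callers still depend on.
```python
import asyncio
from typing import Dict

_pending_fetches: Dict[str, asyncio.Task] = {}

async def _fetch_from_db(event_id: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for the real database query
    return {"event_id": event_id}

async def get_event(event_id: str) -> dict:
    task = _pending_fetches.get(event_id)
    if task is None:
        # The first caller kicks off the fetch; later callers reuse it.
        task = asyncio.create_task(_fetch_from_db(event_id))
        _pending_fetches[event_id] = task
        task.add_done_callback(lambda _: _pending_fetches.pop(event_id, None))
    # shield() means a cancelled caller does not cancel the shared task.
    return await asyncio.shield(task)
```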
Signed-off-by: Sean Quah <seanq@element.io>
This will mainly be useful when dealing with module callbacks, which are
all typed as returning `Awaitable`s instead of coroutines or
`Deferred`s.
Signed-off-by: Sean Quah <seanq@element.io>
When we join a room via the faster-joins mechanism, we end up with "partial
state" at some points on the event DAG. Many parts of the codebase need to
wait for the full state to load. So, we implement a mechanism to keep track of
which events have partial state, and wait for them to be fully-populated.
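As a toy sketch (invented names; the real bookkeeping lives in the storage layer), the mechanism amounts to something like:
```python
import asyncio
from typing import Dict

class PartialStateTracker:
    def __init__(self) -> None:
        self._partial: Dict[str, asyncio.Event] = {}

    def mark_partial(self, event_id: str) -> None:
        # Called when we persist an event whose state is not yet complete.
        self._partial.setdefault(event_id, asyncio.Event())

    def mark_full(self, event_id: str) -> None:
        # Called once the full state for the event has been computed.
        waiter = self._partial.pop(event_id, None)
        if waiter is not None:
            waiter.set()

    async def await_full_state(self, event_id: str) -> None:
        # Callers that need full state block here until it is available.
        waiter = self._partial.get(event_id)
        if waiter is not None:
            await waiter.wait()
```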
MSC2314 has now been closed, so we're backing out its implementation, which
originally happened in #6176.
Unfortunately it's not a direct revert, as that PR mixed in a bunch of
unrelated changes to tests etc.
When trying to use the MSC3026 busy presence status, the user's status
would be set back to 'online' the next time they synced. This change makes
it so that syncing does not affect a user's presence status if it
is currently set to 'busy': it must be removed through the presence
API.
The MSC defers to implementations on the behaviour of busy presence,
so this ought to remain compatible with the MSC.
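A minimal sketch of the rule, where the presence-state strings are assumptions of this example rather than quotes from the MSC:
```python
BUSY = "busy"
ONLINE = "online"

def presence_state_after_sync(current_state: str) -> str:
    if current_state == BUSY:
        # Busy is sticky: only an explicit presence API call changes it.
        return BUSY
    return ONLINE
```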
This was missed when initially stabilising room version 8 and was
left in as a compatibility shim. Most homeservers have upgraded
to a version which expects the proper field name, and the failure
mode is reasonable (a user on an older server may have to attempt
joining the room twice with an obscure error message the first time).
We work through all the events with partial state, updating the state at each
of them. Once it's done, we recalculate the state for the whole room, and then
mark the room as having complete state.
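Roughly, the loop looks like the following sketch (the callables stand in for the real storage/state helpers and are not Synapse's actual API):
```python
from typing import Awaitable, Callable, List

async def resync_partial_state_room(
    room_id: str,
    get_partial_state_events: Callable[[str], Awaitable[List[str]]],
    update_state_for_event: Callable[[str], Awaitable[None]],
    finish_room: Callable[[str], Awaitable[None]],
) -> None:
    while True:
        batch = await get_partial_state_events(room_id)
        if not batch:
            break
        for event_id in batch:
            # Compute and persist the full state at this event.
            await update_state_for_event(event_id)

    # Every event now has full state: recalculate the room's current state
    # and clear the room's partial-state marker.
    await finish_room(room_id)
```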
* Add some type hints to datastore
* newsfile
* change `Collection` to `List`
* refactor return type of `select_users_txn`
* correct type hint in `stream.py`
* Remove `Optional` in `select_users_txn`
* remove unneeded return type in `__init__`
* Revert change in `get_stream_id_for_event_txn`
* Remove import from `Literal`
* Specify `tls` extra for Twisted dependency.
It was already pulled in for us by `treq`, but we should be explicit
that we do use the `tls` functionality of Twisted directly.
* Mark `idna` as dev-dependency
This doesn't actually change anything, as `Twisted[tls]` will put it in
as a main dependency anyway.
Consider the requester's ignored users when calculating the
bundled aggregations.
See #12285 / 4df10d3214
for corresponding changes for the `/relations` endpoint.
Of note:
* No untyped defs in `register_new_matrix_user`
This one might be controversial. `request_registration` has three
dependency-injection arguments used for testing. I'm removing the
injection of the `requests` module and using `unittest.mock.patch` in the
test cases instead (a sketch of this approach follows this list).
Doing `reveal_type(requests)` and `reveal_type(requests.get)` before the
change:
```
synapse/_scripts/register_new_matrix_user.py:45: note: Revealed type is "Any"
synapse/_scripts/register_new_matrix_user.py:46: note: Revealed type is "Any"
```
And after:
```
synapse/_scripts/register_new_matrix_user.py:44: note: Revealed type is "types.ModuleType"
synapse/_scripts/register_new_matrix_user.py:45: note: Revealed type is "def (url: Union[builtins.str, builtins.bytes], params: Union[Union[_typeshed.SupportsItems[Union[builtins.str, builtins.bytes, builtins.int, builtins.float], Union[builtins.str, builtins.bytes, builtins.int, builtins.float, typing.Iterable[Union[builtins.str, builtins.bytes, builtins.int, builtins.float]], None]], Tuple[Union[builtins.str, builtins.bytes, builtins.int, builtins.float], Union[builtins.str, builtins.bytes, builtins.int, builtins.float, typing.Iterable[Union[builtins.str, builtins.bytes, builtins.int, builtins.float]], None]], typing.Iterable[Tuple[Union[builtins.str, builtins.bytes, builtins.int, builtins.float], Union[builtins.str, builtins.bytes, builtins.int, builtins.float, typing.Iterable[Union[builtins.str, builtins.bytes, builtins.int, builtins.float]], None]]], builtins.str, builtins.bytes], None] =, data: Union[Any, None] =, headers: Union[Any, None] =, cookies: Union[Any, None] =, files: Union[Any, None] =, auth: Union[Any, None] =, timeout: Union[Any, None] =, allow_redirects: builtins.bool =, proxies: Union[Any, None] =, hooks: Union[Any, None] =, stream: Union[Any, None] =, verify: Union[Any, None] =, cert: Union[Any, None] =, json: Union[Any, None] =) -> requests.models.Response"
```
* Drive-by comment in `synapse.storage.types`
* No untyped defs in `synapse_port_db`
This was by far the most painful. I'm happy to break this up into
smaller pieces for review if it's not manageable as-is.
* Pull out query param types to `synapse.http.types`
* Use QueryParams everywhere
* Simplify `encode_query_args`
* Add annotation which would have caught #12410
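For reference, the patching approach looks roughly like this (illustrative only, not the actual Synapse test):
```python
from unittest import mock

def test_request_registration_posts_to_server() -> None:
    fake_response = mock.Mock(status_code=200)
    fake_response.json.return_value = {"nonce": "abc123"}

    with mock.patch(
        "synapse._scripts.register_new_matrix_user.requests"
    ) as fake_requests:
        fake_requests.get.return_value = fake_response
        fake_requests.post.return_value = fake_response
        # ... call request_registration(...) here, then assert on
        # fake_requests.post.call_args ...
```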
Principally, `prometheus_client.REGISTRY.register` now requires its argument to
extend `prometheus_client.Collector`.
Additionally, `Gauge.set` is now annotated so that passing `Optional[int]`
causes an error.
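A hedged sketch of what the new annotation expects, assuming a recent `prometheus_client` (0.14+) where the `Collector` base class is available; the metric itself is purely illustrative:
```python
from typing import Iterable

from prometheus_client import REGISTRY
from prometheus_client.core import GaugeMetricFamily, Metric
from prometheus_client.registry import Collector

class ExampleCollector(Collector):
    def collect(self) -> Iterable[Metric]:
        yield GaugeMetricFamily(
            "example_queue_depth", "An illustrative gauge", value=42
        )

# Passing an object that does not subclass Collector is now a type error.
REGISTRY.register(ExampleCollector())
```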
Just after a task acquires a contended `Linearizer` lock, it sleeps.
If the task is cancelled during this sleep, we need to release the lock.
Signed-off-by: Sean Quah <seanq@element.io>
`StreamToken.from_string` and `RoomStreamToken.parse` are both async
methods that could be cancelled. These methods must not replace
`CancelledError`s with `SynapseError`s.
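The pattern being preserved is roughly (a sketch, not the exact code):
```python
from twisted.internet.defer import CancelledError

from synapse.api.errors import SynapseError
from synapse.types import StreamToken

async def parse_stream_token(store, string: str) -> StreamToken:
    try:
        return await StreamToken.from_string(store, string)
    except CancelledError:
        raise  # a cancellation must not be turned into a SynapseError
    except Exception:
        raise SynapseError(400, "Invalid stream token %r" % (string,))
```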
Signed-off-by: Sean Quah <seanq@element.io>
Refactor and convert `Linearizer` to async. This makes a `Linearizer`
cancellation bug easier to fix.
Also refactor to use an async context manager, which eliminates an
unlikely footgun where code that doesn't immediately use the context
manager could forget to release the lock.
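After the refactor, usage looks roughly like this (assuming `queue(key)` returns an async context manager):
```python
from synapse.util.async_helpers import Linearizer

class ExampleHandler:
    def __init__(self) -> None:
        self._linearizer = Linearizer(name="example_handler")

    async def handle(self, room_id: str) -> None:
        async with self._linearizer.queue(room_id):
            # Only one task runs this section per room_id at a time, and the
            # lock is always released on exit, even if this block raises or
            # the task is cancelled.
            ...
```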
Signed-off-by: Sean Quah <seanq@element.io>
This is a first step in dealing with #7721.
The idea is basically that rather than calculating the full set of users a device list update needs to be sent to up front, we instead simply record the rooms the user was in at the time of the change. This will allow a few things:
1. we can defer calculating the set of remote servers that need to be poked about the change; and
2. during `/sync` and `/keys/changes` we can also avoid calculating users who share rooms with other users, and instead just look at the rooms that have changed.
However, care needs to be taken to correctly handle server downgrades. As such, this PR writes to both the `device_lists_changes_in_room` and `device_lists_outbound_pokes` tables synchronously. In a future release we can then bump the database schema compat version to `69`, at which point we can assume that the new `device_lists_changes_in_room` table exists and is handled.
There is a temporary option to disable writing to `device_lists_outbound_pokes` synchronously, allowing us to test that the new code path works (and, by implication, that upgrading to a future release and then downgrading to this one will work correctly).
Note: Ideally we'd do the calculation of rooms to servers on a worker (e.g. the background worker), but currently only the master can write to the `device_lists_outbound_pokes` table.
Switching to a sequence means there's no need to track `last_txn` on the
AS state table to generate new TXN IDs. This also means that there is
no longer contention between the AS scheduler and AS handler on updates
to the `application_services_state` table, which will prevent serialization
errors when completing an AS transaction.
It seems like calling `_get_state_group_for_events` for an event where the
state is unknown is an error. Accordingly, let's raise an exception rather than
silently returning an empty result.
If we're missing most of the events in the room state, then we may as well call the /state endpoint, instead of individually requesting each and every event.
The intention here is to avoid doing state lookups for outliers in
`/_matrix/federation/v1/event`. Unfortunately that's expanded into something of
a rewrite of `filter_events_for_server`, which ended up trying to do that
operation in a couple of places.
The PaginationChunk class attempted to bundle some properties
together, but really just caused callers to jump through hoops and
hid implementation details.
This endpoint was removed from MSC2675 before it was approved.
It is currently unspecified (even in any MSCs) and therefore subject to
removal. It is not implemented by any known clients.
This also changes the bundled aggregation format for `m.annotation`,
which previously included pagination tokens for the `/aggregations`
endpoint, which are no longer useful.
Document the behaviour of `LoggingTransaction.call_after` and
`LoggingTransaction.call_on_exception` when transactions are retried.
Signed-off-by: Sean Quah <seanq@element.io>
Follow-up to https://github.com/matrix-org/synapse/pull/12083
Since we are now using the new `state_event_ids` parameter to do all of the heavy lifting,
we can remove any spots where we plumbed `auth_event_ids` just for MSC2716 things in
https://github.com/matrix-org/synapse/pull/9247/files.
Removing `auth_event_ids` from following functions:
- `create_and_send_nonmember_event`
- `_local_membership_update`
- `update_membership`
- `update_membership_locked`
When we are processing a `/backfill` request from a remote server, exclude any
outliers from consideration early on. We can't return outliers anyway (since we
don't know the state at the outlier), and filtering them out earlier means that
we won't attempt to calculate the state for them.
This should speed up push rule calculations for rooms with large numbers of local users when the main push rule cache fails.
Co-authored-by: reivilibre <oliverw@matrix.org>
* Make it possible to enable compression for the metrics HTTP resource
This can provide significant bandwidth savings when pulling metrics from
synapse instances (a sketch of one way to do this follows this list).
* Add changelog file.
* Fix type hint
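One way to do this with stock Twisted machinery (a sketch, not necessarily how Synapse wires it up):
```python
from twisted.web.resource import EncodingResourceWrapper
from twisted.web.server import GzipEncoderFactory

def wrap_metrics_resource(metrics_resource, compress: bool):
    """Optionally wrap the metrics resource so responses are gzip-compressed
    when the client advertises support for it."""
    if compress:
        return EncodingResourceWrapper(metrics_resource, [GzipEncoderFactory()])
    return metrics_resource
```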
We fetch the thread summary in two phases:
1. The summary that is shared by all users (count of messages and latest event).
2. Whether the requesting user has participated in the thread.
There's no use in attempting step 2 for events which did not return a summary
from step 1.
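As a sketch (the two callables stand in for the real storage methods and are not Synapse's actual API):
```python
from typing import Awaitable, Callable, Dict, Iterable, Optional, Set, Tuple

async def get_thread_summaries(
    event_ids: Iterable[str],
    user_id: str,
    fetch_summaries: Callable[[Iterable[str]], Awaitable[Dict[str, Optional[dict]]]],
    fetch_participation: Callable[[Iterable[str], str], Awaitable[Set[str]]],
) -> Dict[str, Tuple[dict, bool]]:
    # Phase 1: the per-thread summary shared by all users.
    summaries = await fetch_summaries(event_ids)
    # Phase 2: only check participation for events that actually have a summary.
    thread_roots = [eid for eid, s in summaries.items() if s is not None]
    participated = await fetch_participation(thread_roots, user_id)
    return {
        eid: (s, eid in participated)
        for eid, s in summaries.items()
        if s is not None
    }
```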
* Formally type the UserProfile in user searches
* export UserProfile in synapse.module_api
* Update docs
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
An error occurred if a filter was supplied with `event_fields` which did not include
`unsigned`.
In that case, bundled aggregations are still added as the spec states it is allowed
for servers to add additional fields.
To handle cancellation, we ensure that `after_callback`s and
`exception_callback`s are always run, since the transaction will
complete on another thread regardless of cancellation.
We also wait until everything is done before releasing the
`CancelledError`, so that logging contexts won't get used after they
have been finished.
Signed-off-by: Sean Quah <seanq@element.io>
These decorators mostly support cancellation already. Add cancellation
tests and fix use of finished logging contexts by delaying cancellation,
as suggested by @erikjohnston.
Signed-off-by: Sean Quah <seanq@element.io>
`delay_cancellation` behaves like `stop_cancellation`, except it
delays `CancelledError`s until the original `Deferred` resolves.
This is handy for unifying cleanup paths and ensuring that uncancelled
coroutines don't use finished logcontexts.
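Intended usage looks roughly like this sketch (`_expensive_work` is a hypothetical stand-in):
```python
from twisted.internet import defer

from synapse.util.async_helpers import delay_cancellation

async def _expensive_work(key: str) -> str:
    # Stand-in for real work whose cleanup paths we want to keep unified.
    return key.upper()

async def lookup(key: str) -> str:
    d = defer.ensureDeferred(_expensive_work(key))
    # If our caller is cancelled, `d` still runs to completion; the
    # CancelledError is only re-raised once `d` has resolved, so the wrapped
    # coroutine never runs against a finished logcontext.
    return await delay_cancellation(d)
```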
Signed-off-by: Sean Quah <seanq@element.io>
The unstable identifiers are still supported if the experimental configuration
flag is enabled. The unstable identifiers will be removed in a future release.
This is allowed per MSC2675, although the original implementation did
not allow for it and would return an empty chunk / not bundle aggregations.
The main thing to improve is that the various caches get cleared properly
when an event is redacted, and that edits must not leak if the original
event is redacted (as that would presumably leak something similar to
the original event content).