This reverts commit bdf82efea5 (#17291)
This seems to have stopped persisting auth chains for new events, and so is causing state resolution to fall back to the slow methods.
We calculate the auth chain links outside of the main persist event
transaction to ensure that we do not block other event sending during
the calculation.
Sorting is no longer configurable; we always sort rooms by the `stream_ordering` of the last event in the room, or by the point the user can see up to in cases of leave/ban/invite/knock.
Add `event.internal_metadata.instance_name` (the worker instance that persisted the event) to go alongside the existing `event.internal_metadata.stream_ordering`.
`instance_name` is useful to properly compare and query for events with a token since you need to compare both the `stream_ordering` and `instance_name` against the vector clock/`instance_map` in the `RoomStreamToken`.
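As a rough illustration of that comparison, here is a minimal sketch, assuming a simplified `RoomStreamToken` (just a default stream position plus a per-writer `instance_map`; the real token carries more):

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass(frozen=True)
class RoomStreamToken:
    # Simplified: the real token also carries topological parts, etc.
    stream: int                 # default position for unlisted writers
    instance_map: Dict[str, int] = field(default_factory=dict)


def persisted_after(instance_name: str, stream_ordering: int,
                    token: RoomStreamToken) -> bool:
    """Whether an event was persisted after the given token.

    The event's `stream_ordering` must be compared against the position
    of the *same writer* in the token's vector clock, falling back to
    the default stream position for writers the token doesn't list.
    """
    position = token.instance_map.get(instance_name, token.stream)
    return stream_ordering > position


token = RoomStreamToken(stream=5, instance_map={"worker1": 9, "worker2": 6})
assert persisted_after("worker2", 7, token)      # 7 > 6 for worker2
assert not persisted_after("worker1", 7, token)  # 7 <= 9 for worker1
```

This is why `stream_ordering` alone is not enough: the same stream ordering can mean "before" or "after" a token depending on which worker persisted the event.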
This is prerequisite work and may be used in https://github.com/element-hq/synapse/pull/17293
Adding `event.internal_metadata.instance_name` was first mentioned in the initial Sliding Sync PR while pairing with @erikjohnston, see 09609cb0db (diff-5cd773fb307aa754bd3948871ba118b1ef0303f4d72d42a2d21e38242bf4e096R405-R410)
PR where this was introduced: https://github.com/matrix-org/synapse/pull/14817
### What does this affect?
`get_last_event_in_room_before_stream_ordering(...)` is used in Sync v2 in many different state calculations.
`get_last_event_in_room_before_stream_ordering(...)` is also used in `/rooms/{roomId}/members`
https://github.com/matrix-org/matrix-spec-proposals/pull/4151
This is intended to be enabled by default for immediate use. When FCP is
complete, the unstable endpoint will be dropped and the stable endpoint
supported instead; no backwards compatibility is expected for the
unstable endpoint.
Spawning from https://github.com/element-hq/synapse/pull/17187#discussion_r1619492779, around wanting to put `SlidingSyncBody` (parses the request in the REST layer), `SlidingSyncConfig` (passed from the REST layer to the handler), and `SlidingSyncResponse` (passes the response from the handler back to the REST layer) somewhere that doesn't contaminate the imports and cause circular import issues.
- Moved Pydantic parsing models to `synapse/types/rest`
- Moved handler types to `synapse/types/handlers`
If the stream ID in the unconverted table is ahead of the device lists
ID generator, then it can break all `/sync` requests that have a token
from ahead of the ID generator's position.
The fix is to make sure we add the unconverted table to the list of
tables we check at start up.
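To illustrate the fix, a minimal sketch, assuming hypothetical table and column names rather than Synapse's actual schema:

```python
import sqlite3

# Hypothetical schema: device list changes start out in an
# "unconverted" staging table before being moved to the converted one.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE device_lists_converted (stream_id INTEGER);
    CREATE TABLE device_lists_unconverted (stream_id INTEGER);
    INSERT INTO device_lists_converted VALUES (10);
    INSERT INTO device_lists_unconverted VALUES (12);
    """
)


def startup_position(tables):
    """The ID generator starts from the max stream ID seen across all
    the tables it is told to check at start up."""
    return max(
        conn.execute(f"SELECT COALESCE(MAX(stream_id), 0) FROM {t}").fetchone()[0]
        for t in tables
    )


# Bug: only checking the converted table starts the generator at 10,
# behind stream ID 12 that existing /sync tokens may reference.
assert startup_position(["device_lists_converted"]) == 10

# Fix: also check the unconverted table, so we start at 12.
assert startup_position(["device_lists_converted", "device_lists_unconverted"]) == 12
```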
Broke in https://github.com/element-hq/synapse/pull/17229
Fixes: #17013
Add logging for whether room keys are replaced
This is motivated by the Crypto team, who need to diagnose crypto issues.
The existing opentracing logging is not enough because it is not enabled
for all users.
Based on [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575): Sliding Sync
This iteration only focuses on returning the list of room IDs in the sliding window API (without sorting/filtering).
Rooms appear in the Sliding Sync response based on the following (illustrated in the sketch after this list):
- `invite`, `join`, `knock`, `ban` membership events
- Kicks (`leave` membership events where `sender` is different from the `user_id`/`state_key`)
- `newly_left` (rooms that were left during the given token range, > `from_token` and <= `to_token`)
- In order for bans/kicks not to show up, you need to `/forget` those rooms. This doesn't modify the event itself, though; it only sets the `forgotten` flag on `room_memberships` in Synapse. There is currently no way to tell when a room was forgotten, so we can't factor it into the from/to range.
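A minimal sketch of how these rules might classify a room, assuming plain dicts in place of real membership events (the helper and its arguments are made up for illustration):

```python
def room_appears_in_response(membership_event: dict, user_id: str,
                             newly_left: bool) -> bool:
    """Whether a room shows up in the sliding window for `user_id`.

    `membership_event` is the user's relevant membership event and
    `newly_left` is whether the leave happened within the token range
    (> `from_token` and <= `to_token`).
    """
    membership = membership_event["membership"]

    if membership in ("invite", "join", "knock", "ban"):
        return True

    if membership == "leave":
        # A kick: someone *else* sent the leave event for this user.
        if membership_event["sender"] != user_id:
            return True
        # Otherwise only `newly_left` rooms are included.
        return newly_left

    return False


# A kick shows up even outside the token range:
assert room_appears_in_response(
    {"membership": "leave", "sender": "@admin:example.com"},
    "@alice:example.com",
    newly_left=False,
)
```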
### Example request
`POST http://localhost:8008/_matrix/client/unstable/org.matrix.msc3575/sync`
```json
{
    "lists": {
        "foo-list": {
            "ranges": [ [0, 99] ],
            "sort": [ "by_notification_level", "by_recency", "by_name" ],
            "required_state": [
                ["m.room.join_rules", ""],
                ["m.room.history_visibility", ""],
                ["m.space.child", "*"]
            ],
            "timeline_limit": 100
        }
    }
}
```
Response:
```json
{
    "next_pos": "s58_224_0_13_10_1_1_16_0_1",
    "lists": {
        "foo-list": {
            "count": 1,
            "ops": [
                {
                    "op": "SYNC",
                    "range": [0, 99],
                    "room_ids": [
                        "!MmgikIyFzsuvtnbvVG:my.synapse.linux.server"
                    ]
                }
            ]
        }
    },
    "rooms": {},
    "extensions": {}
}
```
Use fully-qualified `PersistedEventPosition` (`instance_name` and `stream_ordering`) when returning `RoomsForUser` to facilitate proper comparisons and `RoomStreamToken` generation.
Spawning from https://github.com/element-hq/synapse/pull/17187, where we want to utilize this change.
Otherwise things will get confused.
An alternative would be to make sure that for lagging streams we don't
return anything (and make sure the returned `next_batch` token doesn't go
backwards). But that is a faff.
We try and deduplicate in two places: 1) really early on, and 2) just
before we persist the event. The first case was broken due to it
occurring before the profile information was added, and so it thought the
event contents were different.
The second case did catch it and handle it correctly; however, doing so
creates a redundant state group, leading to bloat.
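As a toy illustration of the ordering problem (content hashing and the profile step here are simplified stand-ins, not the actual persistence code):

```python
import json
from hashlib import sha256


def content_hash(event: dict) -> str:
    return sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()


def add_profile_info(event: dict) -> dict:
    # Membership events have displayname/avatar_url injected later on.
    return {**event, "displayname": "Alice"}


first = add_profile_info({"membership": "join"})   # persisted with profile
duplicate = {"membership": "join"}                 # not yet enriched

# Dedup check 1 ran before the profile step, so the duplicate's
# contents never match what was persisted:
assert content_hash(duplicate) != content_hash(first)

# Comparing at the same stage (after the profile step) catches it:
assert content_hash(add_profile_info(duplicate)) == content_hash(first)
```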
Fixes #3791
Fixes up #17239
We need to keep the spam check within the `try/except` block. This also
makes it so that we don't enter the top span twice, and ensures that we
get the right thumbnail length.
There is a problem with `StreamIdGenerator` where it can go backwards
over restarts when a stream ID is requested but then not inserted into
the DB. This is problematic if we want to land #17215, and is generally
a potential cause for all sorts of nastiness.
Instead of trying to fix `StreamIdGenerator`, we may as well move to
`MultiWriterIdGenerator`, which does not suffer from this problem (the
latest positions are stored in the `stream_positions` table). This
involves adding SQLite support to the class.
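As a toy illustration of that failure mode (a simplified in-memory generator, not the real class):

```python
class NaiveStreamIdGenerator:
    """Re-initialises from the max ID found in the data table, like
    `StreamIdGenerator` does on startup."""

    def __init__(self, persisted_rows):
        self._current = max(persisted_rows, default=0)

    def get_next(self) -> int:
        self._current += 1
        return self._current


table_rows = [1, 2, 3]

gen = NaiveStreamIdGenerator(table_rows)
assert gen.get_next() == 4  # ID 4 handed out to a request...

# ...but the request fails before inserting row 4, and the process
# restarts. Re-initialising from the table rewinds the stream:
gen = NaiveStreamIdGenerator(table_rows)
assert gen.get_next() == 4  # ID 4 is handed out *again*
```

`MultiWriterIdGenerator` sidesteps this by persisting its latest position in the `stream_positions` table, independent of whether the allocated ID ever made it into the data table.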
This only changes ID generators that were already using
`MultiWriterIdGenerator` under Postgres; a separate PR will move the
rest of the uses of `StreamIdGenerator` over.
Currently, sending a to-device message to a user ID with a dodgy
destination is accepted, but then ends up spamming the logs when we try
to send to the destination.
An alternative would be to reject the request, but I'm slightly nervous
that could break things.
When a module rejects a piece of media, we end up trying to close the
same logging context twice.
Instead of fixing the existing code, we refactor to use an async context
manager, which is easier to write correctly.
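For illustration, a minimal sketch of the async-context-manager shape (names are made up; this is not the actual media code). The `finally` block runs exactly once whether the body succeeds or the module rejects the media:

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def media_logging_context(name: str):
    print(f"entering logging context {name}")
    try:
        yield
    finally:
        # Runs exactly once, on success *or* exception, so no code
        # path can end up closing the context twice.
        print(f"exiting logging context {name}")


async def store_media(reject: bool) -> None:
    async with media_logging_context("upload"):
        if reject:
            raise PermissionError("module rejected this media")
        print("media stored")


# Even when the module rejects the media, the context exits once:
try:
    asyncio.run(store_media(reject=True))
except PermissionError:
    pass
```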