Previously we only queried the device list when the user joined the room; now we
do it when they are invited too. This means that new messages can be encrypted
for the devices of the invited user as of the point they were invited.
WARNING: however, this commit has two major problems:
1. If the invited user adds devices after being invited but before joining, the
device-list update will not be sent to the other servers in the room (as we
don't know who those servers are).
2. This introduces a regression: previously the device-list would be correctly
updated when the user joined the room. That resync no longer happens, so devices
added after the invite and before the join may never be added to the device-list.
This is being merged for DINSIC, given that the edge case of adding devices
between invite & join is pretty rare in their use case. Before it can be merged
into synapse in general, we need to at least re-sync the device-list when the
user joins, or implement some kind of pubsub mechanism to let interested servers
subscribe to device-list updates on other servers irrespective of join/invite
membership.
This was originally https://github.com/matrix-org/synapse/pull/3484
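As a rough illustration of the behaviour change described above (not Synapse's
actual code; all class and method names here are made up), the device-list
resync is now triggered for invite memberships as well as joins:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class MembershipEvent:
    membership: str  # "invite", "join", "leave", ...
    state_key: str   # the user whose membership changed

class DeviceListTracker:
    async def resync_user_devices(self, user_id: str) -> None:
        # The real implementation would query the remote server for the user's
        # current device list; here we just record that the call happened.
        print(f"resyncing device list for {user_id}")

async def on_membership_change(event: MembershipEvent, tracker: DeviceListTracker) -> None:
    # Previously only "join" triggered the resync; the change adds "invite", so
    # new messages can be encrypted for the invited user's devices immediately.
    if event.membership in ("invite", "join"):
        await tracker.resync_user_devices(event.state_key)

asyncio.run(on_membership_change(MembershipEvent("invite", "@alice:example.org"),
                                 DeviceListTracker()))
```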
The sync API often returns events in a topological rather than stream
ordering, e.g. when the user has just joined a room or on initial sync. When
this happens, we can reuse the existing pagination storage functions.
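A hedged sketch of the distinction, using made-up types and a made-up helper
rather than the real storage functions: when a topologically ordered batch is
wanted, the events are sorted by (topological_ordering, stream_ordering), which
is the same ordering the pagination code already produces; otherwise plain
stream ordering is used.

```python
from typing import List, NamedTuple

class Event(NamedTuple):
    event_id: str
    topological_ordering: int
    stream_ordering: int

def recent_events(events: List[Event], limit: int, topological: bool) -> List[Event]:
    # Hypothetical helper: topological ordering for initial sync or a room the
    # user has just joined, stream ordering otherwise.
    if topological:
        key = lambda e: (e.topological_ordering, e.stream_ordering)
    else:
        key = lambda e: e.stream_ordering
    return sorted(events, key=key)[-limit:]
```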
There is no reason to return a tuple of tokens when the last token is always
the token passed as an argument. Changing this makes it consistent with the
other storage APIs.
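A before/after sketch of the signature change, with a dummy store and
hypothetical function names rather than the real storage methods:

```python
class FakeStore:
    def paginate(self, room_id: str, end_token: str, limit: int):
        # Stand-in for paging backwards from `end_token`: return some events
        # plus the token to continue paginating from.
        return [f"$event{i}" for i in range(limit)], "s100"

def get_recent_events_old(store: FakeStore, room_id: str, end_token: str, limit: int):
    events, start_token = store.paginate(room_id, end_token, limit)
    # The second element of the pair is always just `end_token`, which the
    # caller already has, so returning it gains nothing.
    return events, (start_token, end_token)

def get_recent_events_new(store: FakeStore, room_id: str, end_token: str, limit: int):
    events, start_token = store.paginate(room_id, end_token, limit)
    return events, start_token  # consistent with the other storage APIs
```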
Adds a `.wrap` method to ResponseCache which wraps up the boilerplate of a
(get, set) pair, and then uses it throughout the codebase.
This will be largely non-functional, but does include the following functional
changes:
* federation_server.on_context_state_request: drops use of _server_linearizer
which looked redundant and could cause incorrect cache misses by yielding
between the get and the set.
* RoomListHandler.get_remote_public_room_list(): fixes logcontext leaks
* the wrap function includes some logging. I'm hoping this won't be too noisy
in production.
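For illustration only, here is a minimal asyncio-based sketch of the (get, set)
boilerplate that such a `wrap` helper folds away (the real ResponseCache is
Twisted-based and more involved, but the deduplication idea is the same):

```python
import asyncio
from typing import Any, Awaitable, Callable, Dict

class ResponseCacheSketch:
    def __init__(self) -> None:
        self._pending: Dict[Any, asyncio.Task] = {}

    def wrap(self, key: Any, callback: Callable[..., Awaitable[Any]], *args: Any) -> Awaitable[Any]:
        # get: if a request for this key is already in flight, share its result.
        task = self._pending.get(key)
        if task is None:
            # set: otherwise start the work and remember it under the key, so
            # concurrent callers deduplicate onto a single computation.
            task = asyncio.ensure_future(callback(*args))
            self._pending[key] = task
            task.add_done_callback(lambda _: self._pending.pop(key, None))
        return task

async def main() -> None:
    cache = ResponseCacheSketch()

    async def expensive(room_id: str) -> str:
        await asyncio.sleep(0.1)
        return f"state of {room_id}"

    # Two concurrent callers for the same key share one computation.
    results = await asyncio.gather(
        cache.wrap("!room:example.org", expensive, "!room:example.org"),
        cache.wrap("!room:example.org", expensive, "!room:example.org"),
    )
    print(results)

asyncio.run(main())
```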
The race happens when the user joins a room at the same time as doing a sync.
We fetch the current token and then get the rooms the user is in. If the join
happens after the current token, but before we get the rooms, we end up sending
down a partial room entry in the sync.
This is fixed by looking at the stream ordering of the membership
returned by get_rooms_for_user, and handling the case when that stream
ordering is after the current token.
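An illustrative sketch of that check, with made-up names (the real code goes
via get_rooms_for_user and the sync token): memberships whose stream ordering
is newer than the token the sync is scoped to need special handling rather than
being sent down as a partial room.

```python
from typing import List, NamedTuple

class RoomMembership(NamedTuple):
    room_id: str
    stream_ordering: int

def rooms_for_sync(memberships: List[RoomMembership], current_stream_token: int) -> List[str]:
    rooms = []
    for membership in memberships:
        if membership.stream_ordering > current_stream_token:
            # The join happened after the token this sync is scoped to, so the
            # room would only be partially visible; the real fix handles this
            # case explicitly rather than sending a partial entry.
            continue
        rooms.append(membership.room_id)
    return rooms
```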
There was a race condition that caused the notifier to 'miss' the timeout
notification. Since there were no other checks for the timeout, this caused
listeners to get stuck in a loop until something happened.
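As a simplified asyncio sketch of the corrected pattern (not the notifier's
actual code): re-checking the deadline on every pass through the wait loop
means a missed timeout wake-up can no longer leave the listener stuck until
unrelated activity arrives.

```python
import asyncio
import time

async def wait_for_events(new_event: asyncio.Event, timeout: float) -> bool:
    # Wait for `new_event` to fire, giving up after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False  # timed out -- the deadline is checked on every pass
        try:
            await asyncio.wait_for(new_event.wait(), timeout=remaining)
            return True   # something happened
        except asyncio.TimeoutError:
            continue      # loop back; the expired deadline is noticed above
```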
This is currently very conservative in that it only does this if there is no
`since` token. This limits the risk to clients likely to be doing one-off
syncs (like bridges), but does mean that normal human clients won't benefit
from the time savings here. If the savings are large enough, I would consider
generalising this to just check the filter.
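A trivial sketch of the gating condition described here, with a hypothetical
helper name; the filter-based generalisation is only hinted at in a comment.

```python
def should_take_shortcut(since_token, sync_filter) -> bool:
    # Conservative gate: only one-off syncs (no `since` token, e.g. bridges)
    # take the new code path.
    if since_token is None:
        return True
    # A possible generalisation, not done here: inspect `sync_filter` to decide
    # whether the shortcut is also safe for incremental syncs.
    return False
```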