Commit Graph

514 Commits

Author SHA1 Message Date
Erik Johnston
a1ff1e967f
Periodically send pings to detect dead Redis connections (#9218)
This is done by creating a custom `RedisFactory` subclass that
periodically pings all connections in its pool.

We also ensure that the `replyTimeout` param is non-null, so that we
time out waiting for the reply to those pings (and thus trigger a
reconnect).
2021-01-26 10:54:54 +00:00
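A minimal sketch of the approach described in the commit above, assuming a txredisapi-style `RedisFactory` that exposes its live connections via a `pool` attribute and accepts a `replyTimeout` argument; the subclass name, interval, and timeout values are illustrative, not the actual Synapse code:

    from twisted.internet import defer, task
    from txredisapi import RedisFactory

    PING_INTERVAL_SECS = 60  # assumed interval

    class PingingRedisFactory(RedisFactory):  # hypothetical subclass name
        """Periodically pings pooled connections so dead ones are detected."""

        def __init__(self, *args, **kwargs):
            # A non-null replyTimeout means a ping that never gets a reply
            # eventually errors out, which triggers a reconnect.
            kwargs.setdefault("replyTimeout", 30)
            super().__init__(*args, **kwargs)
            self._pinger = task.LoopingCall(self._ping_connections)
            self._pinger.start(PING_INTERVAL_SECS, now=False)

        @defer.inlineCallbacks
        def _ping_connections(self):
            # `self.pool` is assumed to hold this factory's active connections.
            for connection in list(self.pool):
                try:
                    yield connection.ping()
                except Exception:
                    # A failed ping surfaces via replyTimeout / connectionLost,
                    # so there is nothing more to do here.
                    pass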
Erik Johnston
6633a4015a
Allow moving account data and receipts streams off master (#9104) 2021-01-18 15:47:59 +00:00
Erik Johnston
f08ef64926
Enforce all replication HTTP clients calls use kwargs (#9144) 2021-01-18 15:24:04 +00:00
Erik Johnston
b530eaa262
Allow running sendToDevice on workers (#9044) 2021-01-07 20:19:26 +00:00
Erik Johnston
63593134a1
Some cleanups to device inbox store. (#9041) 2021-01-07 17:20:44 +00:00
Erik Johnston
a7a913918c Merge remote-tracking branch 'origin/erikj/as_mau_block' into develop 2020-12-18 09:51:56 +00:00
Erik Johnston
4c33796b20 Correctly handle AS registrations and add test 2020-12-17 12:55:21 +00:00
Patrick Cloke
bd30cfe86a
Convert internal pusher dicts to attrs classes. (#8940)
This improves type hinting and should use less memory.
2020-12-16 11:25:30 -05:00
Patrick Cloke
1619802228
Various clean-ups to the logging context code (#8935) 2020-12-14 14:19:47 -05:00
Patrick Cloke
96358cb424
Add authentication to replication endpoints. (#8853)
Authentication is done by checking a shared secret provided
in the Synapse configuration file.
2020-12-04 10:56:28 -05:00
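A hedged sketch of what checking such a shared secret on an incoming replication request might look like, assuming the secret is presented as a bearer token in the `Authorization` header; the constant and function names here are illustrative, not the real Synapse API:

    import hmac

    REPLICATION_SECRET = "from-the-config-file"  # e.g. a worker replication secret

    def check_replication_auth(request) -> bool:
        """Return True if the request carries the expected shared secret."""
        auth_header = request.getHeader(b"Authorization") or b""
        if not auth_header.startswith(b"Bearer "):
            return False
        presented = auth_header[len(b"Bearer "):].decode("utf8")
        # Constant-time comparison avoids leaking the secret via timing.
        return hmac.compare_digest(presented, REPLICATION_SECRET)

A servlet would call this before dispatching the request and return a 403 if it fails.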
Andrew Morgan
5cbe8d93fe
Add typing to membership Replication class methods (#8809)
This PR grew out of #6739, and adds typing to some method arguments.

You'll notice that there are a lot of `# type: ignore` comments in here. This is due to the base methods not matching the overloads here; they are necessary to stop mypy complaining, but a better solution is #8828.
2020-11-27 10:49:38 +00:00
Andrew Morgan
e8d0853739
Generalise _maybe_store_room_on_invite (#8754)
There's a handy function called maybe_store_room_on_invite which allows us to create an entry in the rooms table for a room (and its version) that we aren't joined to yet, but which we can reference when ingesting events about it.

This is currently used for invites where we receive some stripped state about the room and pass it down via /sync to the client, without us being in the room yet.

There is a similar requirement for knocking, where we will eventually do the same thing and need an entry in the rooms table as well. Thus reusing this function works; however, its name needs to be generalised a bit.

Separated out from #6739.
2020-11-13 16:24:04 +00:00
Erik Johnston
f21e24ffc2
Add ability for access tokens to belong to one user but grant access to another user. (#8616)
We do it this way round so that only the "owner" can delete the access token (i.e. `/logout/all` by the "owner" also deletes that token, but `/logout/all` by the "target user" doesn't).

A future PR will add an API for creating such a token.

When the target user and authenticated entity are different the `Processed request` log line will be logged with a: `{@admin:server as @bob:server} ...`. I'm not convinced by that format (especially since it adds spaces in there, making it harder to use `cut -d ' '` to chop off the start of log lines). Suggestions welcome.
2020-10-29 15:58:44 +00:00
Erik Johnston
a6ea1a957e
Don't pull event from DB when handling replication traffic. (#8669)
I was trying to make it so that we didn't have to start a background task when handling RDATA, but that is a bigger job (due to all the code in `generic_worker`). However, I still think not pulling the event from the DB may help reduce some DB usage due to replication, even if most workers will simply go and pull that event from the DB later anyway.

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2020-10-28 12:11:45 +00:00
Erik Johnston
4215a3acd4
Don't unnecessarily start bg process in replication sending loop. (#8670) 2020-10-27 17:37:08 +00:00
Erik Johnston
2b7c180879
Start fewer opentracing spans (#8640)
#8567 started a span for every background process. This is good as it means all Synapse code that gets run should be in a span (unless in the sentinel logging context), but it means we generate about 15x as many spans as we did previously.

This PR attempts to reduce that number by a) not starting one for sending commands to Redis, and b) deferring starting background processes until after we're sure they're necessary.

I don't really know how much this will help.
2020-10-26 09:30:19 +00:00
Richard van der Hoff
97647b33c2
Replace DeferredCache with LruCache where possible (#8563)
Most of these uses don't need a full-blown DeferredCache; LruCache is lighter and more appropriate.
2020-10-19 12:20:29 +01:00
Richard van der Hoff
4182bb812f move DeferredCache into its own module 2020-10-14 23:38:14 +01:00
Richard van der Hoff
9f87da0a84 Rename Cache->DeferredCache 2020-10-14 23:38:14 +01:00
Richard van der Hoff
7eff59ec91 Add some more type annotations to Cache 2020-10-14 23:38:14 +01:00
Erik Johnston
b2486f6656
Fix message duplication if something goes wrong after persisting the event (#8476)
Should fix #3365.
2020-10-13 12:07:56 +01:00
Erik Johnston
8de3703d21
Make event persisters periodically announce position over replication. (#8499)
Currently, background processes that stream the events stream use the "minimum persisted position" (i.e. `get_current_token()`) rather than the vector clock style tokens. This is broadly fine as it doesn't matter if the background processes lag a small amount. However, in extreme cases (i.e. SyTest) where we only write to one event persister, the background processes will never make progress.

This PR changes it so that the `MultiWriterIdGenerator` keeps the current position of a given instance as up to date as possible (i.e. using the latest token it sees if it's not in the process of persisting anything), and then periodically announces that over replication. This then allows the "minimum persisted position" to advance, albeit with a small lag.
2020-10-12 15:51:41 +01:00
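A rough sketch of such a periodic announcement, assuming an ID generator exposing `get_current_token_for_writer` and a callable that emits a position update over replication; the class and callable names are assumptions for illustration:

    from twisted.internet import task

    ANNOUNCE_INTERVAL_SECS = 30  # assumed interval

    class PositionAnnouncer:  # hypothetical helper, not the real Synapse code
        def __init__(self, instance_name, stream_name, id_gen, send_position):
            self._instance_name = instance_name
            self._stream_name = stream_name
            self._id_gen = id_gen
            self._send_position = send_position  # emits a POSITION-style update
            self._looping_call = task.LoopingCall(self._announce)
            self._looping_call.start(ANNOUNCE_INTERVAL_SECS, now=False)

        def _announce(self):
            # Announce this writer's latest position so the "minimum persisted
            # position" seen by background processes can advance even when
            # this writer is idle.
            token = self._id_gen.get_current_token_for_writer(self._instance_name)
            self._send_position(self._stream_name, self._instance_name, token)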
Patrick Cloke
1781bbe319
Add type hints to response cache. (#8507) 2020-10-09 11:35:11 -04:00
Erik Johnston
5009ffcaa4
Only send RDATA for instance local events. (#8496)
When pulling events out of the DB to send over replication we were not
filtering by instance name, and so we were sending events for other
instances.
2020-10-09 13:10:33 +01:00
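A minimal illustration of the fix described above: filter the rows pulled from the DB down to those written by this instance before sending RDATA. The field and variable names are assumptions:

    def filter_local_rows(rows, local_instance_name):
        """Keep only the events persisted by this instance.

        Each row is assumed to carry the name of the instance that wrote it;
        rows written by other instances must not be re-sent over replication.
        """
        return [row for row in rows if row.instance_name == local_instance_name]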
Patrick Cloke
c9c0ad5e20
Remove the deprecated Handlers object (#8494)
All handlers are now available via get_*_handler() methods on the HomeServer.
2020-10-09 07:24:34 -04:00
Erik Johnston
6c5d5e507e
Add unit test for event persister sharding (#8433) 2020-10-02 09:57:12 +01:00
Patrick Cloke
4ff0201e62
Enable mypy checking for unreachable code and fix instances. (#8432) 2020-10-01 08:09:18 -04:00
Erik Johnston
ea70f1c362
Various clean ups to room stream tokens. (#8423) 2020-09-29 21:48:33 +01:00
Richard van der Hoff
866c84da8d
Add metrics to track success/otherwise of replication requests (#8406)
One hope is that this might provide some insights into #3365.
2020-09-29 11:06:11 +01:00
Erik Johnston
f112cfe5bb
Fix MultiWriteIdGenerator's handling of restarts. (#8374)
On startup `MultiWriterIdGenerator` fetches the maximum stream ID for
each instance from the table and uses that as its initial "current
position" for each writer. This is problematic as a) it involves either
a scan of the events table or an index (neither of which is ideal), and b)
if rows are being persisted out of order elsewhere while the process
restarts then using the maximum stream ID is not correct. This could
theoretically lead to race conditions where e.g. events that are
persisted out of order are not sent down sync streams.

We fix this by creating a new table that tracks the current positions of
each writer to the stream, and update it each time we finish persisting
a new entry. This is a relatively small overhead when persisting events.
However for the cache invalidation stream this is a much bigger relative
overhead, so instead we note that for invalidation we don't actually
care about reliability over restarts (as there are no caches to
invalidate) and simply don't bother reading and writing to the new table
in that particular case.
2020-09-24 16:53:51 +01:00
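A sketch of the bookkeeping this describes, assuming a table keyed by stream and writer; the table name, schema, and placeholder style (which varies by database driver) are illustrative rather than the exact Synapse schema:

    CREATE_SQL = """
        CREATE TABLE IF NOT EXISTS stream_positions (
            stream_name TEXT NOT NULL,
            instance_name TEXT NOT NULL,
            stream_id BIGINT NOT NULL,
            UNIQUE (stream_name, instance_name)
        )
    """

    UPSERT_SQL = """
        INSERT INTO stream_positions (stream_name, instance_name, stream_id)
        VALUES (?, ?, ?)
        ON CONFLICT (stream_name, instance_name)
        DO UPDATE SET stream_id = excluded.stream_id
    """

    def update_stream_position(txn, stream_name, instance_name, stream_id):
        # Called after each successful persist. On startup a writer reads its
        # row back instead of scanning the events table for a maximum, which
        # also stays correct when rows were persisted out of order.
        txn.execute(UPSERT_SQL, (stream_name, instance_name, stream_id))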
Erik Johnston
ac11fcbbb8
Add EventStreamPosition type (#8388)
The idea is to remove some of the places we pass around `int`, where it can represent one of two things:

1. the position of an event in the stream; or
2. a token that partitions the stream, used as part of the stream tokens.

The valid operations are then:

1. did a position happen before or after a token;
2. get all events that happened before or after a token; and
3. get all events between two tokens.

(Note that we don't want to allow other operations as we want to change the tokens to be vector clocks rather than simple ints)
2020-09-24 13:24:17 +01:00
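A small sketch of the kind of type this implies, using a single-integer token for simplicity (the real tokens are more complex, and the attribute and method names here are illustrative):

    import attr

    @attr.s(slots=True, frozen=True)
    class EventStreamPosition:
        """Where a single event landed in the stream (writer plus stream id)."""

        instance_name = attr.ib(type=str)
        stream = attr.ib(type=int)

        def persisted_after(self, token: int) -> bool:
            # Valid operation 1: did this position happen after a given token?
            return self.stream > token

Keeping positions and tokens as distinct types makes it a type error to use one where the other is expected, which is the point of the change.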
Patrick Cloke
8a4a4186de
Simplify super() calls to Python 3 syntax. (#8344)
This converts calls like super(Foo, self) -> super().

Generated with:

    sed -i "" -Ee 's/super\([^\(]+\)/super()/g' **/*.py
2020-09-18 09:56:44 -04:00
Jonathan de Jong
a3f124b821
Switch metaclass initialization to python 3-compatible syntax (#8326) 2020-09-16 15:15:55 -04:00
Patrick Cloke
aec294ee0d
Use slots in attrs classes where possible (#8296)
slots use less memory (and attribute access is faster) while slightly
limiting the flexibility of the class attributes. This focuses on objects
which are instantiated "often" and for short periods of time.
2020-09-14 12:50:06 -04:00
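For illustration, enabling slots on an attrs class is a one-argument change; the class below is a made-up example rather than one from the PR:

    import attr

    @attr.s(slots=True)
    class RoomNotifCounts:  # hypothetical example class
        notify_count = attr.ib(type=int)
        highlight_count = attr.ib(type=int)

    # With slots=True instances have no per-object __dict__, which saves
    # memory and makes attribute access slightly faster, at the cost of not
    # being able to add new attributes at runtime.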
Patrick Cloke
d2a3eb04a4 Fix typos in comments. 2020-09-14 11:46:58 -04:00
Erik Johnston
04cc249b43
Add experimental support for sharding event persister. Again. (#8294)
This is *not* ready for production yet. Caveats:

1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
2020-09-14 10:16:41 +01:00
Erik Johnston
5d3e306d9f
Clean up Notifier.on_new_room_event code path (#8288)
The idea here is that we pass the `max_stream_id` to everything, and only use the stream ID of the particular event to figure out *when* the max stream position has caught up to the event and we can notify people about it.

This is to maintain the distinction between the position of an item in the stream (i.e. event A has stream ID 513) and a token that can be used to partition the stream (i.e. give me all events after stream ID 352). This distinction becomes important when the tokens are more complicated than a single number, which they will be once we start tracking the position of multiple writers in the tokens.

The valid operations here are:

1. Is a position before or after a token
2. Fetching all events between two tokens
3. Merging multiple tokens to get the "max", i.e. `C = max(A, B)` means that any position P that is before A *or* before B is also before C.

Future PR will change the token type to a dedicated type.
2020-09-10 13:24:43 +01:00
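A toy illustration of operation 3, treating a token as a map from writer instance to the last position seen from that writer; this is a simplification of the real token type:

    from typing import Dict

    Token = Dict[str, int]  # writer instance name -> last stream position seen

    def merge_tokens(a: Token, b: Token) -> Token:
        """Compute C = max(A, B).

        A position (writer, stream_id) is "before" a token if stream_id is at
        or below that writer's entry, so taking the component-wise maximum
        guarantees anything before A *or* before B is also before C.
        """
        merged = dict(a)
        for writer, pos in b.items():
            merged[writer] = max(merged.get(writer, 0), pos)
        return merged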
Patrick Cloke
2ea1c68249
Remove some unused distributor signals (#8216)
Removes the `user_joined_room` signal and stops calling it, since there are no observers.

Also cleans up some other unused signals and related code.
2020-09-09 12:22:00 -04:00
Erik Johnston
c9dbee50ae
Fixup pusher pool notifications (#8287)
`pusher_pool.on_new_notifications` expected a min and max stream ID; however, that was not what we were passing in. Instead, let's just pass it the current max stream ID and have it track the last stream ID it got passed.

I believe that it mostly worked as we called the function for every event. However, it would break for events that got persisted out of order, i.e. that were persisted but the max stream ID wasn't incremented as not all preceding events had finished persisting, and push for that event would be delayed until another event got pushed to the affected users.
2020-09-09 16:56:08 +01:00
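A hedged sketch of that calling convention, where the pool is handed only the current max stream ID and remembers how far it has already processed; the class shape and method names are simplified assumptions:

    class PusherPool:  # simplified sketch, not the real class
        def __init__(self):
            self._last_stream_id = 0

        async def on_new_notifications(self, max_stream_id: int) -> None:
            if max_stream_id <= self._last_stream_id:
                # Nothing new has been persisted since we last ran.
                return
            prev, self._last_stream_id = self._last_stream_id, max_stream_id
            # Process everything in (prev, max_stream_id]. Events persisted
            # out of order are picked up once the max catches up to them.
            await self._process_range(prev, max_stream_id)

        async def _process_range(self, min_stream_id, max_stream_id):
            ...  # look up push actions in the range and send the pushes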
Erik Johnston
dc9dcdbd59 Revert "Fixup pusher pool notifications"
This reverts commit e7fd336a53.
2020-09-09 16:19:22 +01:00
Erik Johnston
e7fd336a53 Fixup pusher pool notifications 2020-09-09 16:17:50 +01:00
Patrick Cloke
c619253db8
Stop sub-classing object (#8249) 2020-09-04 06:54:56 -04:00
Brendan Abolivier
9f8abdcc38
Revert "Add experimental support for sharding event persister. (#8170)" (#8242)
* Revert "Add experimental support for sharding event persister. (#8170)"

This reverts commit 82c1ee1c22.

* Changelog
2020-09-04 10:19:42 +01:00
Erik Johnston
82c1ee1c22
Add experimental support for sharding event persister. (#8170)
This is *not* ready for production yet. Caveats:

1. We should write some tests...
2. The stream token that we use for events can get stalled at the minimum position of all writers. This means that new events may not be processed and e.g. sent down sync streams if a writer isn't writing or is slow.
2020-09-02 15:48:37 +01:00
Richard van der Hoff
aa07c37cf0
Move and rename get_devices_with_keys_by_user (#8204)
* Move `get_devices_with_keys_by_user` to `EndToEndKeyWorkerStore`

this seems a better fit for it.

This commit simply moves the existing code: no other changes at all.

* Rename `get_devices_with_keys_by_user`

to better reflect what it does.

* get_device_stream_token abstract method

To avoid referencing fields which are declared in the derived classes, make
`get_device_stream_token` abstract, and define that in the classes which define
`_device_list_id_gen`.
2020-09-01 12:41:21 +01:00
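A minimal sketch of the abstract-method pattern described in the last bullet, with simplified class names:

    import abc

    class DeviceWorkerStore(metaclass=abc.ABCMeta):  # simplified base class
        @abc.abstractmethod
        def get_device_stream_token(self) -> int:
            """Return the current token of the device list stream."""
            ...

    class DeviceStore(DeviceWorkerStore):
        def __init__(self, device_list_id_gen):
            self._device_list_id_gen = device_list_id_gen

        def get_device_stream_token(self) -> int:
            # Only this class declares _device_list_id_gen, so only it
            # implements the method; the base class never has to reference
            # a field it does not define.
            return self._device_list_id_gen.get_current_token()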
Erik Johnston
3b4556cf87
Fix wait_for_stream_position for multiple waiters. (#8196)
This fixes a bug where having multiple callers waiting on the same
stream and position would cause it to try to compare two deferreds,
which fails (due to the sorted list having entries of `Tuple[int,
Deferred]`).
2020-08-28 17:12:45 +01:00
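One generic way to avoid ever comparing two Deferreds in such a sorted structure is to add an integer tiebreaker, a standard heapq idiom. This illustrates the failure mode and a possible fix rather than the exact change made in the PR:

    import heapq
    from itertools import count

    from twisted.internet import defer

    _tiebreaker = count()
    _waiters = []  # heap of (position, tiebreaker, deferred)

    def wait_for_stream_position(position: int) -> defer.Deferred:
        d = defer.Deferred()
        # The unique, monotonically increasing tiebreaker guarantees the
        # Deferreds themselves are never compared, even when several callers
        # wait on the same position.
        heapq.heappush(_waiters, (position, next(_tiebreaker), d))
        return d

    def on_position_advanced(current_position: int) -> None:
        # Fire every waiter whose target position has now been reached.
        while _waiters and _waiters[0][0] <= current_position:
            _, _, d = heapq.heappop(_waiters)
            d.callback(current_position)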
Erik Johnston
e3c91a3c55
Make SlavedIdTracker.advance have same interface as MultiWriterIDGenerator (#8171) 2020-08-26 13:15:20 +01:00
Erik Johnston
c9c544cda5
Remove ChainedIdGenerator. (#8123)
It's just a thin wrapper around two ID gens to make `get_current_token`
and `get_next` return tuples. This can easily be replaced by calling the
appropriate methods on the underlying ID gens directly.
2020-08-19 13:41:51 +01:00
Patrick Cloke
eebf52be06
Be stricter about JSON that is accepted by Synapse (#8106) 2020-08-19 07:26:03 -04:00
Erik Johnston
76d21d14a0
Separate get_current_token into two. (#8113)
The function is used for two purposes: 1) for subscribers of streams to
get a token they can use to get further updates with, and 2) for
replication to track the position of the writers of the stream.

For streams with a single writer the two scenarios produce the same
result; however, the situation becomes complicated for streams with
multiple writers. The current `MultiWriterIdGenerator` does not
correctly handle the first case (which is not an issue, as it's only used
for the `caches` stream, which nothing subscribes to outside of
replication).
2020-08-19 10:39:31 +01:00
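A sketch of the resulting split on a heavily simplified generator; the method names follow the description above, but the real `MultiWriterIdGenerator` is considerably more involved:

    class SimpleMultiWriterIdGen:  # illustrative only
        def __init__(self):
            self._positions = {}  # writer instance name -> last persisted id

        def advance(self, instance_name: str, new_id: int) -> None:
            prev = self._positions.get(instance_name, 0)
            self._positions[instance_name] = max(prev, new_id)

        def get_current_token(self) -> int:
            # Purpose 1: a token subscribers can safely resume from, i.e.
            # everything up to the slowest writer has definitely been seen.
            return min(self._positions.values(), default=0)

        def get_current_token_for_writer(self, instance_name: str) -> int:
            # Purpose 2: replication tracking of one writer's own position.
            return self._positions.get(instance_name, 0)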