When a server leaves a room it may stop sharing a room with remote
users, and thus not get any updates to their device lists. So we need to
check for this case and delete those device lists from the cache.
We don't need to do this if we stop sharing a room because the remote
user leaves the room, because we already track that case by looking at
membership changes.
If we detect that the remote users' keys may have changed then we should
attempt to resync against the remote server rather than using the
(potentially) stale local cache.
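A rough sketch of both checks, in Python; the helper names
(`get_rooms_shared_with`, `delete_cached_device_list`,
`device_list_may_be_stale`, `resync_device_list`) are placeholders, not
the real storage/handler API:

```python
async def prune_or_resync_device_list(store, device_handler, user_id: str) -> None:
    """Keep our cached view of a remote user's devices honest.

    If we no longer share any room with the user we will not receive
    further device-list updates, so the cache entry can only go stale:
    delete it.  If we do still share a room but the keys may have
    changed, resync from the remote server instead of trusting the
    (potentially) stale cache.
    """
    # Placeholder helper: the rooms we currently share with this user.
    shared_rooms = await store.get_rooms_shared_with(user_id)

    if not shared_rooms:
        # No shared rooms means no more updates for this user, so drop
        # the cached device list rather than let it rot.
        await store.delete_cached_device_list(user_id)
        return

    if await store.device_list_may_be_stale(user_id):
        # The keys may have changed while we weren't tracking them:
        # resync against the remote server.
        await device_handler.resync_device_list(user_id)
```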
We were sending device updates down both the federation stream and the
device streams. This meant there was a race if the federation sender
worker processed the federation stream first: when the sender checked
for new device updates, the slaved ID generator hadn't yet been updated
with the new stream IDs and so returned nothing.
This situation is handled correctly for events/receipts/etc. by not
sending updates down the federation stream and instead having the
federation sender worker listen on the other streams and poke the
transaction queues as appropriate.
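A minimal sketch of that pattern applied to device lists; the class and
method names are illustrative rather than the actual worker code:

```python
class FederationSenderWorker:
    """Sketch of the events/receipts pattern: the sender worker listens
    on the relevant replication stream itself and pokes its own
    transaction queues, instead of relying on updates pushed down the
    federation stream (which races with the slaved ID generator)."""

    def __init__(self, device_store, federation_sender):
        self._device_store = device_store
        self._federation_sender = federation_sender

    async def on_device_stream_row(self, token: int, user_id: str) -> None:
        # Advance our local view of the device stream first, so that when
        # we look up "new" updates the ID generator already covers them.
        self._device_store.advance_stream_position(token)

        # Then poke the per-destination transaction queues to send the
        # device update out over federation.
        await self._federation_sender.send_device_updates(user_id)
```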
Otherwise it's just stale data, which may get deleted later anyway so
can't be relied on. It's also a bit of a shotgun if we're trying to get
the current state of a room we're not in.
These are easier to work with than the strings and we normally have one around.
This fixes `FederationHandler._persist_auth_tree`, which was passing a
`RoomVersion` object into `event_auth.check` instead of a string.
* Add note that the user_dir worker requires disabling user directory
updates on the main synapse process.
* Add note that the federation_reader worker should have the federation
listener resource configured.
Signed-off-by: Jason Robinson <jasonr@matrix.org>
There are quite a few places that we assume that a redaction event has a
corresponding `redacts` key, which is not always the case. So let's
cheekily make it so that `event.redacts` just returns None instead.
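For illustration, a toy wrapper with the new behaviour (this is not the
real `EventBase`, just the shape of the change):

```python
class Event:
    """Toy event wrapper showing the new behaviour: `redacts` falls back
    to None when the underlying dict has no such key, so callers no
    longer need to guard every access."""

    def __init__(self, event_dict: dict):
        self._event_dict = event_dict

    @property
    def redacts(self):
        # Previously this was effectively self._event_dict["redacts"],
        # which blew up on malformed redaction events.
        return self._event_dict.get("redacts")


assert Event({"type": "m.room.redaction"}).redacts is None
assert Event({"type": "m.room.redaction", "redacts": "$abc"}).redacts == "$abc"
```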
The old statement returned `None` for such a `password_config` (like the
one created on first run), so retrieval of the `pepper` key then failed
with `AttributeError`.
Fixes #5315
Signed-off-by: Ivan Vilata i Balaguer <ivan@selidor.net>
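A small sketch of the shape of the fix; the config keys are the ones
mentioned above, but the surrounding code is simplified:

```python
def read_pepper(config: dict) -> str:
    # A `password_config:` section that is present but empty is parsed as
    # None; the old code then called .get() on None and raised
    # AttributeError.  Falling back to {} covers both the missing and the
    # empty case.
    password_config = config.get("password_config") or {}
    return password_config.get("pepper", "")


assert read_pepper({}) == ""
assert read_pepper({"password_config": None}) == ""
assert read_pepper({"password_config": {"pepper": "s3cret"}}) == "s3cret"
```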
* Raise an exception if there are pending background updates, so we
exit with a non-zero code (a sketch follows this list)
* Changelog
* Port synapse_port_db to async/await
* Port update_database to async/await
* Add version string to mocked homeservers
* Remove unused imports
* Convert overlooked bits to async/await
* Fixup logging contexts
* Fix imports
* Add a way to print an error without raising an exception
* Incorporate review
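A sketch of the pending-background-updates check from the first bullet;
the query and error message are illustrative:

```python
def assert_no_pending_background_updates(db_conn) -> None:
    """Abort the port if background updates are still pending, so the
    script exits with a non-zero code instead of silently porting an
    inconsistent database."""
    cur = db_conn.cursor()
    cur.execute("SELECT COUNT(*) FROM background_updates")
    (pending,) = cur.fetchone()
    if pending:
        raise Exception(
            "Pending background updates exist in the database: start Synapse "
            "again and let them finish before running the port script."
        )
```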
Turns out that figuring out a remote user id for the SAML user isn't quite as obvious as it seems. Factor it out to the SamlMappingProvider so that it's easy to control.
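Roughly the shape of the new provider hook (indicative only, not a
verbatim copy of the API):

```python
class DefaultSamlMappingProvider:
    """Sketch: the decision about which SAML attribute identifies the
    remote user is delegated to the mapping provider, so deployments can
    override it without patching the SAML handler."""

    def get_remote_user_id(self, saml_response, client_redirect_url: str) -> str:
        # By default use the `uid` attribute from the SAML2 response, but
        # a custom provider can pick whatever its IdP actually guarantees.
        try:
            return saml_response.ava["uid"][0]
        except KeyError:
            raise Exception("'uid' attribute not found in SAML2 response")
```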
Generally try to make this more comprehensible, and make it match the
conventions.
I've removed the documentation for all the settings which allow you to change
the names of the template files, because I can't really see why they are
useful.
* Port synapse.replication.tcp to async/await
* Newsfile
* Correctly document type of on_<FOO> functions as async
* Don't be overenthusiastic with the asyncing....
Currently we rely on `current_state_events` to figure out what rooms a
user was in and their last membership event in there. However, if the
server leaves the room then the table may be cleaned up and that
information is lost. So let's add a table that separately holds that
information.
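A sketch of what reading from such a table might look like; the table
and column names are invented for illustration, not the real schema:

```python
def get_rooms_and_last_memberships(txn, user_id: str):
    """Read a user's rooms and last membership event from a dedicated
    table rather than `current_state_events`, which may be purged once
    this server has left the room."""
    txn.execute(
        """
        SELECT room_id, event_id, membership
        FROM local_room_memberships
        WHERE user_id = ?
        """,
        (user_id,),
    )
    return txn.fetchall()
```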
AdditionalResource really doesn't add any value, and it gets in the way for
resources which want to support child resources or the like. So, if the
resource object already implements the IResource interface, don't bother
wrapping it.
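A sketch of the resulting wiring (simplified; the `handle_request`
attribute is a placeholder for however the legacy handler is obtained):

```python
from twisted.web.resource import IResource

from synapse.http.additional_resource import AdditionalResource


def to_resource(hs, resource):
    """Mount the module's resource directly if it is already a full
    IResource (so it can serve child resources itself); otherwise wrap
    its handler in AdditionalResource as before."""
    if IResource.providedBy(resource):
        return resource
    return AdditionalResource(hs, resource.handle_request)
```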
This was ill-advised. We can't modify verify_keys here, because the response
object has already been signed by the requested key.
Furthermore, it's somewhat unnecessary because existing versions of Synapse
(which get upset that the notary key isn't present in verify_keys) will fall
back to a direct fetch via `/key/v2/server`.
Also: more tests for fetching keys via perspectives; it would be nice if we actually tested the case where our fetcher can't talk to our notary implementation.
A mount of the form `./matrix-config:/etc` overwrites the contents of the container's `/etc` directory. Since all the valid CA certificates are stored in `/etc`, `synapse.push.httppusher`, for example, can no longer validate the certificate from matrix.org.
* Kill off redundant SynapseRequestFactory
We already get the Site via the Channel, so there's no need for a dedicated
RequestFactory: we can just use the right constructor.
* Workaround for error when fetching notary's own key
As a notary server, when we return our own keys, include all of our signing
keys in verify_keys.
This is a workaround for #6596.
If acme was enabled, the sdnotify startup hook would never be run because we
would try to add it to a hook which had already fired.
There's no need to delay it: we can sdnotify as soon as we've started the
listeners.
* Remove redundant python2 support code
`str.decode()` doesn't exist on python3, so presumably this code was doing
nothing
* Filter out pushers with corrupt data
When we get a row with unparsable JSON, drop the row, rather than returning a
row with a null `data`, which will then cause an explosion later on (see the
sketch after this list).
* Improve logging when we can't start a pusher
Log the ID to help us understand the problem
* Make email pusher setup more robust
We know we'll have a `data` member, since that comes from the database. What we
*don't* know is if that is a dict, and if that has a `brand` member, and if
that member is a string.
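A sketch of the corrupt-data filtering described in the second bullet;
the row/column shape is assumed, not copied from the schema:

```python
import json
import logging

logger = logging.getLogger(__name__)


def filter_pusher_rows(rows):
    """Yield only pusher rows whose `data` column is valid JSON; corrupt
    rows are dropped (and logged with their id) instead of being returned
    with data=None and exploding later."""
    for row in rows:
        try:
            row["data"] = json.loads(row["data"])
        except (TypeError, ValueError):
            logger.warning("Invalid JSON in data for pusher %s; skipping", row.get("id"))
            continue
        yield row
```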
Previously we tried to be clever and filter out some unnecessary event
IDs to keep the auth chain small, but that had some annoying
interactions with state res v2 so we stop doing that for now.
This fixes a weird bug where, if you were determined enough, you could end up with a rejected event forming part of the state at a backwards extremity. Authing that backwards extremity would then lead to us trying to pull the rejected event from the db (with allow_rejected=False), which would fail with a 404.
When we request the state/auth_events to populate a backwards extremity (on
backfill or in the case of missing events in a transaction push), we should
check that the returned events are in the right room rather than blindly using
them in the room state or auth chain.
Given that _get_events_from_store_or_dest takes a room_id, it seems clear that
it should be sanity-checking the room_id of the requested events, so let's do
it there.
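A sketch of the sanity check; the real method also does the fetching,
this just shows the filter:

```python
import logging

logger = logging.getLogger(__name__)


def keep_events_in_room(room_id: str, events):
    """Drop any events the remote server returned for a different room,
    so they can never end up in our room state or auth chain."""
    kept = []
    for event in events:
        if event.room_id != room_id:
            logger.warning(
                "Remote returned event %s in wrong room %s (expected %s); ignoring",
                event.event_id, event.room_id, room_id,
            )
            continue
        kept.append(event)
    return kept
```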
Make it return the state *after* the requested event, rather than the one
before it. This is a bit easier and requires fewer calls to
`get_events_from_store_or_dest`.