Currently when fetching state groups from the data store we make two
hits to the database: once for members and once for non-members (unless
the request is filtered to one or the other). This adds needless load to the
database, so this PR refactors the lookup to make only a single database
hit.
Fixes a bug introduced in https://github.com/matrix-org/synapse/pull/1783 which
meant that single backslashes were not allowed in event field filters.
The intention here is to allow single backslashes, but disallow
double backslashes.
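A minimal sketch of the intended behaviour (the pattern here is illustrative, not the exact one used in the fix):

```python
import re

# Reject any field filter containing a double backslash; a single
# backslash (e.g. escaping a literal dot in a field name) stays allowed.
DOUBLE_BACKSLASH = re.compile(r"\\\\")

def is_valid_field(field):
    return not DOUBLE_BACKSLASH.search(field)

assert is_valid_field(r"content.m\.relates_to")     # single backslash: ok
assert not is_valid_field("content.body\\\\foo")    # double backslash: rejected
```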
Broadly three things here:
* disable W504 which seems a bit whacko
* remove a bunch of `as e` expressions from exception handlers that don't use
them
* use `r""` for strings which include backslashes
Also, we don't use pep8 any more, so we can get rid of the duplicate config
there.
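For instance, a raw string keeps the backslashes literal, so flake8 no longer flags them as invalid escape sequences (illustrative example):

```python
import re

# Without r"", flake8 warns about the invalid escape sequence "\d"
# (W605); the raw string keeps the backslashes literal.
pattern = re.compile(r"\d+\.\d+")  # matches e.g. "3.14"
```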
Wrap calls to deferToThread() in a thing which uses a child logcontext to
attribute CPU usage to the right request.
While we're in the area, remove the logcontext_tracer stuff, which is never
used, and afaik doesn't work.
Fixes #4064
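A sketch of the shape of such a wrapper (the logcontext helpers shown existed at the time, but treat the wrapper name and the `parent_context` argument as illustrative):

```python
from twisted.internet import threads

from synapse.util.logcontext import LoggingContext, make_deferred_yieldable

def defer_to_thread_with_logcontext(reactor, f, *args, **kwargs):
    """Run f in the reactor's thread pool, attributing its CPU usage
    to a child of the calling request's logcontext."""
    parent = LoggingContext.current_context()

    def wrapped():
        # Enter a child logcontext so that CPU time spent in f is
        # recorded against the request that scheduled this work.
        with LoggingContext(parent_context=parent):
            return f(*args, **kwargs)

    return make_deferred_yieldable(
        threads.deferToThreadPool(reactor, reactor.getThreadPool(), wrapped)
    )
```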
This brings it into line with on_new_notifications and on_new_receipts. It
requires a little bit of hoop-jumping in EmailPusher to load the throttle
params before the first loop.
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
Each pusher has its own loop which runs for as long as it has work to do. This
should run in its own background thread with its own logcontext, as other
similar loops elsewhere in the system do - which means that CPU usage is
consistently attributed to that loop, rather than to whatever request happened
to start the loop.
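The pattern looks roughly like this (a sketch assuming an `HttpPusher`-style class with a `_process` loop; `run_as_background_process` is the existing helper for attributing background work):

```python
from synapse.metrics.background_process_metrics import run_as_background_process

class HttpPusher(object):
    _is_processing = False
    max_stream_ordering = 0

    def on_new_notifications(self, min_stream_ordering, max_stream_ordering):
        # Synchronous: just record the new high-water mark and poke the
        # loop, rather than doing the work inline.
        self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
        self._start_processing()

    def _start_processing(self):
        if self._is_processing:
            return
        # Run the loop as a background process so its CPU usage is
        # attributed to the loop, not to whichever request woke it up.
        run_as_background_process("httppush.process", self._process)

    def _process(self):
        ...  # the pusher's main loop (elided)
```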
As of #4027, we require psutil to be installed, so it should be in our
dependency list. We can also remove some of the conditional import code
introduced by #992.
Fixes #4062.
It's quite important that get_missing_events returns the *latest* events in the
room; however we were pulling event ids out of the database until we got *at
least* 10, and then taking the *earliest* of the results.
We also shouldn't really be relying on depth, and should be checking the
room_id.
move the example email templates into the synapse package so that they can be
used as package data, which means that all of the packaging mechanisms (pip,
docker, debian, arch, etc) should now include the example templates.
In order to grandfather in people who relied on the templates being in the old
place, check for that situation and fall back to using the defaults if the
templates directory does not exist.
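The fallback might look something like this (a sketch; the function name is illustrative):

```python
import os

import pkg_resources

def choose_template_dir(configured_dir):
    # If the configured directory doesn't exist (e.g. an old config
    # still pointing at the pre-packaging location), fall back to the
    # templates shipped as package data inside synapse.
    if configured_dir and os.path.isdir(configured_dir):
        return configured_dir
    return pkg_resources.resource_filename("synapse", "res/templates")
```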
- Improve logging: log things in the right order, include destination and txids
in all log lines, don't log successful responses twice
- Fix the docstring on TransportLayerClient.send_transaction
- Don't use treq.request, which is overcomplicated for our purposes: just use a
twisted.web.client.Agent.
- simplify the logic for setting up the bodyProducer
- fix bytes/str confusions
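Using a bare Agent looks roughly like this (a minimal sketch; the URL, method and body are illustrative):

```python
from io import BytesIO

from twisted.internet import reactor
from twisted.web.client import Agent, FileBodyProducer
from twisted.web.http_headers import Headers

agent = Agent(reactor)

# Note the bytes (not str) method, URI and header values.
d = agent.request(
    b"PUT",
    b"https://remote.example/_matrix/federation/v1/send/1234",
    Headers({b"Content-Type": [b"application/json"]}),
    FileBodyProducer(BytesIO(b'{"pdus": []}')),
)
```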
We're better off hashing just the event_id rather than the whole ((type,
state_key), event_id) tuple, so use a dict instead of a set.
Also, iteritems > items.
Since we don't actually delete the keys, just mark the versions as deleted in
the db rather than actually deleting them; then we won't reuse version
numbers.
Fixes https://github.com/vector-im/riot-web/issues/7448
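A sketch of the soft-delete (table and column names here are illustrative):

```python
def delete_version_txn(txn, user_id, version):
    # Soft-delete: flag the row rather than removing it, so the next
    # version number allocated is always greater than any seen before.
    txn.execute(
        "UPDATE e2e_room_keys_versions SET deleted = 1"
        " WHERE user_id = ? AND version = ?",
        (user_id, version),
    )
```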
If a looping call function errors, then it kills the loop entirely.
Currently it throws away the exception logs, so we should make it
actually log them.
Fixes #3929
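The fix amounts to attaching an errback that logs the failure, along these lines (a sketch):

```python
import logging

from twisted.internet.task import LoopingCall

logger = logging.getLogger(__name__)

def looping_call(f, interval_ms):
    call = LoopingCall(f)
    d = call.start(interval_ms / 1000.0, now=False)
    # If f raises, the LoopingCall's deferred errbacks and the loop
    # stops; log the failure rather than silently dropping it.
    d.addErrback(
        lambda failure: logger.error(
            "Looping call died",
            exc_info=(failure.type, failure.value, failure.getTracebackObject()),
        )
    )
    return call
```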
If a connection is lost before a request is read from Request, Twisted
sets `method` (and `uri`) attributes to dummy values. These dummy values
have incorrect types (i.e. they're not bytes), and so things like
`__repr__` would raise an exception.
To fix this we add a helper method to return the method with a
consistent type.
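Something like (a sketch; the helper name is illustrative):

```python
def get_request_method(request):
    # request.method may be a non-bytes placeholder if the connection
    # was dropped before the request line was read; coerce it so that
    # callers (e.g. __repr__) always get a str.
    method = request.method
    if isinstance(method, bytes):
        return method.decode("ascii")
    return str(method)
```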
In particular, we assume that the name and canonical alias events in
the state have not been rejected. In practice this may not be the case
(though we should probably think about fixing that) so let's ensure that
we gracefully handle that case, rather than 404'ing the sync request
like we do now.
If we have a forward extremity for a room as `E`, and we receive `A` and `B`,
s.t. `A -> B -> E`, and `B` also points to an unknown event `X`, then we need
to do state res between `X` and `E`.
When that happens, we need to make sure we include `X` in the state that goes
into the state res alg.
Fixes #3934.
If we've fetched state events from remote servers in order to resolve the state
for a new event, we need to actually pass those events into
resolve_events_with_factory (so that it can do the state res) and then persist
the ones we need - otherwise other bits of the codebase get confused about why
we have state groups pointing to non-existent events.
get_state_groups returns a map from state_group_id to a list of FrozenEvents,
so was very much the wrong thing to be putting as one of the entries in the
list passed to resolve_events_with_factory (which expects maps from
(event_type, state_key) to event id).
We actually want get_state_groups_ids().values() rather than
get_state_groups().
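To make the shapes concrete (a sketch):

```python
from twisted.internet import defer

@defer.inlineCallbacks
def _get_state_maps(store, room_id, event_ids):
    # get_state_groups()     -> {state_group_id: [FrozenEvent, ...]}
    # get_state_groups_ids() -> {state_group_id: {(type, state_key): event_id}}
    #
    # resolve_events_with_factory wants maps from (type, state_key) to
    # event_id, so it's the latter's values that we need:
    state_groups_ids = yield store.get_state_groups_ids(room_id, event_ids)
    defer.returnValue(list(state_groups_ids.values()))
```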
This fixes the main problem in #3923, but there are other problems with this
bit of code which get discovered once you do so.
* add some comments on things that look a bit bogus
* rename this `state` variable to avoid confusion with the `state` used
elsewhere in this function. (There was no actual conflict, but it was
a confusing bit of spaghetti.)
when processing incoming transactions, it can be hard to see what's going on,
because we process a bunch of stuff in parallel, and because we may end up
recursively working our way through a chain of three or four events.
This commit creates a way to use logcontexts to add the relevant event ids to
the log lines.
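In use it looks something like this (a sketch, assuming a `nested_logging_context` helper that suffixes the current context's name with the event id):

```python
from twisted.internet import defer

from synapse.util.logcontext import nested_logging_context

@defer.inlineCallbacks
def _handle_received_pdu(self, origin, pdu):
    # Everything logged inside the block has the event id appended to
    # the current logcontext's name, so interleaved (and recursive)
    # processing of several events can be told apart in the logs.
    with nested_logging_context(pdu.event_id):
        yield self._process_received_pdu(origin, pdu)
```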
This ensures that its resource usage metrics get recorded somewhere rather than
getting lost.
(It also fixes an error when called from a nested logging context which
completes before the bg process)
There's really no point in checking for destinations called "localhost" because
there is nothing stopping people creating other DNS entries which point to
127.0.0.1. The right fix for this is
https://github.com/matrix-org/synapse/issues/3953.
Blocking localhost, on the other hand, means that you get a surprise when
trying to connect a test server on localhost to an existing server (with a
'normal' server_name).
When we were authorizing an event, if there was no `m.room.create` in its
auth_events, we would raise a SynapseError with a cryptic message, which then
meant that we would bail out of processing any incoming events, rather than
storing a rejection for the faulty event and moving on.
We should treat the absent event the same as any other auth failure, by
raising an AuthError, so that the event is marked as rejected.
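i.e. roughly (a sketch; the check mirrors the other auth failures):

```python
from synapse.api.constants import EventTypes
from synapse.api.errors import AuthError

def check_has_create_event(auth_events):
    # Raising AuthError (rather than a generic SynapseError) means the
    # event is stored as rejected and processing moves on.
    if (EventTypes.Create, "") not in auth_events:
        raise AuthError(403, "No create event in auth events")
```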
It used to try and produce an estimate, which was sometimes negative.
This caused metrics to be sad, so let's always just calculate it from
scratch.
(This appears to have been a longstanding bug, but one which has been made more
of a problem by #3932 and #3933).
(This was originally done by Erik as part of #3933. I'm cherry-picking it
because really it's a fix in its own right)
Synapse doesn’t allow for media resources to be played directly from
Chrome. It is a problem for users on other networks (e.g. IRC)
communicating with Matrix users through a gateway. The gateway sends
them the raw URL for the resource when a Matrix user uploads a video
and the video cannot be played directly in Chrome using that URL.
Chrome argues it is not authorized to play the video because of the
Content Security Policy. Chrome checks for the "media-src" policy, which
is missing, and defaults to the "default-src" policy, which is "none".
As Synapse already sends "object-src: 'self'" I thought it wouldn’t be
a problem to add "media-src: 'self'" to the CSP to fix this problem.
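Roughly, the header becomes (a sketch of the relevant change; the full policy value shown is illustrative):

```python
request.setHeader(
    b"Content-Security-Policy",
    b"default-src 'none';"
    b" script-src 'none';"
    b" plugin-types application/pdf;"
    b" style-src 'unsafe-inline';"
    b" media-src 'self';"  # newly added, so Chrome will play media
    b" object-src 'self';",
)
```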
symlinks apparently break setuptools on python3 and alpine
(https://bugs.python.org/issue31940), so let's stop using a symlink and just
use the file directly.
Given we have disabled lazy loading for incr syncs in #3840, we can make self-LL more efficient by only doing it on initial sync. Also adds a bounds check for if/when we change our mind, so that we don't try to include LL members on sync responses with no timeline.
ExpiringCache required that `start()` be called before it would actually
start expiring entries. A number of places didn't do that.
This PR removes `start` from ExpiringCache, and automatically starts the
background reaping process on creation instead.
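A sketch of the constructor change (assuming a `clock.looping_call` helper as used elsewhere in the codebase):

```python
class ExpiringCache(object):
    def __init__(self, cache_name, clock, expiry_ms=0):
        self._cache_name = cache_name
        self._clock = clock
        self._expiry_ms = expiry_ms
        self._cache = {}

        if self._expiry_ms:
            # Kick off the reaper on construction, so callers can no
            # longer forget to call start().
            self._clock.looping_call(self._prune_cache, self._expiry_ms / 2)

    def _prune_cache(self):
        # Drop entries older than the expiry threshold (details elided).
        ...
```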
If an HTTP handler throws an exception while processing a request we
automatically write a JSON error response. If the handler had already
started writing a response, Twisted throws an exception.
We should check for this case and simply abort the connection if there
was an error after the response had started being written.
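i.e. something like this (a sketch; `startedWriting` and `loseConnection` are Twisted's, the rest is illustrative):

```python
from synapse.http.server import respond_with_json

def _send_error_response(request, error_dict):
    if request.startedWriting:
        # Too late for a clean JSON error: the status line (and perhaps
        # some body) is already on the wire, so just drop the connection.
        request.loseConnection()
    else:
        respond_with_json(request, 500, error_dict)
```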
Add some informative comments about what's going on here.
Also, `sent_to_us_directly` and `get_missing` were doing the same thing (apart
from in `_handle_queued_pdus`, which looks like a bug), so let's get rid of
`get_missing` and use `sent_to_us_directly` consistently.
Let's try to rationalise the logging that happens when we are processing an
incoming transaction, to make it easier to figure out what is going wrong when
they take ages. In particular:
- make everything start with a [room_id event_id] prefix
- make sure we log a warning when catching exceptions rather than just turning
them into other, more cryptic, exceptions.
Currently we rely on the master to invalidate this cache promptly.
However, after having moved most federation endpoints off of master this
no longer happens, causing outbound federation to get blackholed.
Fixes #3798
The existing deferred timeout helper function (and the one in Twisted)
suffers from a bug when a deferred's canceller throws an exception (#3842).
The new helper function doesn't suffer from this problem.
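The shape of the new helper is roughly this (a sketch, not the exact implementation):

```python
import logging

from twisted.internet import defer

logger = logging.getLogger(__name__)

def timeout_deferred(deferred, timeout, reactor):
    """Time out `deferred`, returning a *new* deferred, so that an
    exception from the original deferred's canceller cannot leak into
    (and break) the caller's error handling."""
    new_d = defer.Deferred()

    def time_it_out():
        if not new_d.called:
            new_d.errback(defer.TimeoutError("Timed out"))
        try:
            deferred.cancel()
        except Exception:
            # A broken canceller must not stop us failing new_d.
            logger.exception("Canceller failed during timeout")

    delayed_call = reactor.callLater(timeout, time_it_out)

    def convert(result):
        if delayed_call.active():
            delayed_call.cancel()
        if not new_d.called:
            # `result` may be a Failure, in which case this errbacks.
            new_d.callback(result)

    deferred.addBoth(convert)
    return new_d
```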
We want to wait until we have read the response body before we log the request
as complete, otherwise a confusing thing happens where the request appears to
have completed, but we later fail it.
To do this, we factor the salient details of a request out to a separate
object, which can then keep track of the txn_id, so that it can be logged.
don't filter membership events based on history visibility
as we will already have filtered the messages in the timeline, and state events
are always visible.
and because @erikjohnston said so.
Continues from uhoreg's branch
This just fixes the errcode on /room_keys/version if there is no backup, and
updates the schema delta to be on the latest version so it gets run.
If we receive an event that doesn't pass its content hash check (e.g.
due to already being redacted) then we hit a bug which causes an
exception to be raised, which then promptly stops the event (and
request) from being processed.
This affects all sorts of federation APIs, including joining rooms with
a redacted state event.
* speed up room summaries by pulling their data from room_memberships rather than room state
* disable LL for incr syncs, and log incr sync stats (#3840)