Synctl did not check whether a worker process was already running when using
`synctl start` and would naively start a fresh copy. This would
sometimes lead to many duplicate copies of a single worker running at once.
This fix adds a PID check when starting workers, and synctl will now
refuse to start individual workers if they're already running.
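A minimal sketch of the kind of check involved, assuming one pidfile per worker (the helper names here are illustrative, not synctl's actual code):
```python
import os


def is_running(pidfile: str) -> bool:
    """Return True if the process recorded in the pidfile is still alive."""
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return False

    try:
        os.kill(pid, 0)  # signal 0: existence/permission check only
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but is owned by another user
    return True


def start_worker(name: str, pidfile: str) -> None:
    if is_running(pidfile):
        print(f"worker {name} already running; refusing to start")
        return
    # ... actually spawn the worker process here ...
```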
When using `add_header`, nginx will literally add a header. If a
`content-type` header is already configured (for example through a
server-wide default), this means we end up with two `content-type` headers,
like so:
```
content-type: text/html
content-type: application/json
access-control-allow-origin: *
```
That doesn't make sense. Instead, we want the content type of that
block to be only `application/json`, which we can achieve by using
`default_type` instead.
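For illustration, a minimal sketch of the difference in an nginx `location` block (the path and response body here are made up, not taken from the actual config):
```
location /_matrix/client/versions {
    # default_type replaces the server-wide default for this block instead of
    # appending a second content-type header the way add_header would.
    default_type application/json;
    add_header access-control-allow-origin *;
    return 200 '{"versions": ["r0.6.1"]}';
}
```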
Signed-off-by: Daniele Sluijters <daenney@users.noreply.github.com>
Checks that the localpart returned by mapping providers for SAML and
OIDC is valid before registering new users.
Extends the OIDC tests for existing users and invalid data.
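As a rough illustration of what "valid" means here, historical Matrix user ID localparts are limited to a small character set; the regex below is an approximation for illustration, not the exact check Synapse performs:
```python
import re

# Historical Matrix user ID localparts: lowercase letters, digits, and the
# characters . _ = - /  (approximation for illustration only).
LOCALPART_RE = re.compile(r"^[a-z0-9._=/-]+$")


def localpart_is_valid(localpart: str) -> bool:
    return bool(localpart) and LOCALPART_RE.match(localpart) is not None
```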
* Consistently use room_id from federation request body
Some federation APIs have a redundant `room_id` path param (see
https://github.com/matrix-org/matrix-doc/issues/2330). We should make sure we
consistently use either the path param or the body param, and the body param is
easier.
* Kill off some references to "context"
Once upon a time, "rooms" were known as "contexts". I think this kills off the
last references to "contexts".
* Make this line debug (it's noisy)
* Don't include from_key for presence if we are at 0
* Limit read receipts for all rooms to 100
* changelog.d/8744.bugfix
* Allow from_key to be None
* Update 8744.bugfix
* The from_key is superfluous
* Update comment
`_locally_reject_invite` generates an out-of-band membership event which can be passed to clients, but not other homeservers.
This is used when we fail to reject an invite over federation. If this happens, we instead just generate a leave event locally and send it down /sync, allowing clients to reject invites even if we can't reach the remote homeserver.
A similar flow needs to be put in place for rescinding knocks. If we're unable to contact any of the remote servers in the room we've tried to knock on, we'd still like to generate and store the leave event locally. Hence the need to reuse, and thus generalise, this method.
Separated from #6739.
The root resource isn't necessarily a `JsonResource`, so rename this method
accordingly, and update a couple of test classes to use the method rather than
directly manipulating `self.resource`.
There's a handy function called `maybe_store_room_on_invite` which allows us to create an entry in the rooms table for a room (and its version) that we aren't joined to yet, but which we can reference when ingesting events about it.
This is currently used for invites where we receive some stripped state about the room and pass it down via /sync to the client, without us being in the room yet.
There is a similar requirement for knocking, where we will eventually do the same thing and need an entry in the rooms table as well. Thus, reusing this function works; however, its name needs to be generalised a bit.
Separated out from #6739.
The main use case is to see how many requests are being made, and how
many are second/third/etc attempts. If there are a large number of retries,
then that likely indicates a delivery problem.
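A minimal sketch of the kind of metric this enables, using `prometheus_client` directly (the metric name, label, and helper are illustrative, not the ones Synapse registers):
```python
from prometheus_client import Counter

# Count outgoing requests, labelled by which attempt this is.
send_attempts = Counter(
    "federation_send_attempts_total",
    "Outgoing federation transaction attempts",
    ["attempt"],
)


def record_attempt(attempt_number: int) -> None:
    # Bucket everything past the third attempt together to limit label cardinality.
    label = str(attempt_number) if attempt_number <= 3 else "4+"
    send_attempts.labels(attempt=label).inc()
```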
If the script fails (or is CTRL-C'ed) between porting some of the events table and copying the sequences, then the port script will immediately die if run again, due to the Postgres DB having inconsistencies between sequences and tables.
The fix is to move the porting of sequences to before porting the tables, so that there is never a period where the Postgres DB is inconsistent. To do that we need to change how we port the sequences so that it calculates the values from the SQLite DB rather than the Postgres DB.
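A rough sketch of the idea, with illustrative table and sequence names rather than the port script's actual code:
```python
def port_sequence(sqlite_cur, postgres_cur) -> None:
    # Derive the sequence value from the SQLite source of truth, and set it
    # past the largest ID present there, so the Postgres sequence can never
    # hand out conflicting IDs even if the table copy is later interrupted.
    sqlite_cur.execute("SELECT COALESCE(MAX(stream_ordering), 0) FROM events")
    (max_id,) = sqlite_cur.fetchone()
    postgres_cur.execute("SELECT setval('events_stream_seq', %s)", (max_id + 1,))
```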
Fixes #8619
This should hopefully speed up `get_auth_chain_difference` a bit in the case of repeated state res on the same rooms.
`get_auth_chain_difference` does a breadth-first walk of the auth graphs by repeatedly looking up events' auth events. Different state resolutions on the same room end up doing a lot of the same event-to-auth-events lookups, so caching them should speed things up when state is repeatedly resolved in the same room.
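A simplified sketch of the walk with such a cache (the storage call is a hypothetical stand-in, not Synapse's actual API, and the real cache would be bounded rather than a plain dict):
```python
from typing import Dict, Iterable, Set

# Cache of event_id -> auth event IDs, shared across state resolutions on the
# same room.
_auth_events_cache: Dict[str, Set[str]] = {}


def walk_auth_chain(start_ids: Iterable[str], fetch_auth_events) -> Set[str]:
    """Breadth-first walk of the auth graph, caching auth-event lookups.

    `fetch_auth_events(event_ids)` is a hypothetical helper returning a dict
    of event_id -> set of auth event IDs for the given events.
    """
    seen: Set[str] = set()
    frontier: Set[str] = set(start_ids)
    while frontier:
        misses = frontier - _auth_events_cache.keys()
        if misses:
            _auth_events_cache.update(fetch_auth_events(misses))
        next_frontier: Set[str] = set()
        for event_id in frontier:
            seen.add(event_id)
            next_frontier |= _auth_events_cache.get(event_id, set())
        frontier = next_frontier - seen
    return seen
```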
`adbapi.ConnectionPool` lets you turn on auto-reconnect of DB connections. This is off by default.
As far as I can tell, if it's not enabled, dead connections never get removed from the pool.
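For reference, the flag in question on Twisted's `adbapi.ConnectionPool` (the driver and connection arguments below are just an example):
```python
from twisted.enterprise import adbapi

# cp_reconnect=True makes the pool detect dead connections and reconnect,
# instead of leaving them in the pool forever. It is off by default.
pool = adbapi.ConnectionPool(
    "psycopg2",
    host="localhost",
    dbname="synapse",  # example connection arguments
    cp_min=3,
    cp_max=10,
    cp_reconnect=True,
)
```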
Maybe helps #8574
* Check support room has only two users
* Create 8728.bugfix
* Update synapse/server_notices/server_notices_manager.py
Co-authored-by: Erik Johnston <erik@matrix.org>
If SSO login (e.g. SAML) is used in a multi-worker setup, it should be mentioned that currently all SAML logins must run on the same worker; see https://github.com/matrix-org/synapse/issues/7530
Also, if you are using different ports (for example 443 and 8448) in a reverse proxy for client and federation traffic, the path `/_matrix/media` on both the client and federation ports must point to the listener of the `media_repository` worker; otherwise a remote server requesting a media object over the federation port will get a 404 for `/_matrix/media`. See https://github.com/matrix-org/synapse/issues/8695
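As an illustrative reverse-proxy sketch (nginx shown; the worker ports are made up), the media path is routed to the `media_repository` worker on both listeners:
```
server {
    listen 443 ssl;
    listen 8448 ssl;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location /_matrix/media {
        proxy_pass http://localhost:8085;  # media_repository worker (example port)
    }

    location /_matrix {
        proxy_pass http://localhost:8008;  # main Synapse process (example port)
    }
}
```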
This PR adds some documentation that:
* Describes the intended audience for the `docs/`, `docs/dev/` and `docs/admin/` directories, as well as Synapse's wiki page.
* Stresses that we'd like all documentation to be written in Markdown.
I idly noticed that these lists were out of sync with each other, causing us to miss a table in a test case (`local_invites`). Let's consolidate these into a single list to prevent this from happening in the future.