We can now use `_get_events_from_cache_or_db` rather than going right back to
the database, which means that (a) we can benefit from caching, and (b) it
opens the way forward to more extensive checks on the original event.
We now always require the original event to exist before we will serve up a
redaction.
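Roughly, the check looks like this (a minimal sketch: `_get_events_from_cache_or_db` is the method mentioned above, but the surrounding method name, the `redacts` attribute and the return shape are illustrative assumptions):

```python
async def _maybe_serve_redaction(self, redaction_event):
    # Sketch only: fetch the redacted event via the cache-aware path
    # rather than going straight back to the database.
    original_id = redaction_event.redacts  # assumed attribute name
    fetched = await self._get_events_from_cache_or_db([original_id])
    if original_id not in fetched:
        # The original event doesn't exist, so refuse to serve the redaction.
        return None
    return redaction_event
```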
First of all, let's get rid of `TOKEN_NOT_FOUND_HTTP_STATUS`. It was a hack we
did at one point when it was possible to return either a 403 or a 401 if the
creds were missing. We always return a 401 in these cases now (thankfully), so
it's not needed.
Let's also stop abusing `AuthError` for these cases. Honestly they have nothing
that relates them to the other places that `AuthError` is used, other than the
fact that they are loosely under the 'Auth' banner. It makes no sense for them
to share exception classes.
Instead, let's add a couple of new exception classes: `InvalidClientTokenError`
and `MissingClientTokenError`, for the `M_UNKNOWN_TOKEN` and `M_MISSING_TOKEN`
cases respectively - and an `InvalidClientCredentialsError` base class for the
two of them.
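A minimal sketch of the resulting hierarchy (the class names and errcodes come from the description above; the `SynapseError` stub and constructor signatures are assumptions for illustration):

```python
class SynapseError(Exception):
    """Assumed base: carries an HTTP status code and a Matrix errcode."""
    def __init__(self, code, msg, errcode):
        super().__init__(msg)
        self.code = code
        self.msg = msg
        self.errcode = errcode


class InvalidClientCredentialsError(SynapseError):
    """Base class for both token problems; always reported as a 401."""
    def __init__(self, msg, errcode):
        super().__init__(code=401, msg=msg, errcode=errcode)


class MissingClientTokenError(InvalidClientCredentialsError):
    """No access token was supplied at all (M_MISSING_TOKEN)."""
    def __init__(self, msg="Missing access token"):
        super().__init__(msg, errcode="M_MISSING_TOKEN")


class InvalidClientTokenError(InvalidClientCredentialsError):
    """An access token was supplied but not recognised (M_UNKNOWN_TOKEN)."""
    def __init__(self, msg="Invalid access token"):
        super().__init__(msg, errcode="M_UNKNOWN_TOKEN")
```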
This is only used in one place, so it's clearer if we inline it and reduce the
API surface.
Also, fixes a buglet where we would create an access token even if we were
about to block the user (we would never return the AT, so the user could never
use it, but it was still created and added to the db).
When asking for the relations of an event, include the original event in the response. This will mostly be used for efficiently showing edit history, but could be useful in other circumstances.
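For illustration, a relations response might then look something like this (a hypothetical shape expressed as a Python dict; the `original_event` and `chunk` keys and the event contents are illustrative):

```python
# Hypothetical response shape: the original event now rides along with
# the related events, so clients can render edit history in one request.
relations_response = {
    "original_event": {
        "event_id": "$original:example.org",
        "type": "m.room.message",
        "content": {"body": "first draft"},
    },
    "chunk": [
        {
            "event_id": "$edit:example.org",
            "type": "m.room.message",
            "content": {"m.new_content": {"body": "second draft"}},
        },
    ],
}
```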
Nothing uses this now, so we can remove the dead code, and clean up the
API.
Since we're changing the shape of the return value anyway, we take the
opportunity to give the method a better name.
- Put the default `window_size` back to 1000ms (broken by #5181)
- Make the `rc_federation` config actually do something
- Fix an off-by-one error in the 'concurrent' limit
- Avoid creating an unused `_PerHostRatelimiter` object for every single
incoming request (a sketch of this follows below)
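The last point amounts to creating per-host limiters lazily, along these lines (a sketch; `_PerHostRatelimiter` is named above, everything else here is illustrative):

```python
class _PerHostRatelimiter:
    """Stand-in for the real per-host ratelimiter; only the name comes
    from the changes described above."""
    def __init__(self, clock, config):
        self.clock = clock
        self.config = config

    def ratelimit(self):
        # The real implementation would sleep or reject according to config.
        pass


class FederationRateLimiter:
    def __init__(self, clock, config):
        self._clock = clock
        self._config = config
        self._per_host = {}  # host -> _PerHostRatelimiter

    def ratelimit(self, host):
        # Only build a _PerHostRatelimiter the first time we see a host,
        # instead of constructing (and usually discarding) one on every
        # incoming request.
        limiter = self._per_host.get(host)
        if limiter is None:
            limiter = _PerHostRatelimiter(clock=self._clock, config=self._config)
            self._per_host[host] = limiter
        return limiter.ratelimit()
```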
Adds a new config option `cleanup_extremities_with_dummy_events`, which
periodically sends dummy events to rooms with more than 10 extremities.
THIS IS REALLY EXPERIMENTAL.
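The rough shape of the job (a sketch only: `get_rooms_with_many_extremities` and `send_dummy_event` are hypothetical helper names; the 10-extremity threshold is the one mentioned above):

```python
MAX_EXTREMITIES = 10  # threshold from the description above

async def _cleanup_extremities_with_dummy_events(self):
    # Hypothetical periodic job: find rooms whose forward-extremity count
    # has grown too large, then send a dummy event into each so that the
    # dangling extremities get referenced and stop accumulating.
    room_ids = await self.store.get_rooms_with_many_extremities(
        min_count=MAX_EXTREMITIES
    )
    for room_id in room_ids:
        await self.event_creation_handler.send_dummy_event(room_id)
```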
Some keys are stored in the Synapse database with a null `valid_until_ms`,
which caused an exception to be thrown when using such a key. We fix this by
treating nulls as zeroes: the keys will match verification requests with a
`minimum_valid_until_ms` of zero (i.e. don't validate the timestamp), but will
not match requests with a non-zero `minimum_valid_until_ms`.
Fixes #5391.
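In effect, the fix boils down to coercing a null `valid_until_ms` to zero before the comparison, along these lines (the function and argument names are illustrative):

```python
def key_satisfies_request(stored_valid_until_ms, minimum_valid_until_ms):
    # A key stored with a NULL valid_until_ms is treated as if it were 0:
    # it satisfies requests that don't ask for a validity time
    # (minimum_valid_until_ms == 0), but not requests that do.
    if stored_valid_until_ms is None:
        stored_valid_until_ms = 0
    return stored_valid_until_ms >= minimum_valid_until_ms
```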
Sends password reset emails from the homeserver instead of proxying to the identity server. This is now the default behaviour for security reasons. If you wish to continue proxying password reset requests to the identity server, you must now enable the `email.trust_identity_server_for_password_resets` option.
This PR is a culmination of 3 smaller PRs which have each been separately reviewed:
* #5308
* #5345
* #5368
There are a few changes going on here:
* We make checking the signature on a key server response optional: if no
`verify_keys` are specified, we trust TLS to validate the connection (a sketch
follows after this list).
* We change the default config so that it does not require responses to be
signed by the old key.
* We replace the old 'perspectives' config with 'trusted_key_servers', which
is also formatted slightly differently.
* We emit a warning to the logs every time we trust a key server response
signed by the old key.
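A sketch of how the optional signature check interacts with the new `trusted_key_servers` format (the config shape mirrors the description above but is written here as Python dicts rather than YAML, and the key values are made up):

```python
def should_check_signature(server_config: dict) -> bool:
    # If the trusted_key_servers entry lists no verify_keys, we rely on
    # TLS alone to validate the connection; otherwise the response must be
    # signed by one of the listed keys.
    return bool(server_config.get("verify_keys"))


# Illustrative trusted_key_servers entries:
trusted_key_servers = [
    {"server_name": "matrix.org"},  # no verify_keys: TLS is trusted
    {
        "server_name": "example.com",
        "verify_keys": {"ed25519:abc123": "made+up+base64+public+key"},
    },
]
```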
* Fix background updates to handle redactions/rejections
In background updates based on the current state delta stream, we need to
handle the fact that we may not have all the events (or at least that
`get_events` may raise an exception).
Also:
* rename `VerifyKeyRequest` -> `VerifyJsonRequest`
* calculate `key_ids` on `VerifyJsonRequest` construction
* refactor things to pass around `VerifyJsonRequest`s instead of 4-tuples
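A minimal sketch of the request object (field names other than `key_ids` and the class name are assumptions, and the real code may well use attrs rather than dataclasses):

```python
from dataclasses import dataclass
from typing import List


def _signature_key_ids(json_object: dict, server_name: str) -> List[str]:
    # Pull the key ids out of the object's signatures block for the server
    # we want to verify against.
    return list(json_object.get("signatures", {}).get(server_name, {}))


@dataclass(frozen=True)
class VerifyJsonRequest:
    """Sketch of the object that replaces the old 4-tuple."""
    server_name: str
    json_object: dict
    minimum_valid_until_ts: int
    key_ids: List[str]

    @classmethod
    def build(cls, server_name, json_object, minimum_valid_until_ts):
        # key_ids is calculated once, when the request is constructed.
        return cls(
            server_name=server_name,
            json_object=json_object,
            minimum_valid_until_ts=minimum_valid_until_ts,
            key_ids=_signature_key_ids(json_object, server_name),
        )
```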
When handling incoming federation requests, make sure that we have an
up-to-date copy of the signing key.
We do not yet enforce the validity period for event signatures.
When enabling the account validity feature, Synapse will look at startup for registered accounts without an expiration date, and will set one equal to 'now + validity_period' for them. On large servers, this can mean that a large number of users get the same expiration date, which means that they will all be sent a renewal email at the same time, which isn't ideal.
To mitigate this, this PR allows server admins to define a 'max_delta' so that the expiration date is a random value in the [now + validity_period ; now + validity_period + max_delta] range. This allows renewal emails to be sent progressively over a configured period instead of all going out in one big batch.
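Concretely, the expiration timestamp would be picked along these lines (a sketch; the names are illustrative):

```python
import random


def pick_expiration_ts(now_ms: int, validity_period_ms: int, max_delta_ms: int) -> int:
    # Pick a random point in [now + validity_period,
    # now + validity_period + max_delta] so that renewal emails are spread
    # over the configured window rather than all firing at once.
    return now_ms + validity_period_ms + random.randrange(max_delta_ms + 1)
```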
The list of server names was redundant, since it was equivalent to the keys on
the server_to_deferred map. This reduces the number of large lists being passed
around, and has the benefit of deduplicating the entries in `wait_on`.
Replaces the `DEFAULT_ROOM_VERSION` constant with a method that first checks the config, then returns a hardcoded value if the option is not present.
That hardcoded value now lives in the `server.py` config file.
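A sketch of the replacement (the config attribute name and the fallback value shown here are illustrative, not necessarily the real ones):

```python
# Illustrative fallback; lives alongside the server config rather than as
# a module-level DEFAULT_ROOM_VERSION constant.
FALLBACK_ROOM_VERSION = "1"  # made-up value for the sketch


def get_default_room_version(config) -> str:
    # Prefer the admin's configured default; fall back to the hardcoded one.
    configured = getattr(config, "default_room_version", None)
    return configured if configured is not None else FALLBACK_ROOM_VERSION
```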
Rather than have three methods which have to have the same interface,
factor out a separate interface which is provided by three implementations.
I find it easier to grok the code this way.
This is a first step to checking that the key is valid at the required moment.
The idea here is that, rather than passing VerifyKey objects in and out of the
storage layer, we instead pass FetchKeyResult objects, which simply wrap the
VerifyKey and add a valid_until_ts field.
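The wrapper is essentially just this (a sketch following the description above; whether it's a dataclass, an attrs class or something else is an implementation detail):

```python
from dataclasses import dataclass
from typing import Any


@dataclass(frozen=True)
class FetchKeyResult:
    """What now gets passed in and out of the storage layer."""
    verify_key: Any      # the VerifyKey itself
    valid_until_ts: int  # timestamp (ms) up to which the key may be trusted
```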
Storing server keys hammered the database a bit. This replaces the
implementation which stored a single key with one which can do many updates at
once.
If account validity is enabled in the server's configuration, this job will run at startup as a background job and will attach an expiration date to any registered account that is missing one.
Prevents a `SynapseError` from being raised inside an `IResolutionReceiver` and instead opts to just return 0 results. This means that we have to lump a failed lookup and a blacklisted lookup together under the same error message, but the substitute should be generic enough to cover both cases.
This commit adds two config options:
* `restrict_public_rooms_to_local_users`
Requires auth to fetch the public rooms directory through the CS API and disables fetching it through the federation API.
* `require_auth_for_profile_requests`
When set to `true`, requires that requests to `/profile` over the CS API are authenticated, and only returns the user's profile if the requester shares a room with the profile's owner, as per MSC1301 (see the sketch after this section).
MSC1301 also specifies a behaviour for federation (only returning the profile if the server asking for it shares a room with the profile's owner), but that's currently non-trivial to do in a way that isn't too expensive. The next step is writing an MSC that allows an HS to specify which user sent the profile query. In this implementation, though, Synapse won't send a profile query over federation if it doesn't believe it already shares a room with the profile's owner.
Groups have been intentionally omitted from this commit.
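A minimal sketch of the shared-room check behind `require_auth_for_profile_requests` (the `store.get_rooms_for_user` helper and the function shape are illustrative assumptions):

```python
async def can_see_profile(store, requester_user_id: str, target_user_id: str) -> bool:
    # Hypothetical sketch of the MSC1301 check on the CS API: users can
    # always see their own profile; otherwise they must share at least one
    # room with the profile's owner. store.get_rooms_for_user is assumed to
    # return the room ids the user is joined to.
    if requester_user_id == target_user_id:
        return True
    requester_rooms = set(await store.get_rooms_for_user(requester_user_id))
    target_rooms = set(await store.get_rooms_for_user(target_user_id))
    return bool(requester_rooms & target_rooms)
```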
This endpoint isn't much use for its intended purpose if you first need to get
yourself an admin's auth token.
I've restricted it to the `/_synapse/admin` path to make it a bit easier to
lock down for those concerned about exposing this information. I don't imagine
anyone is using it in anger currently.
Hopefully this time we really will fix #4422.
We need to make sure that the cache on
`get_rooms_for_user_with_stream_ordering` is invalidated *before* the
SyncHandler is notified for the new events, and we can now do so reliably via
the `events` stream.
I don't have a database with the same name as my user, so leaving the database
name unset fails.
While we're at it, clear out some unused stuff in the test setup.