Currently the handling of auto_join_rooms only works when a user
registers via the public register API. Registrations via
registration_shared_secret and ModuleApi do not auto-join.
This performs the auto-join in the registration handler, which enables
the auto-join feature for all three registration paths.
This is related to issue #2725
Signed-off-by: Matthias Kesler <krombel@krombel.de>
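A rough sketch of the shape of the change, in Twisted style; the method
and config names here are illustrative, not necessarily the real ones:

    import logging

    from twisted.internet import defer

    logger = logging.getLogger(__name__)

    class RegistrationHandler(object):
        @defer.inlineCallbacks
        def _auto_join_rooms(self, user_id):
            # Called after any successful registration, so all three
            # paths (public API, shared secret, ModuleApi) get it.
            for room_alias in self.hs.config.auto_join_rooms:
                try:
                    yield self._join_user_to_room(user_id, room_alias)
                except Exception:
                    logger.exception(
                        "Failed to join %s to %s", user_id, room_alias
                    )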
* Split state group persist into separate storage func
* Add per database engine code for state group id gen (sketched below)
* Move store_state_group to StateReadStore
This allows other workers to use it, and so resolve state.
* Hook up store_state_group
* Fix tests
* Rename _store_mult_state_groups_txn
* Rename StateGroupReadStore
* Remove redundant _have_persisted_state_group_txn
* Update comments
* Comment compute_event_context
* Set start val for state_group_id_seq
... otherwise we try to recreate old state groups
* Update comments
* Don't store state for outliers
* Update comment
* Update docstring as state groups are ints
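A hedged sketch of the per-engine ID generation (the sequence name comes
from the bullets above; the engine class and the sqlite fallback query
are assumptions):

    from synapse.storage.engines import PostgresEngine

    def get_next_state_group_id(txn, database_engine):
        if isinstance(database_engine, PostgresEngine):
            # the sequence's start value is seeded past the largest
            # existing id, so we never recreate old state groups
            txn.execute("SELECT nextval('state_group_id_seq')")
            return txn.fetchone()[0]
        else:
            # sqlite has no sequences; an in-process max+1 generator
            # fills the same role (simplified here)
            txn.execute("SELECT COALESCE(max(id), 0) + 1 FROM state_groups")
            return txn.fetchone()[0]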
We extract the storage-independent bits of the state group resolution out to a
separate function, and stick it in a new handler, in preparation for its use
from the storage layer.
... instead of creating our own special SQLiteMemoryDbPool, whose purpose was a
bit of a mystery.
For some reason this makes one of the tests run slightly slower, so bump the
sleep(). Sorry.
Add federation_domain_whitelist
This gives a way to restrict which domains your HS is allowed to federate
with; useful mainly for gracefully preventing a private but
internet-connected HS from trying to federate to the wider public Matrix
network.
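A minimal sketch of the check this enables, assuming the option is a
list of domain names and that leaving it unset means unrestricted
federation:

    def is_domain_whitelisted(federation_domain_whitelist, domain):
        # None means the option is absent: federate with anyone
        if federation_domain_whitelist is None:
            return True
        return domain in federation_domain_whitelist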
Twisted core doesn't have a general-purpose one, so we need to write one
ourselves.
Features:
- All writing happens in background thread
- Supports both push and pull producers
- Push producers get paused if the consumer falls behind
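A toy sketch of that shape (not the real implementation): writes go onto
a queue drained by a background thread, and push producers are paused
when the queue gets long. A real Twisted consumer must resume the
producer from the reactor thread (e.g. via callFromThread), elided here:

    import threading
    from queue import Queue

    class BackgroundFileConsumer(object):
        PAUSE_AT, RESUME_AT = 16, 4  # arbitrary queue-depth thresholds

        def __init__(self, file_obj):
            self._file_obj = file_obj
            self._queue = Queue()
            self._producer = None
            self._streaming = False
            self._paused = False
            threading.Thread(target=self._writer, daemon=True).start()

        def registerProducer(self, producer, streaming):
            # streaming=True is a push producer; a pull producer must
            # be explicitly asked for each chunk
            self._producer = producer
            self._streaming = streaming
            if not streaming:
                producer.resumeProducing()

        def unregisterProducer(self):
            self._producer = None
            self._queue.put(None)  # sentinel: drain and stop

        def write(self, data):
            self._queue.put(data)
            if (self._streaming and not self._paused
                    and self._queue.qsize() >= self.PAUSE_AT):
                self._paused = True
                self._producer.pauseProducing()

        def _writer(self):
            while True:
                data = self._queue.get()
                if data is None:
                    return
                self._file_obj.write(data)
                if not self._streaming and self._producer:
                    self._producer.resumeProducing()  # ask for next chunk
                elif self._paused and self._queue.qsize() <= self.RESUME_AT:
                    self._paused = False
                    self._producer.resumeProducing()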
It turns out that the only thing we use the __dict__ of LoggingContext for is
`request`, and given we create lots of LoggingContexts and then copy them every
time we do a db transaction or log line, using the __dict__ seems a bit
redundant. Let's try to optimise things by making the request attribute
explicit.
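Roughly what that looks like; the exact slot list is illustrative:

    class LoggingContext(object):
        # no per-instance __dict__: just the fields we actually use
        __slots__ = ["request", "previous_context"]

        def __init__(self, request=None):
            self.request = request
            self.previous_context = None

        def copy_to(self, record):
            # copy the one explicit attribute instead of the __dict__
            record.request = self.request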
I'm still unclear on what the intended behaviour for
`verify_json_objects_for_server` is, but at least I now understand the
behaviour of most of the things it calls...
Most of the time was spent copying a dict to filter out sentinel values
that indicated that keys did not exist in the dict. The sentinel values
were added to ensure that we cached the non-existence of keys.
By updating DictionaryCache to keep track of which keys were known to
not exist itself we can remove a dictionary copy.
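A hedged sketch of the resulting entry shape: each cached entry carries
the set of keys known to be absent alongside the values, so lookups no
longer need a filtering copy:

    import collections

    DictionaryEntry = collections.namedtuple(
        "DictionaryEntry", ("full", "known_absent", "value")
    )

    def keys_needing_db_lookup(entry, requested_keys):
        # a key needs a DB hit only if it is neither cached as present
        # nor cached as known-absent
        return [
            k for k in requested_keys
            if k not in entry.value and k not in entry.known_absent
        ]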
The cache wrappers had a habit of leaking the logcontext into the reactor while
the lookup function was running, and then not restoring it correctly when the
lookup function had completed. It's all the fault of
`preserve_context_over_{fn,deferred}` which are basically a bit broken.
Due to a failure to instantiate DeferredTimedOutError, time_bound_deferred
would throw a CancelledError when the deferred timed out, which was rather
confusing.
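The shape of the fix, roughly; SynapseError and its (code, message)
signature are as used elsewhere in the codebase, but treat the details
as illustrative:

    class DeferredTimedOutError(SynapseError):
        def __init__(self):
            # a zero-argument constructor means time_bound_deferred can
            # actually raise this on timeout
            super(DeferredTimedOutError, self).__init__(504, "Timed out")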
The `@cached` decorator on `KeyStore._get_server_verify_key` was missing
its `num_args` parameter, which meant that it was returning the wrong key for
any server which had more than one recorded key.
By way of a fix, change the default for `num_args` to be *all* arguments. To
implement that, factor out a common base class for `CacheDescriptor` and `CacheListDescriptor`.
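An illustration of the failure mode (signature per the description
above):

    # With the old default, the cache key covered only the first
    # argument, so both key_ids below shared one cache entry:
    @cached()
    def _get_server_verify_key(self, server_name, key_id):
        ...

    # After the fix, @cached() defaults to keying on *all* arguments,
    # so (server_name, key_id) pairs get distinct entries.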
Fix a bug in ``logcontext.preserve_fn`` which made it leak context into the
reactor, and add a test for it.
Also, get rid of ``logcontext.reset_context_after_deferred``, which tried to do
the same thing but had its own, different, set of bugs.
This was broken when device list updates were implemented, as Mailer
could no longer instantiate an AuthHandler due to a dependency on
federation sending.
Instead of repeatedly calculating the size of the cache, which can take
a long time now that it can use a callback, cache the size and update it
on insertion and deletion.
This requires changing the cache descriptors to have two caches, one for
pending deferreds and the other for the actual values. There's no reason
to evict from the pending deferreds as they won't take up any more
memory.
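A hedged sketch of the two-cache layout (class names illustrative;
LruCache stands in for the size-tracked value cache):

    class TwoLevelCache(object):
        def __init__(self, max_entries):
            # completed results: evictable, counted towards cache size
            self.cache = LruCache(max_size=max_entries)
            # in-flight lookups: never evicted, since a pending deferred
            # holds no result yet and so takes up no real memory
            self.pending = {}

        def set(self, key, deferred):
            self.pending[key] = deferred

            def on_done(result):
                # promote to the real cache once the value exists
                self.pending.pop(key, None)
                self.cache[key] = result
                return result

            deferred.addCallback(on_done)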
The old test expected an incorrect wrapping due to the preview function
not using unicode properly, so it got the wrong length.
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
We might as well treat all refresh_tokens as invalid. Just return a 403 from
/tokenrefresh, so that we don't have a load of dead, untestable code hanging
around.
Still TODO: removing the table from the schema.
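Roughly the shape of the change, following the servlet pattern used
elsewhere in the codebase (names illustrative):

    class TokenRefreshRestServlet(RestServlet):
        def on_POST(self, request):
            # all refresh tokens are treated as invalid
            raise AuthError(403, "tokenrefresh is no longer supported")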
The 'time' caveat on the access tokens was something of a lie, since we weren't
enforcing it; more pertinently its presence stops us ever adding useful time
caveats.
Let's move in the right direction by not lying in our caveats.
Since we're not doing refresh tokens any more, we should start killing off the
dead code paths. /tokenrefresh itself is a bit of a thornier subject, since
there might be apps out there using it, but we can at least not generate
refresh tokens on new logins.
Allows delegating the password auth to an external module. This also
moves the LDAP auth to using this system, allowing it to be removed from
the synapse tree entirely in the future.
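A minimal sketch of what a provider module might look like under this
system; the hook names follow the delegation described above, but treat
the exact interface as an assumption:

    from twisted.internet import defer

    class ExamplePasswordProvider(object):
        def __init__(self, config, account_handler):
            self.account_handler = account_handler

        @staticmethod
        def parse_config(config):
            # validate/normalise this provider's config block
            return config

        def check_password(self, user_id, password):
            # return (a deferred of) True to accept the login; False
            # falls through to the next provider or the local database
            ok = user_id == "@demo:example.com" and password == "demo"
            return defer.succeed(ok)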
Some streams will occasionally advance their positions without actually
having any new rows to send over federation. Currently this means that
the token will not advance on the workers, leading to them repeatedly
sending a slightly out-of-date token. This in turn requires the master
to hit the DB to check if there are any new rows, rather than hitting
the no-op logic where we check if the given token matches the current
token.
This commit changes the API to always return an entry if the position
for a stream has changed, allowing workers to advance their tokens
correctly.
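Sketched, with illustrative names:

    def get_stream_updates(stream, from_token):
        current = stream.current_token()
        if current == from_token:
            return None  # genuine no-op: nothing to send
        rows = stream.get_rows_between(from_token, current)
        # rows may be empty, but we still report the new position so
        # the worker's token advances
        return {"position": current, "rows": rows}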
This is for two reasons:
1. Suppresses duplicates correctly, as the notifier doesn't do any
duplicate suppression.
2. Makes it easier to connect the AppserviceHandler to the replication
stream.
This includes:
- Splitting out methods of a class into standalone functions, to make
them easier to test.
- Adding unit tests for the split-out functions, testing HTML -> preview.
- Handling the fact that elements in lxml may have tail text (illustrated
below).
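For reference, what 'tail text' means in lxml: text after an element's
closing tag hangs off that element's .tail rather than the parent's
.text, and is easy to drop when extracting a preview:

    from lxml import etree

    p = etree.fromstring("<p>Hello <b>World</b>!</p>")
    b = p.find("b")
    print(p.text)  # "Hello "
    print(b.text)  # "World"
    print(b.tail)  # "!" -- lost if only .text is collected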
In the situation where all of a user's devices get deleted, we want to
indicate this to a client, so we want to return an empty dictionary, rather
than nothing at all.
Use separate methods for the email and http pushers, rather than trying
to make a single method that works with their conflicting requirements.
The http pusher needs to get the messages in ascending stream order, and
doesn't want to miss a message.
The email pusher needs to get the messages in descending timestamp order,
and doesn't mind if it misses messages.
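Illustrative signatures for the split (the names are modelled on the
description, so treat them as assumptions):

    def get_unread_push_actions_for_user_in_range_for_http(
            user_id, min_stream_ordering, max_stream_ordering, limit):
        """Ascending stream order; must not miss any message."""

    def get_unread_push_actions_for_user_in_range_for_email(
            user_id, min_stream_ordering, max_stream_ordering, limit):
        """Descending timestamp order; missing some messages is fine."""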
1. Give the handler used for logging in unit tests a formatter, so that the
output is slightly more meaningful
2. Log some synapse.storage stuff, because it's useful.
A bit of a cleanup for background_updates, and make sure that the real
background updates have run before we start the unit tests, so that they don't
interfere with the tests.
Implement a GET /devices endpoint which lists all of the user's devices.
It also returns the last IP where we saw that device, so there is some dancing
to fish that out of the user_ips table.
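A hypothetical response shape for GET /devices, per the description
(values invented):

    {
        "devices": [
            {
                "device_id": "QBUAZIFURK",
                "display_name": "alice's phone",
                "last_seen_ip": "203.0.113.4",  # fished out of user_ips
                "last_seen_ts": 1474491775024,
            }
        ]
    }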
Record the device_id when we add a client ip; it's somewhat redundant as we
could get it via the access_token, but it will make querying rather easier.
This doesn't cover *all* of the registration flows, but it does cover
the most common ones; in particular: shared_secret registration,
appservice registration, and normal user/pass registration.
Pull device_id from the registration parameters. Register the device in the
devices table. Associate the device with the returned access and refresh
tokens. Profit.
* `RegistrationHandler.appservice_register` no longer issues an access token:
instead it is left for the caller to do it. (There are two of these, one in
`synapse/rest/client/v1/register.py`, which now simply calls
`AuthHandler.issue_access_token`, and the other in
`synapse/rest/client/v2_alpha/register.py`, which is covered below).
* In `synapse/rest/client/v2_alpha/register.py`, move the generation of
access_tokens into `_create_registration_details`. This means that the normal
flow no longer needs to call `AuthHandler.issue_access_token`; the
shared-secret flow can tell `RegistrationHandler.register` not to generate a
token; and the appservice flow continues to work despite the above change.
This is meant to be an *almost* non-functional change, with the exception that
it fixes what looks a lot like a bug in that it only calls
`auth_handler.add_threepid` and `add_pusher` once instead of three times.
The idea is to move the generation of the `access_token` out of
`registration_handler.register`, because `access_token`s now require a
device_id, and we only want to generate a device_id once registration has been
successful.
Add a 'devices' table to the storage, as well as a 'device_id' column to
refresh_tokens.
Allow the client to pass a device_id, and initial_device_display_name, to
/login. If login is successful, then register the device in the devices table
if it wasn't known already. If no device_id was supplied, make one up.
Associate the device_id with the access token and refresh token, so that we can
get at it again later. Ensure that the device_id is copied from the refresh
token to the access_token when the token is refreshed.
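An illustrative /login request body with the new optional fields:

    {
        "type": "m.login.password",
        "user": "@alice:example.com",
        "password": "s3kr1t",
        # optional: the server makes one up if absent
        "device_id": "ABCDEFGH",
        "initial_device_display_name": "alice's laptop",
    }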
Use the pure-python ldap3 library, which eliminates the need for a
system dependency.
Offer both a `search` and `simple_bind` mode, for more sophisticated
ldap scenarios.
- `search` tries to find a matching DN within the `user_base` while
employing the `user_filter`, then tries the bind when a single
matching DN was found.
- `simple_bind` tries the bind against a specific DN, built by combining
the localpart and `user_base`.
Offer support for STARTTLS on a plain connection.
The configuration was changed to reflect these new possibilities.
Signed-off-by: Martin Weinelt <hexa@darmstadt.ccc.de>
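A rough sketch of the two modes using ldap3 (the config names follow the
description above; escaping, error handling and the %u filter
placeholder are inventions of this sketch):

    import ldap3

    def ldap_auth(uri, start_tls, mode, user_base, user_filter,
                  localpart, password):
        server = ldap3.Server(uri)

        if mode == "search":
            # find a unique matching DN within user_base, then bind as it
            conn = ldap3.Connection(server)
            conn.open()
            if start_tls:
                conn.start_tls()
            conn.bind()
            conn.search(user_base, user_filter.replace("%u", localpart))
            if len(conn.entries) != 1:
                return False
            dn = conn.entries[0].entry_dn
        else:
            # simple_bind: combine the localpart and user_base
            dn = "uid=%s,%s" % (localpart, user_base)

        conn = ldap3.Connection(server, user=dn, password=password)
        conn.open()
        if start_tls:
            conn.start_tls()
        return conn.bind()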
Renames ``load_config`` to ``load_or_generate_config``
Adds a method called ``load_config`` that just loads the
config.
The main synapse.app.homeserver will continue to use
``load_or_generate_config`` to retain backwards compat.
However new worker processes can use ``load_config`` to
load the config avoiding some of the cruft needed to generate
the config.
As the new ``load_config`` method is expected to be used by new worker
processes, it removes support for the legacy commandline overrides that
``load_or_generate_config`` supports.
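The resulting split, roughly (module path per synapse's layout; treat
the call sites as illustrative):

    import sys

    from synapse.config.homeserver import HomeServerConfig

    # workers: plain load, no generation, no legacy overrides
    config = HomeServerConfig.load_config("worker", sys.argv[1:])

    # main homeserver: retains generation for backwards compat
    config = HomeServerConfig.load_or_generate_config("synapse", sys.argv[1:])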
We change it so that each cache has an individual CacheMetric, instead
of having one global CacheMetric. This means that when a cache tries to
increment a counter it does not need to go through so many indirections.
* Add infrastructure to the presence handler to track sync requests in external processes
* Expire stale entries for dead external processes
* Add an http endpoint for marking users as syncing
Add some docstrings and comments.
* Fixes
Access it directly from the homeserver itself. It already wasn't
inheriting from BaseHandler, and storing it on the Handlers object was
already somewhat dubious.
This is necessary for replicating the data in synapse to be visible to a
separate service because presence and typing notifications aren't stored
in a database so won't be visible to another process.
This API can be used to either get the raw data by requesting the tables
themselves or to just receive notifications for updates by following the
streams meta-stream.
Returns updates for each table requested as a JSON array of arrays,
with a row for each row in the table.
Each table is prefixed by a header row giving the name of the table,
the current stream_id position for the table, the number of rows, the
number of columns, and the names of the columns.
This is followed by the rows that have been added to the server since
the requester last asked.
The API has a timeout and is hooked up to the notifier so that a slave
can long poll for updates.
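A hypothetical illustration of that layout (stream name, position and
rows all invented):

    [
        ["presence", 1234, 2, 3, ["position", "user_id", "state"]],
        [1233, "@alice:example.com", "online"],
        [1234, "@bob:example.com", "unavailable"],
    ]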
Currently we store all access tokens in the DB, and fall back to that
check if we can't validate the macaroon, so our fallback works here, but
for guests, their macaroons don't get persisted, so we don't get to
find them in the database. Each restart, we generate a new ephemeral
key, so guests lose access after each server restart.
I tried to fix up the config stuff to be less insane, but gave up, so
instead I bolt on yet another piece of custom one-off insanity.
Also, add some basic tests for config generation and loading.
Introduce a User object
I'm sick of passing around more and more things as tuple items around
the whole world, and needing to edit every call site every time there is
more information about a user. So pass them around together as an
object.
This object has incredibly poorly named fields because we have a
convention that `user` indicates a UserID object, and `user_id`
indicates a string. I tried to clean up the whole repo to fix this, but
gave up. So instead, I introduce a second convention. A user_object is a
User, and a user_id_object is a UserID. I may have cried a little bit.
This tracks data about the entity which made the request. This is
instead of passing around a tuple, which requires call-site
modifications every time a new piece of optional context is passed
around.
I tried to introduce a User object. I gave up.
For some reason, one test imports the Mock class from mock.mock
rather than from mock.
This change fixes this error.
Signed-off-by: Oleg Girko <ol@infoserver.lv>
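The fix, in full:

    from mock import Mock  # instead of: from mock.mock import Mock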
Changes to m.room.power_levels events are supposed to be handled at a high
priority; however a typo meant that the relevant bit of code was never
executed, so they were handled just like any other state change - which meant
that a bad person could cause room state changes by forking the graph from a
point in history when they were allowed to do so.
This follows the same flows-based flow as regular registration, but as
the only implemented flow has no requirements, it auto-succeeds. In the
future, other flows (e.g. captcha) may be required, so clients should
treat this like the regular registration flow choices.
SYN-287
This requires that HS owners either opt in or out of stats reporting.
When --generate-config is passed, --report-stats must be specified.
If an already-generated config is used and doesn't have the
report_stats key, the user is asked to set it.
A couple of weird caveats:
* If we can't validate your macaroon, we fall back to checking that
your access token is in the DB, and ignoring the failure
* Even if we can validate your macaroon, we still have to hit the DB to
get the access token ID, which we pretend is a device ID all over the
codebase.
This mostly adds the interesting code, and points out the two pieces we
need to delete (and necessary conditions) in order to fix the above
caveats.
Removes device_id and ClientInfo
device_id is never actually written, and the matrix.org DB has no
non-null entries for it. Right now, it's just cluttering up code.
This doesn't remove the columns from the database, because that's
fiddly.
This allows refresh tokens to be exchanged for (access_token,
refresh_token).
It also starts issuing them on login, though no clients currently
interpret them.
This just replaces random bytes with macaroons. The macaroons are not
inspected by the client or server.
In particular, they claim to have an expiry time, but nothing verifies
that they have not expired.
Follow-up commits will actually enforce the expiration, and allow for
token refresh.
See https://bit.ly/matrix-auth for more information
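A hedged sketch, using pymacaroons, of what such a token looks like; the
caveat strings mirror the kind described above (including the expiry
that nothing verifies yet), while the secret and server name are
invented:

    import time

    import pymacaroons

    macaroon = pymacaroons.Macaroon(
        location="example.com",
        identifier="key",
        key="macaroon_secret_key",
    )
    macaroon.add_first_party_caveat("gen = 1")
    macaroon.add_first_party_caveat("user_id = @alice:example.com")
    macaroon.add_first_party_caveat("type = access")
    # claims an expiry (ms since epoch), but nothing enforces it yet
    expiry_ms = int((time.time() + 3600) * 1000)
    macaroon.add_first_party_caveat("time < %d" % expiry_ms)
    access_token = macaroon.serialize()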