Of all the homeserver classes, only the FrontendProxyServer passes
its reactor when doing the HTTP listen. Looking at previous PRs, it seems
this was introduced to make it possible to write a test: otherwise, when you
run a test against the test homeserver, it tries to do a real bind to a port.
Passing the reactor that the homeserver was instantiated with is probably
the right thing to do everywhere anyway.
Signed-off-by: Jason Robinson <jasonr@matrix.org>
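As a minimal sketch of the idea (names like `listen_http` and `get_reactor`
are illustrative here, not necessarily the exact synapse API):

```python
from twisted.web.server import Site

def listen_http(hs, port, root_resource):
    # Use the reactor the homeserver was instantiated with rather than the
    # global twisted reactor; in a unit test this can be a
    # MemoryReactorClock, so listenTCP just records the listen instead of
    # binding a real port.
    reactor = hs.get_reactor()
    return reactor.listenTCP(port, Site(root_resource))
```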
This is largely a precursor to the removal of the bundled webclient. The idea
is to present a page at / which reassures people that something is working, and
to give them some links for next steps.
The welcome page lives at `/_matrix/static/`, so is enabled alongside the other
`static` resources (which, in practice, means the client API is enabled). We'll
redirect to it from `/` if we have nothing better to display there.
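Roughly, the redirect amounts to something like this (the real resource tree
in synapse is wired up differently; this just shows the shape):

```python
from twisted.web.resource import Resource
from twisted.web.util import Redirect

# If nothing better is mounted at "/", send clients to the welcome page.
root = Resource()
root.putChild(b"", Redirect(b"/_matrix/static/"))
```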
It would be nice to have a way to disable it (in the same way that you might
disable the nginx welcome page), but I can't really think of a good way to do
that without a load of ickiness.
It's based on the work done by @krombel for #2601.
This implements both a SAML2 metadata endpoint (at
`/_matrix/saml2/metadata.xml`), and a SAML2 response receiver (at
`/_matrix/saml2/authn_response`). If the SAML2 response matches what's been
configured, we complete the SSO login flow by redirecting to the client url
(aka `RelayState` in SAML2 jargon) with a login token.
What we don't yet have is anything to build a SAML2 request and redirect the
user to the identity provider. That is left as an exercise for the reader.
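For illustration, the core of the response receiver might look something like
the sketch below, built on pysaml2. `sp_config` is assumed to be a pysaml2
SPConfig matching the homeserver's SAML2 settings, and `issue_login_token` is
a hypothetical stand-in for whatever mints the login token:

```python
import secrets

import saml2
from saml2.client import Saml2Client

def issue_login_token(attributes):
    # Placeholder: the real code would mint a token tied to a local user id.
    return secrets.token_urlsafe(16)

def handle_authn_response(sp_config, saml_response, relay_state):
    client = Saml2Client(config=sp_config)
    resp = client.parse_authn_request_response(
        saml_response, saml2.BINDING_HTTP_POST
    )
    if resp is None:
        raise ValueError("SAML2 response did not validate")
    # Complete the SSO flow: send the browser back to the client url (the
    # SAML2 RelayState) with a login token attached.
    return "%s?loginToken=%s" % (relay_state, issue_login_token(resp.ava))
```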
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
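In effect, the PusherPool's dispatch can collapse to a plain loop, something
like this (parameter names are illustrative):

```python
def on_new_notifications(pushers, min_stream_id, max_stream_id):
    # Each pusher's callback now completes synchronously, so there is no
    # list of Deferreds to defer.gatherResults over, and no need to wrap
    # the loop in run_as_background_process.
    for pusher in pushers:
        pusher.on_new_notifications(min_stream_id, max_stream_id)
```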
As of #4027, we require psutil to be installed, so it should be in our
dependency list. We can also remove some of the conditional import code
introduced by #992.
Fixes #4062.
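The conditional import just becomes a plain one; as a rough before/after
sketch (the original guard may have looked slightly different):

```python
# Before: psutil was optional, so the import was guarded.
#   try:
#       import psutil
#   except ImportError:
#       psutil = None

# After: psutil is a hard dependency, so import and use it directly.
import psutil

rss_bytes = psutil.Process().memory_info().rss
```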
Symlinks apparently break setuptools on Python 3 and Alpine
(https://bugs.python.org/issue31940), so let's stop using a symlink and just
use the file directly.
ExpiringCache required that `start()` be called before it would actually
start expiring entries. A number of places didn't do that.
This PR removes `start` from ExpiringCache, and automatically starts
the background reaping process on creation instead.
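A minimal sketch of the new behaviour, assuming a synapse-style clock with
`looping_call(f, interval_ms)` and `time_msec()` (details simplified):

```python
class ExpiringCache:
    def __init__(self, clock, expiry_ms, reap_interval_ms=30 * 1000):
        self._clock = clock
        self._expiry_ms = expiry_ms
        self._cache = {}
        # No separate start() any more: the reaping loop is scheduled as
        # soon as the cache is constructed, so it can't be forgotten.
        self._clock.looping_call(self._prune_expired, reap_interval_ms)

    def __setitem__(self, key, value):
        self._cache[key] = (value, self._clock.time_msec())

    def _prune_expired(self):
        now = self._clock.time_msec()
        for key, (_, added_ts) in list(self._cache.items()):
            if now - added_ts > self._expiry_ms:
                del self._cache[key]
```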
We should explicitly close any db connections we open, because failing to do so
can block other transactions as per
https://github.com/matrix-org/synapse/issues/3682.
Let's also try to factor out some of the boilerplate by having server classes
define their datastore class rather than duplicating the whole of `setup`.
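A hedged sketch of both points together; class and attribute names here are
illustrative rather than the exact synapse ones:

```python
class BaseServer:
    DATASTORE_CLASS = None  # each server/worker class overrides this

    def setup(self):
        # Shared setup() replaces the per-server copies: open a connection,
        # build the datastore, and explicitly close the connection rather
        # than leaving it open to block other transactions.
        # get_db_conn() is assumed to be provided elsewhere on the class.
        conn = self.get_db_conn()
        try:
            self.datastore = self.DATASTORE_CLASS(conn, self)
            conn.commit()
        finally:
            conn.close()

# A server class then only needs to name its store, e.g.:
#   class FrontendProxyServer(BaseServer):
#       DATASTORE_CLASS = FrontendProxySlavedStore
```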
It turns out that the user directory handling is fairly racy, as a bunch
of the code assumes that the processing happens on the master, which it
doesn't when there is a synapse.app.user_dir worker. So let's just call the
function directly until we actually get round to fixing it, since it
doesn't make the situation any worse.
Run the handlers for replication commands as background processes. This should
improve the visibility in our metrics, and reduce the number of "running db
transaction from sentinel context" warnings.
Ideally it means converting the things that fire off deferreds into the night
into things that actually return a Deferred when they are done. I've made a bit
of a stab at this, but it will probably be leaky.
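The basic shape is to wrap each handler invocation with synapse's
`run_as_background_process`, so it gets its own logcontext and shows up in
the metrics, and to hand the resulting Deferred back to the caller (handler
and command names below are illustrative):

```python
from synapse.metrics.background_process_metrics import run_as_background_process

def run_command_handler(handler, cmd):
    # Rather than firing the handler's deferred off unobserved, run it as a
    # named background process and return the Deferred so completion (and
    # failure) can actually be observed.
    return run_as_background_process("replication-" + cmd.NAME, handler, cmd)
```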
First of all, avoid resetting the logcontext before running the pushers, to fix
the "Starting db txn 'get_all_updated_receipts' from sentinel context" warning.
Instead, give them their own "background process" logcontexts.
It turns out that looping_call does check the deferred returned by its
callback, and (at least in the case of client_ips) we were relying on this;
I broke it in #3604.
Update run_as_background_process to return the deferred, and make sure we
return it to clock.looping_call.
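Concretely, something along these lines, where the wrapped callback hands its
Deferred back to `looping_call` so Twisted's LoopingCall waits for one run to
finish before scheduling the next (names and the interval are illustrative):

```python
from synapse.metrics.background_process_metrics import run_as_background_process

def start_periodic_update(clock, update_batch):
    def do_update():
        # run_as_background_process now returns the Deferred; returning it
        # here lets LoopingCall track each run's completion, which the
        # client_ips updater was (implicitly) relying on.
        return run_as_background_process("update_client_ips", update_batch)

    clock.looping_call(do_update, 15 * 1000)
```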
This allows us to handle /context/ requests on the client_reader worker
without having to pull in all the various stream handlers (e.g.
presence, typing, pushers, etc.). The only thing the token gets used for
is pagination, and that ignores everything but the room portion of the
token.