https://github.com/matrix-org/synapse/issues/9962 uncovered that we accidentally removed all but one of the presence updates that we store in the database when persisting multiple updates. This could cause users' presence state to be stale.
The bug was fixed in #10014; this PR just adds a test that failed on the old code and was used to initially verify the bug.
The test attempts to insert some presence into the database in a batch using `PresenceStore.update_presence`, and then simply pulls it out again.
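Roughly, the test looks like this (a sketch: the store and helper names are from the Synapse tree of around this time and may differ in detail):

```
from synapse.api.presence import UserPresenceState

from tests.unittest import HomeserverTestCase


class PresenceBatchPersistenceTestCase(HomeserverTestCase):
    def test_batched_presence_updates_are_persisted(self):
        store = self.hs.get_datastore()

        # Build a batch of updates, starting from the default state for
        # each user and marking them online.
        updates = [
            UserPresenceState.default(user_id).copy_and_replace(state="online")
            for user_id in ("@a:test", "@b:test", "@c:test")
        ]

        # Persist the whole batch in a single call...
        self.get_success(store.update_presence(updates))

        # ...then read it back: no update should have been dropped.
        states = self.get_success(
            store.get_presence_for_users([u.user_id for u in updates])
        )
        self.assertEqual(len(states), len(updates))
```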
This should help ensure that equivalent results are achieved between
homeservers querying for the summary of a space.
This implements modified MSC1772 rules, according to MSC2946.
The difference is that the origin_server_ts of the m.room.create event
is not used as a tie-breaker since this might not be known if the
homeserver is not part of the room.
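A hedged sketch of a comparison key implementing that rule (assuming the fallback tie-breakers are the child event's own timestamp and room ID):

```
def child_events_comparison_key(child):
    """Sort key for m.space.child events under the modified MSC1772 rules.

    Children with a valid "order" string sort first, by that string. The
    tie-breakers use only data we are guaranteed to have (the child event
    itself), never the origin_server_ts of m.room.create.
    """
    order = child.content.get("order")
    if not isinstance(order, str):
        order = None
    return (order is None, order, child.origin_server_ts, child.room_id)
```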
Part of #9744
Removes all redundant `# -*- coding: utf-8 -*-` lines from files, as Python 3 now reads source code as UTF-8 by default.
`Signed-off-by: Jonathan de Jong <jonathan@automatia.nl>`
At the moment, if you'd like to share presence between local or remote users, those users must be sharing a room together. This isn't always the most convenient or useful situation though.
This PR adds a module to Synapse that allows deployments to plug in extra logic governing where presence updates should be routed. The module must implement two methods, `get_users_for_states` and `get_interested_users`. These methods are given presence updates or user IDs, and must return information that Synapse uses to decide where to send presence updates (a sketch of such a module follows the list below).
A method is additionally added to `ModuleApi` which allows triggering a set of users to receive the current online presence information for all users they are considered interested in. This is the equivalent of that user receiving presence information during an initial sync.
The goal of this module is to be fairly generic and useful for a variety of applications, with hard requirements being:
* Sending state for a specific set of users, or for all known users, to a defined set of local and remote users.
* The ability to trigger an initial sync for specific users, so they receive all current state.
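As a hedged illustration, a module satisfying this interface might look like the following (the module paths and the `ALL_USERS` sentinel are assumptions based on the description above, not verbatim from Synapse):

```
from typing import Dict, Iterable, Set, Union

from synapse.api.presence import UserPresenceState
from synapse.events.presence_router import PresenceRouter
from synapse.module_api import ModuleApi


class ExamplePresenceRouter:
    """Routes all presence updates to a fixed set of "watcher" users."""

    def __init__(self, config: dict, module_api: ModuleApi):
        self._watchers = set(config.get("watchers", []))

    async def get_users_for_states(
        self, state_updates: Iterable[UserPresenceState]
    ) -> Dict[str, Set[UserPresenceState]]:
        # Fan every incoming update out to each configured watcher.
        updates = set(state_updates)
        return {watcher: updates for watcher in self._watchers}

    async def get_interested_users(self, user_id: str) -> Union[Set[str], str]:
        # Watchers are interested in everyone; other users get the default
        # room-sharing behaviour.
        if user_id in self._watchers:
            return PresenceRouter.ALL_USERS
        return set()
```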
Fixes https://github.com/matrix-org/synapse/issues/9572
When an SSO user logs in for the first time, we create a local Matrix user for them. This goes through the register_user flow, which ends up triggering the spam checker. Spam checker modules currently have no way to differentiate between a user signing up for the first time and an SSO user (who has presumably already been approved elsewhere) logging in for the first time.
This PR passes `auth_provider_id` as an argument to the `check_registration_for_spam` function. This argument will contain an ID of an SSO provider (`"saml"`, `"cas"`, etc.) if one was used, else `None`.
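As a hedged example, a spam checker module could use the new argument to wave SSO-backed registrations through (the surrounding signature follows the description above):

```
from synapse.spam_checker_api import RegistrationBehaviour


class ExampleSpamChecker:
    async def check_registration_for_spam(
        self,
        email_threepid,
        username,
        request_info,
        auth_provider_id=None,
    ):
        # SSO-backed registrations have already been vetted by the
        # identity provider, so let them straight through.
        if auth_provider_id is not None:
            return RegistrationBehaviour.ALLOW

        # Otherwise apply whatever local heuristics we like.
        if username is not None and "spam" in username:
            return RegistrationBehaviour.DENY
        return RegistrationBehaviour.ALLOW
```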
This great big stack of commits is a whole load of hoop-jumping to make it easier to store additional values in login tokens, and then to actually store the SSO Identity Provider in the login token. (Making use of that data will follow in a subsequent PR.)
This PR attempts to eliminate unnecessary presence sending work when your local server joins a room, or when a remote server joins a room your server is participating in by processing state deltas in chunks rather than individually.
---
When your server joins a room for the first time, it requests the historical state as well. This chunk of new state is passed to the presence handler which, after filtering that state down to only membership joins, will send presence updates to homeservers for each join processed.
It turns out that we were being a bit naive and processing each event individually, and sending out presence updates for every one of those joins. Even if many different joins were users on the same server (hello IRC bridges), we'd send presence to that same homeserver for every remote user join we saw.
This PR attempts to deduplicate all of that by processing the entire batch of state deltas at once, rather than each join individually. We process the joins and note down which servers need which presence:
* If it was a local user join, send that user's latest presence to all servers in the room
* If it was a remote user join, send the presence for all local users in the room to that homeserver
We deduplicate by inserting all of those pending updates into a dictionary of the form:
```
{
    server_name1: {presence_update1, ...},
    server_name2: {presence_update1, presence_update2, ...}
}
```
Only after building this dict do we then start sending out presence updates.
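In sketch form, the batching looks something like this (helpers such as `is_local_join` and `send_presence_to_server` are illustrative stand-ins, not Synapse's real functions):

```
from collections import defaultdict

# server_name -> set of pending presence updates for that server. Using a
# set is what gives us the deduplication: queueing the same update for the
# same server twice is a no-op.
presence_destinations = defaultdict(set)

for delta in join_deltas:  # all helpers below are hypothetical
    if is_local_join(delta):
        # A local user joined: every remote server in the room should see
        # that user's latest presence.
        update = latest_presence(delta.state_key)
        for server in remote_servers_in_room(delta.room_id):
            presence_destinations[server].add(update)
    else:
        # A remote user joined: their server should see the presence of
        # all local users in the room.
        server = server_of(delta.state_key)
        for update in local_presence_in_room(delta.room_id):
            presence_destinations[server].add(update)

# Only once the whole batch has been processed do we actually send anything.
for server, updates in presence_destinations.items():
    send_presence_to_server(server, updates)
```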
This PR adds a homeserver config option, `user_directory.prefer_local_users`, that when enabled will show local users higher in user directory search results than remote users. This option is off by default.
Note that turning this on doesn't guarantee that remote users will always rank below local users, but they will when all other ranking factors (search query match, presence of profile information, etc.) are identical.
This is useful for, say, university networks that are openly federating but want to prioritise local students and staff in the user directory over other random users.
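In the homeserver config, that looks like:

```
user_directory:
  prefer_local_users: true
```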
Add off-by-default configuration settings to:
- disable putting an invitee's profile info in invite events
- disable profile lookup via federation
Signed-off-by: Andrew Ferrazzutti <fair@miscworks.net>
- Update black version to the latest
- Run black auto formatting over the codebase
- Run autoformatting according to [`docs/code_style.md`](80d6dc9783/docs/code_style.md)
- Update `code_style.md` docs around installing black to use the correct version
This is the final step for supporting multiple OIDC providers concurrently.
First of all, we reorganise the config so that you can specify a list of OIDC providers, instead of a single one. Before:
```
oidc_config:
  enabled: true
  issuer: "https://oidc_provider"
  # etc
```
After:
```
oidc_providers:
  - idp_id: prov1
    issuer: "https://oidc_provider"
  - idp_id: prov2
    issuer: "https://another_oidc_provider"
```
The old format is still grandfathered in.
With that done, it's then simply a matter of having OidcHandler instantiate a new OidcProvider for each configured provider.
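Schematically (attribute and parameter names here are guesses at the shape of the code, not verbatim):

```
class OidcHandler:
    def __init__(self, hs):
        # One OidcProvider per configured IdP, keyed by its idp_id; the
        # handler itself just marshals callbacks to the right provider.
        self._providers = {
            conf.idp_id: OidcProvider(hs, conf)
            for conf in hs.config.oidc_providers
        }

    def _get_provider(self, idp_id):
        return self._providers[idp_id]
```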
* make the OIDC bits of the test work at a higher level - via the REST API instead of poking `OidcHandler` directly.
* Move it to test_login.py, where I think it fits better.
Again in preparation for handling more than one OIDC provider, add a new caveat to the macaroon used as an OIDC session cookie, which remembers which OIDC provider we are talking to. In future, when we get a callback, we'll need it to make sure we talk to the right IdP.
As part of this, I'm adding an idp_id and idp_name field to the OIDC configuration object. They aren't yet documented, and we'll just use the old values by default.
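With pymacaroons, adding such a caveat is a one-liner. A hedged sketch, following Synapse's existing `key = value` caveat convention (the surrounding values are placeholders):

```
import pymacaroons


def make_oidc_session_macaroon(secret_key: str, idp_id: str) -> str:
    macaroon = pymacaroons.Macaroon(
        location="example.com",  # placeholder server_name
        identifier="key",
        key=secret_key,
    )
    macaroon.add_first_party_caveat("gen = 1")
    macaroon.add_first_party_caveat("type = session")
    # The new caveat: remember which IdP this session belongs to, so that
    # the callback can be routed back to the right provider.
    macaroon.add_first_party_caveat("idp_id = %s" % (idp_id,))
    return macaroon.serialize()
```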
The idea here is that we will have an instance of OidcProvider for each
configured IdP, with OidcHandler just doing the marshalling of them.
For now it's still hardcoded with a single provider.
Some light refactoring of OidcHandler, in preparation for bigger things:
* remove inheritance from deprecated BaseHandler
* add an object to hold the things that go into a session cookie (sketched after this list)
* factor out a separate class for manipulating said cookies
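The session-cookie object might look something like the following sketch (the exact field set is illustrative):

```
import attr


@attr.s(frozen=True, slots=True, auto_attribs=True)
class OidcSessionData:
    """The values we round-trip through the OIDC session cookie."""

    # The ID of the IdP this session belongs to.
    idp_id: str

    # The OIDC nonce, echoed back to us in the ID token.
    nonce: str

    # The URL to redirect the client to once the flow completes.
    client_redirect_url: str
```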
SynapseRequest is in danger of becoming a bit of a dumping-ground for "useful stuff relating to Requests",
which isn't really its intention (its purpose is to override render, finished and connectionLost to set up the
LoggingContext and write the right entries to the request log).
Putting utility functions inside SynapseRequest means that lots of our code ends up requiring a
SynapseRequest when there is nothing synapse-specific about the Request at all, and any old
twisted.web.iweb.IRequest will do. This increases code coupling and makes testing more difficult.
In short: move get_user_agent out to a utility function.
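The resulting helper needs nothing beyond Twisted's own interfaces; a sketch (not the verbatim function):

```
from twisted.web.iweb import IRequest


def get_user_agent(request: IRequest, default: str = "") -> str:
    """Return the request's User-Agent header as a str, or `default`."""
    user_agent = request.getHeader(b"User-Agent")
    if user_agent is None:
        return default
    return user_agent.decode("ascii", errors="replace")
```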
The final part (for now) of my work to implement a username picker in synapse itself. The idea is that we allow
`UsernameMappingProvider`s to return `localpart=None`, in which case, rather than redirecting the browser
back to the client, we redirect to a username-picker resource, which allows the user to enter a username.
We *then* complete the SSO flow (including doing the client permission checks).
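From a mapping provider's point of view, opting into the picker is then just a matter of declining to pick a localpart. A sketch against the OIDC mapping-provider interface (the exact signature may differ):

```
class PickerFriendlyMappingProvider:
    async def map_user_attributes(self, userinfo, token, failures):
        # Returning localpart=None tells Synapse to send the user to the
        # username picker rather than registering them straight away.
        return {
            "localpart": None,
            "display_name": userinfo.get("name"),
        }
```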
The static resources for the username picker itself (in
https://github.com/matrix-org/synapse/tree/rav/username_picker/synapse/res/username_picker)
are essentially lifted wholesale from
https://github.com/matrix-org/matrix-synapse-saml-mozilla/tree/master/matrix_synapse_saml_mozilla/res.
As the comment says, we might want to think about making them customisable, but that can be a follow-up.
Fixes #8876.
Fixes a bug where deactivated users would appear in the user directory when their profile information was updated. Changing the profile information of a deactivated user is sometimes necessary, for example to remove their displayname or avatar, but doing so should not make them reappear in the directory: they are deactivated.
Co-authored-by: Erik Johnston <erikj@jki.re>
This is another part of my work towards fixing #8876. It moves some of the logic currently in the SAML and OIDC handlers - in particular the call to `AuthHandler.complete_sso_login` down into the `SsoHandler`.
* move simple_async_mock to test_utils
... so that it can be re-used
* Remove references to `SamlHandler._map_saml_response_to_user` from tests
This method is going away, so we can no longer use it as a test point. Instead,
factor out a higher-level method which takes a SAML object, and verify correct
behaviour by mocking out `AuthHandler.complete_sso_login`.
* changelog