Merge remote-tracking branch 'upstream/release-v1.28.0'

Commit: f6684e4e55
@@ -14,7 +14,7 @@ jobs:
          platforms: linux/amd64
      - docker_build:
          tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
-         platforms: linux/amd64,linux/arm/v7,linux/arm64
          platforms: linux/amd64,linux/arm64

  dockerhubuploadlatest:
    docker:
@@ -27,7 +27,7 @@ jobs:
      # until all of the platforms are built.
      - docker_build:
          tag: -t matrixdotorg/synapse:latest
-         platforms: linux/amd64,linux/arm/v7,linux/arm64
          platforms: linux/amd64,linux/arm64

workflows:
  build:
CHANGES.md (91 lines changed)
@@ -1,9 +1,98 @@
Synapse 1.28.0rc1 (2021-02-19)
==============================

Note that this release drops support for ARMv7 in the official Docker images, due to repeated problems building for ARMv7 (and the associated maintenance burden this entails).

This release also fixes the documentation included in v1.27.0 around the callback URI for SAML2 identity providers. If your server is configured to use single sign-on via a SAML2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.

Removal warning
---------------

The v1 list accounts API is deprecated and will be removed in a future release.
This API was undocumented and misleading. It can be replaced by the
[v2 list accounts API](https://github.com/matrix-org/synapse/blob/release-v1.28.0/docs/admin_api/user_admin_api.rst#list-accounts),
which has been available since Synapse 1.7.0 (2019-12-13).

Please check if you're using any scripts which use the admin API and replace
`GET /_synapse/admin/v1/users/<user_id>` with `GET /_synapse/admin/v2/users`.
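
For example, a script could be updated along these lines (a sketch; the server name and admin access token are placeholders):

```sh
# Deprecated v1 endpoint:
curl --header "Authorization: Bearer <admin_access_token>" \
    "https://<server>/_synapse/admin/v1/users/<user_id>"

# Replacement v2 endpoint, which lists accounts with pagination:
curl --header "Authorization: Bearer <admin_access_token>" \
    "https://<server>/_synapse/admin/v2/users?from=0&limit=10"
```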

Features
--------

- New admin API to get the context of an event: `/_synapse/admin/rooms/{roomId}/context/{eventId}`. ([\#9150](https://github.com/matrix-org/synapse/issues/9150))
- Further improvements to the user experience of registration via single sign-on. ([\#9300](https://github.com/matrix-org/synapse/issues/9300), [\#9301](https://github.com/matrix-org/synapse/issues/9301))
- Add hook to spam checker modules that allows checking file uploads and remote downloads. ([\#9311](https://github.com/matrix-org/synapse/issues/9311))
- Add support for receiving OpenID Connect authentication responses via form `POST`s rather than `GET`s. ([\#9376](https://github.com/matrix-org/synapse/issues/9376))
- Add the shadow-banning status to the admin API for user info. ([\#9400](https://github.com/matrix-org/synapse/issues/9400))


Bugfixes
--------

- Fix long-standing bug where sending email notifications would fail for rooms that the server had since left. ([\#9257](https://github.com/matrix-org/synapse/issues/9257))
- Fix bug in Synapse 1.27.0rc1 which meant the "session expired" error page during SSO registration was badly formatted. ([\#9296](https://github.com/matrix-org/synapse/issues/9296))
- Assert a maximum length for some parameters for spec compliance. ([\#9321](https://github.com/matrix-org/synapse/issues/9321), [\#9393](https://github.com/matrix-org/synapse/issues/9393))
- Fix additional errors when previewing URLs: "AttributeError 'NoneType' object has no attribute 'xpath'" and "ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.". ([\#9333](https://github.com/matrix-org/synapse/issues/9333))
- Fix a bug causing Synapse to impose the wrong type constraints on fields when processing responses from appservices to `/_matrix/app/v1/thirdparty/user/{protocol}`. ([\#9361](https://github.com/matrix-org/synapse/issues/9361))
- Fix bug where Synapse would occasionally stop reconnecting to Redis after the connection was lost. ([\#9391](https://github.com/matrix-org/synapse/issues/9391))
- Fix a long-standing bug when upgrading a room: "TypeError: '>' not supported between instances of 'NoneType' and 'int'". ([\#9395](https://github.com/matrix-org/synapse/issues/9395))
- Reduce the amount of memory used when generating the URL preview of a file that is larger than the `max_spider_size`. ([\#9421](https://github.com/matrix-org/synapse/issues/9421))
- Fix a long-standing bug in the deduplication of old presence, resulting in no deduplication. ([\#9425](https://github.com/matrix-org/synapse/issues/9425))
- The `ui_auth.session_timeout` config option can now be specified in terms of a number of seconds/minutes/etc. (see the sketch after this list). Contributed by Rishabh Arya. ([\#9426](https://github.com/matrix-org/synapse/issues/9426))
- Fix a bug introduced in v1.27.0: "TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'." related to the user directory. ([\#9428](https://github.com/matrix-org/synapse/issues/9428))
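
As an illustration of the new duration format for `ui_auth.session_timeout`, the sample configuration later in this diff expresses the timeout as a string (a sketch; the value is arbitrary):

```yaml
ui_auth:
  session_timeout: "15s"
```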

Updates to the Docker image
---------------------------

- Drop support for ARMv7 in Docker images. ([\#9433](https://github.com/matrix-org/synapse/issues/9433))


Improved Documentation
----------------------

- Reorganize CHANGELOG.md. ([\#9281](https://github.com/matrix-org/synapse/issues/9281))
- Add note to `auto_join_rooms` config option explaining existing rooms must be publicly joinable. ([\#9291](https://github.com/matrix-org/synapse/issues/9291))
- Correct name of Synapse's service file in TURN howto. ([\#9308](https://github.com/matrix-org/synapse/issues/9308))
- Fix the braces in the `oidc_providers` section of the sample config. ([\#9317](https://github.com/matrix-org/synapse/issues/9317))
- Update installation instructions on Fedora. ([\#9322](https://github.com/matrix-org/synapse/issues/9322))
- Add HTTP/2 support to the nginx example configuration. Contributed by David Vo. ([\#9390](https://github.com/matrix-org/synapse/issues/9390))
- Update docs for using Gitea as OpenID provider. ([\#9404](https://github.com/matrix-org/synapse/issues/9404))
- Document that pusher instances are shardable. ([\#9407](https://github.com/matrix-org/synapse/issues/9407))
- Fix erroneous documentation from v1.27.0 about updating the SAML2 callback URL. ([\#9434](https://github.com/matrix-org/synapse/issues/9434))


Deprecations and Removals
-------------------------

- Deprecate old admin API `GET /_synapse/admin/v1/users/<user_id>`. ([\#9429](https://github.com/matrix-org/synapse/issues/9429))


Internal Changes
----------------

- Fix 'object name reserved for internal use' errors with recent versions of SQLite. ([\#9003](https://github.com/matrix-org/synapse/issues/9003))
- Add experimental support for running Synapse with PyPy. ([\#9123](https://github.com/matrix-org/synapse/issues/9123))
- Deny access to additional IP addresses by default. ([\#9240](https://github.com/matrix-org/synapse/issues/9240))
- Update the `Cursor` type hints to better match PEP 249. ([\#9299](https://github.com/matrix-org/synapse/issues/9299))
- Add debug logging for SRV lookups. Contributed by @Bubu. ([\#9305](https://github.com/matrix-org/synapse/issues/9305))
- Improve logging for OIDC login flow. ([\#9307](https://github.com/matrix-org/synapse/issues/9307))
- Share the code for handling required attributes between the CAS and SAML handlers. ([\#9326](https://github.com/matrix-org/synapse/issues/9326))
- Clean up the code to load the metadata for OpenID Connect identity providers. ([\#9362](https://github.com/matrix-org/synapse/issues/9362))
- Convert tests to use `HomeserverTestCase`. ([\#9377](https://github.com/matrix-org/synapse/issues/9377), [\#9396](https://github.com/matrix-org/synapse/issues/9396))
- Update the version of black used to 20.8b1. ([\#9381](https://github.com/matrix-org/synapse/issues/9381))
- Allow OIDC config to override discovered values. ([\#9384](https://github.com/matrix-org/synapse/issues/9384))
- Remove some dead code from the acceptance of room invites path. ([\#9394](https://github.com/matrix-org/synapse/issues/9394))
- Clean up an unused method in the presence handler code. ([\#9408](https://github.com/matrix-org/synapse/issues/9408))


Synapse 1.27.0 (2021-02-16)
===========================

Note that this release includes a change in Synapse to use Redis as a cache, as well as a pub/sub mechanism, if Redis support is enabled for workers. No action is needed by server administrators, and we do not expect resource usage of the Redis instance to change dramatically.

-This release also changes the callback URI for OpenID Connect (OIDC) identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
This release also changes the callback URI for OpenID Connect (OIDC) and SAML2 identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 or SAML2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.

This release also changes escaping of variables in the HTML templates for SSO or email notifications. If you have customised these templates, please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
CONTRIBUTING.md (271 lines changed)
@@ -1,4 +1,31 @@
-# Contributing code to Synapse
Welcome to Synapse

This document aims to get you started with contributing to this repo!

- [1. Who can contribute to Synapse?](#1-who-can-contribute-to-synapse)
- [2. What do I need?](#2-what-do-i-need)
- [3. Get the source.](#3-get-the-source)
- [4. Install the dependencies](#4-install-the-dependencies)
  * [Under Unix (macOS, Linux, BSD, ...)](#under-unix-macos-linux-bsd-)
  * [Under Windows](#under-windows)
- [5. Get in touch.](#5-get-in-touch)
- [6. Pick an issue.](#6-pick-an-issue)
- [7. Turn coffee and documentation into code and documentation!](#7-turn-coffee-and-documentation-into-code-and-documentation)
- [8. Test, test, test!](#8-test-test-test)
  * [Run the linters.](#run-the-linters)
  * [Run the unit tests.](#run-the-unit-tests)
  * [Run the integration tests.](#run-the-integration-tests)
- [9. Submit your patch.](#9-submit-your-patch)
  * [Changelog](#changelog)
    + [How do I know what to call the changelog file before I create the PR?](#how-do-i-know-what-to-call-the-changelog-file-before-i-create-the-pr)
    + [Debian changelog](#debian-changelog)
  * [Sign off](#sign-off)
- [10. Turn feedback into better code.](#10-turn-feedback-into-better-code)
- [11. Find a new issue.](#11-find-a-new-issue)
- [Notes for maintainers on merging PRs etc](#notes-for-maintainers-on-merging-prs-etc)
- [Conclusion](#conclusion)

# 1. Who can contribute to Synapse?

Everyone is welcome to contribute code to [matrix.org
projects](https://github.com/matrix-org), provided that they are willing to
@@ -9,70 +36,179 @@ license the code under the same terms as the project's overall 'outbound'
license - in our case, this is almost always Apache Software License v2 (see
[LICENSE](LICENSE)).

-## How to contribute
# 2. What do I need?

The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download).

The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).

For some tests, you will need [a recent version of Docker](https://docs.docker.com/get-docker/).
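
To confirm these prerequisites are available, you can check their versions from a terminal (a trivial sketch; the exact versions will vary):

```sh
python3 --version
git --version
docker --version
```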

# 3. Get the source.

The preferred and easiest way to contribute changes is to fork the relevant
-project on github, and then [create a pull request](
project on GitHub, and then [create a pull request](
https://help.github.com/articles/using-pull-requests/) to ask us to pull your
changes into our repo.

-Some other points to follow:
Please base your changes on the `develop` branch.

-* Please base your changes on the `develop` branch.
```sh
git clone git@github.com:YOUR_GITHUB_USER_NAME/synapse.git
cd synapse  # change into the clone before switching branches
git checkout develop
```

-* Please follow the [code style requirements](#code-style).
If you need help getting started with git, this is beyond the scope of the document, but you
can find many good git tutorials on the web.

-* Please include a [changelog entry](#changelog) with each PR.
# 4. Install the dependencies

-* Please [sign off](#sign-off) your contribution.
## Under Unix (macOS, Linux, BSD, ...)

-* Please keep an eye on the pull request for feedback from the [continuous
-  integration system](#continuous-integration-and-testing) and try to fix any
-  errors that come up.
Once you have installed Python 3 and added the source, please open a terminal and
set up a *virtualenv*, as follows:

-* If you need to [update your PR](#updating-your-pull-request), just add new
-  commits to your branch rather than rebasing.
```sh
cd path/where/you/have/cloned/the/repository
python3 -m venv ./env
source ./env/bin/activate
pip install -e ".[all,lint,mypy,test]"
pip install tox
```

-## Code style
This will install the developer dependencies for the project.

## Under Windows

TBD

# 5. Get in touch.

Join our developer community on Matrix: #synapse-dev:matrix.org!

# 6. Pick an issue.

Fix your favorite problem or perhaps find a [Good First Issue](https://github.com/matrix-org/synapse/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+First+Issue%22)
to work on.

# 7. Turn coffee and documentation into code and documentation!

Synapse's code style is documented [here](docs/code_style.md). Please follow
it, including the conventions for the [sample configuration
file](docs/code_style.md#configuration-file-format).

-Many of the conventions are enforced by scripts which are run as part of the
-[continuous integration system](#continuous-integration-and-testing). To help
-check if you have followed the code style, you can run `scripts-dev/lint.sh`
-locally. You'll need python 3.6 or later, and to install a number of tools:
-
-```
-# Install the dependencies
-pip install -e ".[lint,mypy]"
-
-# Run the linter script
-```
There is a growing amount of documentation located in the [docs](docs)
directory. This documentation is intended primarily for sysadmins running their
own Synapse instance, as well as developers interacting externally with
Synapse. [docs/dev](docs/dev) exists primarily to house documentation for
Synapse developers. [docs/admin_api](docs/admin_api) houses documentation
regarding Synapse's Admin API, which is used mostly by sysadmins and external
service developers.

If you add new files to either of these folders, please use [GitHub-Flavoured
Markdown](https://guides.github.com/features/mastering-markdown/).

Some documentation also exists in [Synapse's GitHub
Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
contributed to by community authors.

# 8. Test, test, test!
<a name="test-test-test"></a>

While you're developing and before submitting a patch, you'll
want to test your code.

## Run the linters.

The linters look at your code and do two things:

- ensure that your code follows the coding style adopted by the project;
- catch a number of errors in your code.

They're pretty fast, don't hesitate!

```sh
source ./env/bin/activate
./scripts-dev/lint.sh
```

-**Note that the script does not just test/check, but also reformats code, so you
-may wish to ensure any new code is committed first**.
Note that this script *will modify your files* to fix styling errors.
Make sure that you have saved all your files.

-By default, this script checks all files and can take some time; if you alter
-only certain files, you might wish to specify paths as arguments to reduce the
-run-time:
If you wish to restrict the linters to only the files changed since the last commit
(much faster!), you can instead run:

```sh
source ./env/bin/activate
./scripts-dev/lint.sh -d
```

Or if you know exactly which files you wish to lint, you can instead run:

```sh
source ./env/bin/activate
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
```

-You can also provide the `-d` option, which will lint the files that have been
-changed since the last git commit. This will often be significantly faster than
-linting the whole codebase.
## Run the unit tests.

-Before pushing new changes, ensure they don't produce linting errors. Commit any
-files that were corrected.
The unit tests run parts of Synapse, including your changes, to see if anything
was broken. They are slower than the linters but will typically catch more errors.

```sh
source ./env/bin/activate
trial tests
```

If you wish to only run *some* unit tests, you may specify
another module instead of `tests` - or a test class or a method:

```sh
source ./env/bin/activate
trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_invite
```

If your tests fail, you may wish to look at the logs:

```sh
less _trial_temp/test.log
```

## Run the integration tests.

The integration tests are a more comprehensive suite of tests. They
run a full version of Synapse, including your changes, to check if
anything was broken. They are slower than the unit tests but will
typically catch more errors.

The following command will let you run the integration test with the most common
configuration:

```sh
docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:py37
```

This configuration should generally cover your needs. For more details about other configurations, see [documentation in the SyTest repo](https://github.com/matrix-org/sytest/blob/develop/docker/README.md).

# 9. Submit your patch.

Once you're happy with your patch, it's time to prepare a Pull Request.

To prepare a Pull Request, please:

1. verify that [all the tests pass](#test-test-test), including the coding style;
2. [sign off](#sign-off) your contribution;
3. `git push` your commit to your fork of Synapse;
4. on GitHub, [create the Pull Request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request);
5. add a [changelog entry](#changelog) and push it to your Pull Request;
6. for most contributors, that's all - however, if you are a member of the organization `matrix-org`, on GitHub, please request a review from `matrix.org / Synapse Core`.

Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.

## Changelog

@@ -156,24 +292,6 @@ directory, you will need both a regular newsfragment *and* an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)

-## Documentation
-
-There is a growing amount of documentation located in the [docs](docs)
-directory. This documentation is intended primarily for sysadmins running their
-own Synapse instance, as well as developers interacting externally with
-Synapse. [docs/dev](docs/dev) exists primarily to house documentation for
-Synapse developers. [docs/admin_api](docs/admin_api) houses documentation
-regarding Synapse's Admin API, which is used mostly by sysadmins and external
-service developers.
-
-New files added to both folders should be written in [Github-Flavoured
-Markdown](https://guides.github.com/features/mastering-markdown/), and attempts
-should be made to migrate existing documents to markdown where possible.
-
-Some documentation also exists in [Synapse's Github
-Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
-contributed to by community authors.

## Sign off

In order to have a concrete record that your contribution is intentional
@@ -240,47 +358,36 @@ Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
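
For example, a typical sign-off flow looks like the following sketch (the name, email, and commit message are placeholders):

```sh
git config user.name "Your Name"
git config user.email "you@example.org"

# -s appends the "Signed-off-by: Your Name <you@example.org>" line
git commit -s -m "Describe your change"
```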

-## Continuous integration and testing
-
-[Buildkite](https://buildkite.com/matrix-dot-org/synapse) will automatically
-run a series of checks and tests against any PR which is opened against the
-project; if your change breaks the build, this will be shown in GitHub, with
-links to the build results. If your build fails, please try to fix the errors
-and update your branch.
# 10. Turn feedback into better code.

-To run unit tests in a local development environment, you can use:
Once the Pull Request is opened, you will see a few things:

-- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
-  for SQLite-backed Synapse on Python 3.5.
-- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
-- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
-  (requires a running local PostgreSQL with access to create databases).
-- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
-  (requires Docker). Entirely self-contained, recommended if you don't want to
-  set up PostgreSQL yourself.
1. our automated CI (Continuous Integration) pipeline will run (again) the linters, the unit tests, the integration tests and more;
2. one or more of the developers will take a look at your Pull Request and offer feedback.

-Docker images are available for running the integration tests (SyTest) locally,
-see the [documentation in the SyTest repo](
-https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
-information.
From this point, you should:

-## Updating your pull request
1. Look at the results of the CI pipeline.
   - If there is any error, fix the error.
2. If a developer has requested changes, make these changes and let us know if it is ready for a developer to review again.
3. Create a new commit with the changes.
   - Please do NOT overwrite the history. New commits make the reviewer's life easier.
   - Push these commits to your Pull Request.
4. Back to 1.

-If you decide to make changes to your pull request - perhaps to address issues
-raised in a review, or to fix problems highlighted by [continuous
-integration](#continuous-integration-and-testing) - just add new commits to your
-branch, and push to GitHub. The pull request will automatically be updated.
Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!

-Please **avoid** rebasing your branch, especially once the PR has been
-reviewed: doing so makes it very difficult for a reviewer to see what has
-changed since a previous review.
# 11. Find a new issue.

-## Notes for maintainers on merging PRs etc
By now, you know the drill!

# Notes for maintainers on merging PRs etc

There are some notes for those with commit access to the project on how we
manage git [here](docs/dev/git.md).

-## Conclusion
# Conclusion

That's it! Matrix is a very open and collaborative project as you might expect
given our obsession with open communication. If we're going to successfully
INSTALL.md (20 lines changed)
@@ -151,29 +151,15 @@ sudo pacman -S base-devel python python-pip \

##### CentOS/Fedora

-Installing prerequisites on CentOS 8 or Fedora>26:
Installing prerequisites on CentOS or Fedora Linux:

```sh
sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
-                libwebp-devel tk-devel redhat-rpm-config \
-                python3-virtualenv libffi-devel openssl-devel
                 libwebp-devel libxml2-devel libxslt-devel libpq-devel \
                 python3-virtualenv libffi-devel openssl-devel python3-devel
sudo dnf groupinstall "Development Tools"
```

-Installing prerequisites on CentOS 7 or Fedora<=25:
-
-```sh
-sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
-                 lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \
-                 python3-virtualenv libffi-devel openssl-devel
-sudo yum groupinstall "Development Tools"
-```
-
Note that Synapse does not support versions of SQLite before 3.11, and CentOS 7
uses SQLite 3.7. You may be able to work around this by installing a more
recent SQLite version, but it is recommended that you instead use a Postgres
database: see [docs/postgres.md](docs/postgres.md).

##### macOS

Installing prerequisites on macOS:
UPGRADE.rst (23 lines changed)
@@ -88,20 +88,21 @@ for example:

Upgrading to v1.27.0
====================

-Changes to callback URI for OAuth2 / OpenID Connect
----------------------------------------------------
Changes to callback URI for OAuth2 / OpenID Connect and SAML2
-------------------------------------------------------------

-This version changes the URI used for callbacks from OAuth2 identity providers. If
-your server is configured for single sign-on via an OpenID Connect or OAuth2 identity
-provider, you will need to add ``[synapse public baseurl]/_synapse/client/oidc/callback``
-to the list of permitted "redirect URIs" at the identity provider.
This version changes the URI used for callbacks from OAuth2 and SAML2 identity providers:

-See `docs/openid.md <docs/openid.md>`_ for more information on setting up OpenID
-Connect.
* If your server is configured for single sign-on via an OpenID Connect or OAuth2 identity
  provider, you will need to add ``[synapse public baseurl]/_synapse/client/oidc/callback``
  to the list of permitted "redirect URIs" at the identity provider.

-(Note: a similar change is being made for SAML2; in this case the old URI
-``[synapse public baseurl]/_matrix/saml2`` is being deprecated, but will continue to
-work, so no immediate changes are required for existing installations.)
  See `docs/openid.md <docs/openid.md>`_ for more information on setting up OpenID
  Connect.

* If your server is configured for single sign-on via a SAML2 identity provider, you will
  need to add ``[synapse public baseurl]/_synapse/client/saml2/authn_response`` as a permitted
  "ACS location" (also known as "allowed callback URLs") at the identity provider.

Changes to HTML templates
-------------------------
@@ -92,7 +92,7 @@ class SynapseCmd(cmd.Cmd):
        return self.config["user"].split(":")[1]

    def do_config(self, line):
-       """ Show the config for this client: "config"
        """Show the config for this client: "config"
        Edit a key value mapping: "config key value" e.g. "config token 1234"
        Config variables:
            user: The username to auth with.
@@ -360,7 +360,7 @@ class SynapseCmd(cmd.Cmd):
            print(e)

    def do_topic(self, line):
-       """"topic [set|get] <roomid> [<newtopic>]"
        """ "topic [set|get] <roomid> [<newtopic>]"
        Set the topic for a room: topic set <roomid> <newtopic>
        Get the topic for a room: topic get <roomid>
        """
@@ -690,7 +690,7 @@ class SynapseCmd(cmd.Cmd):
        self._do_presence_state(2, line)

    def _parse(self, line, keys, force_keys=False):
-       """ Parses the given line.
        """Parses the given line.

        Args:
            line : The line to parse
@@ -721,7 +721,7 @@ class SynapseCmd(cmd.Cmd):
        query_params={"access_token": None},
        alt_text=None,
    ):
-       """ Runs an HTTP request and pretty prints the output.
        """Runs an HTTP request and pretty prints the output.

        Args:
            method: HTTP method
@@ -23,11 +23,10 @@ from twisted.web.http_headers import Headers


class HttpClient:
-   """ Interface for talking json over http
-   """
    """Interface for talking json over http"""

    def put_json(self, url, data):
-       """ Sends the specifed json data using PUT
        """Sends the specifed json data using PUT

        Args:
            url (str): The URL to PUT data to.
@@ -41,7 +40,7 @@ class HttpClient:
        pass

    def get_json(self, url, args=None):
-       """ Gets some json from the given host homeserver and path
        """Gets some json from the given host homeserver and path

        Args:
            url (str): The URL to GET data from.
@@ -58,7 +57,7 @@ class HttpClient:


class TwistedHttpClient(HttpClient):
-   """ Wrapper around the twisted HTTP client api.
    """Wrapper around the twisted HTTP client api.

    Attributes:
        agent (twisted.web.client.Agent): The twisted Agent used to send the
@@ -87,8 +86,7 @@ class TwistedHttpClient(HttpClient):
        defer.returnValue(json.loads(body))

    def _create_put_request(self, url, json_data, headers_dict={}):
-       """ Wrapper of _create_request to issue a PUT request
-       """
        """Wrapper of _create_request to issue a PUT request"""

        if "Content-Type" not in headers_dict:
            raise defer.error(RuntimeError("Must include Content-Type header for PUTs"))
@@ -98,8 +96,7 @@ class TwistedHttpClient(HttpClient):
        )

    def _create_get_request(self, url, headers_dict={}):
-       """ Wrapper of _create_request to issue a GET request
-       """
        """Wrapper of _create_request to issue a GET request"""
        return self._create_request("GET", url, headers_dict=headers_dict)

    @defer.inlineCallbacks
@@ -127,8 +124,7 @@ class TwistedHttpClient(HttpClient):

    @defer.inlineCallbacks
    def _create_request(self, method, url, producer=None, headers_dict={}):
-       """ Creates and sends a request to the given url
-       """
        """Creates and sends a request to the given url"""
        headers_dict["User-Agent"] = ["Synapse Cmd Client"]

        retries_left = 5
@@ -185,8 +181,7 @@ class _RawProducer:


class _JsonProducer:
-   """ Used by the twisted http client to create the HTTP body from json
-   """
    """Used by the twisted http client to create the HTTP body from json"""

    def __init__(self, jsn):
        self.data = jsn
@@ -63,8 +63,7 @@ class CursesStdIO:
        self.redraw()

    def redraw(self):
-       """ method for redisplaying lines
-       based on internal list of lines """
        """method for redisplaying lines based on internal list of lines"""

        self.stdscr.clear()
        self.paintStatus(self.statusText)
@@ -56,7 +56,7 @@ def excpetion_errback(failure):


class InputOutput:
-   """ This is responsible for basic I/O so that a user can interact with
    """This is responsible for basic I/O so that a user can interact with
    the example app.
    """

@@ -68,8 +68,7 @@ class InputOutput:
        self.server = server

    def on_line(self, line):
-       """ This is where we process commands.
-       """
        """This is where we process commands."""

        try:
            m = re.match(r"^join (\S+)$", line)
@@ -133,7 +132,7 @@ class IOLoggerHandler(logging.Handler):


class Room:
-   """ Used to store (in memory) the current membership state of a room, and
    """Used to store (in memory) the current membership state of a room, and
    which home servers we should send PDUs associated with the room to.
    """

@@ -148,8 +147,7 @@ class Room:
        self.have_got_metadata = False

    def add_participant(self, participant):
-       """ Someone has joined the room
-       """
        """Someone has joined the room"""
        self.participants.add(participant)
        self.invited.discard(participant)

@@ -160,14 +158,13 @@ class Room:
        self.oldest_server = server

    def add_invited(self, invitee):
-       """ Someone has been invited to the room
-       """
        """Someone has been invited to the room"""
        self.invited.add(invitee)
        self.servers.add(origin_from_ucid(invitee))


class HomeServer(ReplicationHandler):
-   """ A very basic home server implentation that allows people to join a
    """A very basic home server implentation that allows people to join a
    room and then invite other people.
    """

@@ -181,8 +178,7 @@ class HomeServer(ReplicationHandler):
        self.output = output

    def on_receive_pdu(self, pdu):
-       """ We just received a PDU
-       """
        """We just received a PDU"""
        pdu_type = pdu.pdu_type

        if pdu_type == "sy.room.message":
@@ -199,23 +195,20 @@ class HomeServer(ReplicationHandler):
        )

    def _on_message(self, pdu):
-       """ We received a message
-       """
        """We received a message"""
        self.output.print_line(
            "#%s %s %s" % (pdu.context, pdu.content["sender"], pdu.content["body"])
        )

    def _on_join(self, context, joinee):
-       """ Someone has joined a room, either a remote user or a local user
-       """
        """Someone has joined a room, either a remote user or a local user"""
        room = self._get_or_create_room(context)
        room.add_participant(joinee)

        self.output.print_line("#%s %s %s" % (context, joinee, "*** JOINED"))

    def _on_invite(self, origin, context, invitee):
-       """ Someone has been invited
-       """
        """Someone has been invited"""
        room = self._get_or_create_room(context)
        room.add_invited(invitee)

@@ -228,8 +221,7 @@ class HomeServer(ReplicationHandler):

    @defer.inlineCallbacks
    def send_message(self, room_name, sender, body):
-       """ Send a message to a room!
-       """
        """Send a message to a room!"""
        destinations = yield self.get_servers_for_context(room_name)

        try:
@@ -247,8 +239,7 @@ class HomeServer(ReplicationHandler):

    @defer.inlineCallbacks
    def join_room(self, room_name, sender, joinee):
-       """ Join a room!
-       """
        """Join a room!"""
        self._on_join(room_name, joinee)

        destinations = yield self.get_servers_for_context(room_name)
@@ -269,8 +260,7 @@ class HomeServer(ReplicationHandler):

    @defer.inlineCallbacks
    def invite_to_room(self, room_name, sender, invitee):
-       """ Invite someone to a room!
-       """
        """Invite someone to a room!"""
        self._on_invite(self.server_name, room_name, invitee)

        destinations = yield self.get_servers_for_context(room_name)
@@ -193,15 +193,12 @@ class TrivialXmppClient:
            time.sleep(7)
            print("SSRC spammer started")
            while self.running:
-               ssrcMsg = (
-                   "<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>"
-                   % {
-                       "tojid": "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid),
-                       "nick": self.userId,
-                       "assrc": self.ssrcs["audio"],
-                       "vssrc": self.ssrcs["video"],
-                   }
-               )
                ssrcMsg = "<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>" % {
                    "tojid": "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid),
                    "nick": self.userId,
                    "assrc": self.ssrcs["audio"],
                    "vssrc": self.ssrcs["video"],
                }
                res = self.sendIq(ssrcMsg)
                print("reply from ssrc announce: ", res)
                time.sleep(10)
@@ -10,6 +10,7 @@
  * [Undoing room shutdowns](#undoing-room-shutdowns)
- [Make Room Admin API](#make-room-admin-api)
- [Forward Extremities Admin API](#forward-extremities-admin-api)
- [Event Context API](#event-context-api)

# List Room API

@@ -594,3 +595,121 @@ that were deleted.
    "deleted": 1
}
```

# Event Context API

This API lets a client find the context of an event. This is designed primarily to investigate abuse reports.

```
GET /_synapse/admin/v1/rooms/<room_id>/context/<event_id>
```

This API mimics [GET /_matrix/client/r0/rooms/{roomId}/context/{eventId}](https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-rooms-roomid-context-eventid). Please refer to the link for all details on parameters and response.

Example response:

```json
{
  "end": "t29-57_2_0_2",
  "events_after": [
    {
      "content": {
        "body": "This is an example text message",
        "msgtype": "m.text",
        "format": "org.matrix.custom.html",
        "formatted_body": "<b>This is an example text message</b>"
      },
      "type": "m.room.message",
      "event_id": "$143273582443PhrSn:example.org",
      "room_id": "!636q39766251:example.com",
      "sender": "@example:example.org",
      "origin_server_ts": 1432735824653,
      "unsigned": {
        "age": 1234
      }
    }
  ],
  "event": {
    "content": {
      "body": "filename.jpg",
      "info": {
        "h": 398,
        "w": 394,
        "mimetype": "image/jpeg",
        "size": 31037
      },
      "url": "mxc://example.org/JWEIFJgwEIhweiWJE",
      "msgtype": "m.image"
    },
    "type": "m.room.message",
    "event_id": "$f3h4d129462ha:example.com",
    "room_id": "!636q39766251:example.com",
    "sender": "@example:example.org",
    "origin_server_ts": 1432735824653,
    "unsigned": {
      "age": 1234
    }
  },
  "events_before": [
    {
      "content": {
        "body": "something-important.doc",
        "filename": "something-important.doc",
        "info": {
          "mimetype": "application/msword",
          "size": 46144
        },
        "msgtype": "m.file",
        "url": "mxc://example.org/FHyPlCeYUSFFxlgbQYZmoEoe"
      },
      "type": "m.room.message",
      "event_id": "$143273582443PhrSn:example.org",
      "room_id": "!636q39766251:example.com",
      "sender": "@example:example.org",
      "origin_server_ts": 1432735824653,
      "unsigned": {
        "age": 1234
      }
    }
  ],
  "start": "t27-54_2_0_2",
  "state": [
    {
      "content": {
        "creator": "@example:example.org",
        "room_version": "1",
        "m.federate": true,
        "predecessor": {
          "event_id": "$something:example.org",
          "room_id": "!oldroom:example.org"
        }
      },
      "type": "m.room.create",
      "event_id": "$143273582443PhrSn:example.org",
      "room_id": "!636q39766251:example.com",
      "sender": "@example:example.org",
      "origin_server_ts": 1432735824653,
      "unsigned": {
        "age": 1234
      },
      "state_key": ""
    },
    {
      "content": {
        "membership": "join",
        "avatar_url": "mxc://example.org/SEsfnsuifSDFSSEF",
        "displayname": "Alice Margatroid"
      },
      "type": "m.room.member",
      "event_id": "$143273582443PhrSn:example.org",
      "room_id": "!636q39766251:example.com",
      "sender": "@example:example.org",
      "origin_server_ts": 1432735824653,
      "unsigned": {
        "age": 1234
      },
      "state_key": "@alice:example.org"
    }
  ]
}
```
@@ -29,8 +29,9 @@ It returns a JSON body like the following:
        }
    ],
    "avatar_url": "<avatar_url>",
-   "admin": false,
-   "deactivated": false,
    "admin": 0,
    "deactivated": 0,
    "shadow_banned": 0,
    "password_hash": "$2b$12$p9B4GkqYdRTPGD",
    "creation_ts": 1560432506,
    "appservice_id": null,
@@ -150,6 +151,7 @@ A JSON body is returned with the following shape:
            "admin": 0,
            "user_type": null,
            "deactivated": 0,
            "shadow_banned": 0,
            "displayname": "<User One>",
            "avatar_url": null
        }, {
@@ -158,6 +160,7 @@ A JSON body is returned with the following shape:
            "admin": 1,
            "user_type": null,
            "deactivated": 0,
            "shadow_banned": 0,
            "displayname": "<User Two>",
            "avatar_url": "<avatar_url>"
        }
@@ -262,7 +265,7 @@ The following actions are performed when deactivating an user:
- Reject all pending invites
- Remove all account validity information related to the user

-The following additional actions are performed during deactivation if``erase``
The following additional actions are performed during deactivation if ``erase``
is set to ``true``:

- Remove the user's display name
@@ -8,16 +8,16 @@ errors in code.

The necessary tools are detailed below.

First install them with:

    pip install -e ".[lint,mypy]"

- **black**

  The Synapse codebase uses [black](https://pypi.org/project/black/)
  as an opinionated code formatter, ensuring all committed code is
  properly formatted.

-  First install `black` with:
-
-      pip install --upgrade black

  Have `black` auto-format your code (it shouldn't change any
  functionality) with:

@@ -28,10 +28,6 @@ The necessary tools are detailed below.

  `flake8` is a code checking tool. We require code to pass `flake8`
  before being merged into the codebase.

-  Install `flake8` with:
-
-      pip install --upgrade flake8 flake8-comprehensions

  Check all application and test code with:

      flake8 synapse tests

@@ -41,10 +37,6 @@ The necessary tools are detailed below.

  `isort` ensures imports are nicely formatted, and can suggest and
  auto-fix issues such as double-importing.

-  Install `isort` with:
-
-      pip install --upgrade isort

  Auto-fix imports with:

      isort -rc synapse tests
@@ -365,7 +365,7 @@ login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.

1. Create a new application.
-2. Add this Callback URL: `[synapse public baseurl]/_synapse/oidc/callback`
2. Add this Callback URL: `[synapse public baseurl]/_synapse/client/oidc/callback`

Synapse config:

@@ -388,3 +388,25 @@ oidc_providers:
        localpart_template: "{{ user.login }}"
        display_name_template: "{{ user.full_name }}"
```

### XWiki

Install [OpenID Connect Provider](https://extensions.xwiki.org/xwiki/bin/view/Extension/OpenID%20Connect/OpenID%20Connect%20Provider/) extension in your [XWiki](https://www.xwiki.org) instance.

Synapse config:

```yaml
oidc_providers:
  - idp_id: xwiki
    idp_name: "XWiki"
    issuer: "https://myxwikihost/xwiki/oidc/"
    client_id: "your-client-id"  # TO BE FILLED
    # Needed until https://github.com/matrix-org/synapse/issues/9212 is fixed
    client_secret: "dontcare"
    scopes: ["openid", "profile"]
    user_profile_method: "userinfo_endpoint"
    user_mapping_provider:
      config:
        localpart_template: "{{ user.preferred_username }}"
        display_name_template: "{{ user.name }}"
```
@@ -40,12 +40,12 @@ the reverse proxy and the homeserver.

```
server {
-   listen 443 ssl;
-   listen [::]:443 ssl;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # For the federation port
-   listen 8448 ssl default_server;
-   listen [::]:8448 ssl default_server;
    listen 8448 ssl http2 default_server;
    listen [::]:8448 ssl http2 default_server;

    server_name matrix.example.com;
@@ -165,6 +165,7 @@ pid_file: DATADIR/homeserver.pid
#  - '100.64.0.0/10'
#  - '192.0.0.0/24'
#  - '169.254.0.0/16'
#  - '192.88.99.0/24'
#  - '198.18.0.0/15'
#  - '192.0.2.0/24'
#  - '198.51.100.0/24'
@@ -173,6 +174,9 @@ pid_file: DATADIR/homeserver.pid
#  - '::1/128'
#  - 'fe80::/10'
#  - 'fc00::/7'
#  - '2001:db8::/32'
#  - 'ff00::/8'
#  - 'fec0::/10'

# List of IP address CIDR ranges that should be allowed for federation,
# identity servers, push servers, and for checking key validity for
@@ -990,6 +994,7 @@ media_store_path: "DATADIR/media_store"
#  - '100.64.0.0/10'
#  - '192.0.0.0/24'
#  - '169.254.0.0/16'
#  - '192.88.99.0/24'
#  - '198.18.0.0/15'
#  - '192.0.2.0/24'
#  - '198.51.100.0/24'
@@ -998,6 +1003,9 @@ media_store_path: "DATADIR/media_store"
#  - '::1/128'
#  - 'fe80::/10'
#  - 'fc00::/7'
#  - '2001:db8::/32'
#  - 'ff00::/8'
#  - 'fec0::/10'

# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.
@@ -1318,6 +1326,8 @@ account_threepid_delegates:
# By default, any room aliases included in this list will be created
# as a publicly joinable room when the first user registers for the
# homeserver. This behaviour can be customised with the settings below.
# If the room already exists, make certain it is a publicly joinable
# room. The join rule of the room must be set to 'public'.
#
#auto_join_rooms:
#  - "#example:example.com"
@@ -1860,9 +1870,9 @@ oidc_providers:
#       user_mapping_provider:
#         config:
#           subject_claim: "id"
-#           localpart_template: "{ user.login }"
-#           display_name_template: "{ user.name }"
-#           email_template: "{ user.email }"
#           localpart_template: "{{ user.login }}"
#           display_name_template: "{{ user.name }}"
#           email_template: "{{ user.email }}"

# For use with Keycloak
#
@@ -1889,8 +1899,8 @@ oidc_providers:
#       user_mapping_provider:
#         config:
#           subject_claim: "id"
-#           localpart_template: "{ user.login }"
-#           display_name_template: "{ user.name }"
#           localpart_template: "{{ user.login }}"
#           display_name_template: "{{ user.name }}"


# Enable Central Authentication Service (CAS) for registration and login.
@@ -2218,11 +2228,11 @@ password_config:
   #require_uppercase: true

ui_auth:
-   # The number of milliseconds to allow a user-interactive authentication
-   # session to be active.
    # The amount of time to allow a user-interactive authentication session
    # to be active.
    #
    # This defaults to 0, meaning the user is queried for their credentials
-   # before every action, but this can be overridden to alow a single
    # before every action, but this can be overridden to allow a single
    # validation to be re-used. This weakens the protections afforded by
    # the user-interactive authentication process, by allowing for multiple
    # (and potentially different) operations to use the same validation session.
@@ -2230,7 +2240,7 @@ ui_auth:
    # Uncomment below to allow for credential validation to last for 15
    # seconds.
    #
-   #session_timeout: 15000
    #session_timeout: "15s"


# Configuration for sending emails from Synapse.
@@ -61,6 +61,9 @@ class ExampleSpamChecker:

    async def check_registration_for_spam(self, email_threepid, username, request_info):
        return RegistrationBehaviour.ALLOW  # allow all registrations

    async def check_media_file_for_spam(self, file_wrapper, file_info):
        return False  # allow all media
```

## Configuration
@@ -187,7 +187,7 @@ After updating the homeserver configuration, you must restart synapse:

    ```
* If you use systemd:
    ```
-   systemctl restart synapse.service
    systemctl restart matrix-synapse.service
    ```

... and then reload any clients (or wait an hour for them to refresh their
settings).
@@ -276,7 +276,8 @@ using):

Ensure that all SSO logins go to a single process.
For multiple workers not handling the SSO endpoints properly, see
-[#7530](https://github.com/matrix-org/synapse/issues/7530).
[#7530](https://github.com/matrix-org/synapse/issues/7530) and
[#9427](https://github.com/matrix-org/synapse/issues/9427).

Note that a HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.
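
A minimal sketch of such a listener in a worker configuration (the port number is an arbitrary example):

```yaml
worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client, federation]
```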

@@ -373,7 +374,15 @@ Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set `start_pushers: False` in the
shared configuration file to stop the main synapse sending push notifications.

-Note this worker cannot be load-balanced: only one instance should be active.
To run multiple instances at once the `pusher_instances` option should list all
pusher instances by their worker name, e.g.:

```yaml
pusher_instances:
  - pusher_worker1
  - pusher_worker2
```


### `synapse.app.appservice`
mypy.ini (1 line changed)
@@ -23,6 +23,7 @@ files =
  synapse/events/validator.py,
  synapse/events/spamcheck.py,
  synapse/federation,
  synapse/groups,
  synapse/handlers,
  synapse/http/client.py,
  synapse/http/federation/matrix_federation_agent.py,
@@ -162,12 +162,23 @@ else
fi

# Delete schema_version, applied_schema_deltas and applied_module_schemas tables
# Also delete any shadow tables from fts4
# This needs to be done after synapse_port_db is run
echo "Dropping unwanted db tables..."
SQL="
DROP TABLE schema_version;
DROP TABLE applied_schema_deltas;
DROP TABLE applied_module_schemas;
DROP TABLE event_search_content;
DROP TABLE event_search_segments;
DROP TABLE event_search_segdir;
DROP TABLE event_search_docsize;
DROP TABLE event_search_stat;
DROP TABLE user_directory_search_content;
DROP TABLE user_directory_search_segments;
DROP TABLE user_directory_search_segdir;
DROP TABLE user_directory_search_docsize;
DROP TABLE user_directory_search_stat;
"
sqlite3 "$SQLITE_DB" <<< "$SQL"
psql $POSTGRES_DB_NAME -U "$POSTGRES_USERNAME" -w <<< "$SQL"
@@ -87,7 +87,9 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
        arg_kinds.append(ARG_NAMED_OPT)  # Arg is an optional kwarg.

    signature = signature.copy_modified(
-       arg_types=arg_types, arg_names=arg_names, arg_kinds=arg_kinds,
        arg_types=arg_types,
        arg_names=arg_names,
        arg_kinds=arg_kinds,
    )

    return signature
setup.py (2 lines changed)
@@ -97,7 +97,7 @@ CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS)
# We pin black so that our tests don't start failing on new releases.
CONDITIONAL_REQUIREMENTS["lint"] = [
    "isort==5.7.0",
-   "black==19.10b0",
    "black==20.8b1",
    "flake8-comprehensions",
    "flake8",
]
@@ -89,12 +89,16 @@ class SortedDict(Dict[_KT, _VT]):
    def __reduce__(
        self,
    ) -> Tuple[
-       Type[SortedDict[_KT, _VT]], Tuple[Callable[[_KT], Any], List[Tuple[_KT, _VT]]],
        Type[SortedDict[_KT, _VT]],
        Tuple[Callable[[_KT], Any], List[Tuple[_KT, _VT]]],
    ]: ...
    def __repr__(self) -> str: ...
    def _check(self) -> None: ...
    def islice(
-       self, start: Optional[int] = ..., stop: Optional[int] = ..., reverse=bool,
        self,
        start: Optional[int] = ...,
        stop: Optional[int] = ...,
        reverse=bool,
    ) -> Iterator[_KT]: ...
    def bisect_left(self, value: _KT) -> int: ...
    def bisect_right(self, value: _KT) -> int: ...
@@ -31,7 +31,9 @@ class SortedList(MutableSequence[_T]):

    DEFAULT_LOAD_FACTOR: int = ...
    def __init__(
-       self, iterable: Optional[Iterable[_T]] = ..., key: Optional[_Key[_T]] = ...,
        self,
        iterable: Optional[Iterable[_T]] = ...,
        key: Optional[_Key[_T]] = ...,
    ): ...
    # NB: currently mypy does not honour return type, see mypy #3307
    @overload
@@ -76,10 +78,18 @@ class SortedList(MutableSequence[_T]):
    def __len__(self) -> int: ...
    def reverse(self) -> None: ...
    def islice(
-       self, start: Optional[int] = ..., stop: Optional[int] = ..., reverse=bool,
        self,
        start: Optional[int] = ...,
        stop: Optional[int] = ...,
        reverse=bool,
    ) -> Iterator[_T]: ...
    def _islice(
-       self, min_pos: int, min_idx: int, max_pos: int, max_idx: int, reverse: bool,
        self,
        min_pos: int,
        min_idx: int,
        max_pos: int,
        max_idx: int,
        reverse: bool,
    ) -> Iterator[_T]: ...
    def irange(
        self,
@ -48,7 +48,7 @@ try:
except ImportError:
pass
__version__ = "1.27.0"
__version__ = "1.28.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when
@ -168,7 +168,7 @@ class Auth:
rights: str = "access",
allow_expired: bool = False,
) -> synapse.types.Requester:
""" Get a registered user's ID.
"""Get a registered user's ID.
Args:
request: An HTTP request with an access_token query parameter.
@ -294,9 +294,12 @@ class Auth:
return user_id, app_service
async def get_user_by_access_token(
self, token: str, rights: str = "access", allow_expired: bool = False,
self,
token: str,
rights: str = "access",
allow_expired: bool = False,
) -> TokenLookupResult:
""" Validate access token and get user_id from it
"""Validate access token and get user_id from it
Args:
token: The access token to get the user by
@ -489,7 +492,7 @@ class Auth:
return service
async def is_server_admin(self, user: UserID) -> bool:
""" Check if the given user is a local server admin.
"""Check if the given user is a local server admin.
Args:
user: user to check
@ -500,7 +503,10 @@ class Auth:
return await self.store.is_server_admin(user)
def compute_auth_events(
self, event, current_state_ids: StateMap[str], for_verification: bool = False,
self,
event,
current_state_ids: StateMap[str],
for_verification: bool = False,
) -> List[str]:
"""Given an event and current state return the list of event IDs used
to auth an event.
@ -27,6 +27,11 @@ MAX_ALIAS_LENGTH = 255
# the maximum length for a user id is 255 characters
MAX_USERID_LENGTH = 255
# The maximum length for a group id is 255 characters
MAX_GROUPID_LENGTH = 255
MAX_GROUP_CATEGORYID_LENGTH = 255
MAX_GROUP_ROLEID_LENGTH = 255
class Membership:
@ -128,8 +133,7 @@ class UserTypes:
class RelationTypes:
"""The types of relations known to this server.
"""
"""The types of relations known to this server."""
ANNOTATION = "m.annotation"
REPLACE = "m.replace"
@ -390,8 +390,7 @@ class InvalidCaptchaError(SynapseError):
class LimitExceededError(SynapseError):
"""A client has sent too many requests and is being throttled.
"""
"""A client has sent too many requests and is being throttled."""
def __init__(
self,
@ -408,8 +407,7 @@ class LimitExceededError(SynapseError):
class RoomKeysVersionError(SynapseError):
"""A client has tried to upload to a non-current version of the room_keys store
"""
"""A client has tried to upload to a non-current version of the room_keys store"""
def __init__(self, current_version: str):
"""
@ -426,7 +424,9 @@ class UnsupportedRoomVersionError(SynapseError):
def __init__(self, msg: str = "Homeserver does not support this room version"):
super().__init__(
code=400, msg=msg, errcode=Codes.UNSUPPORTED_ROOM_VERSION,
code=400,
msg=msg,
errcode=Codes.UNSUPPORTED_ROOM_VERSION,
)
@ -461,8 +461,7 @@ class IncompatibleRoomVersionError(SynapseError):
class PasswordRefusedError(SynapseError):
"""A password has been refused, either during password reset/change or registration.
"""
"""A password has been refused, either during password reset/change or registration."""
def __init__(
self,
@ -470,7 +469,9 @@ class PasswordRefusedError(SynapseError):
errcode: str = Codes.WEAK_PASSWORD,
):
super().__init__(
code=400, msg=msg, errcode=errcode,
code=400,
msg=msg,
errcode=errcode,
)
@ -493,7 +494,7 @@ class RequestSendFailed(RuntimeError):
def cs_error(msg: str, code: str = Codes.UNKNOWN, **kwargs):
""" Utility method for constructing an error response for client-server
"""Utility method for constructing an error response for client-server
interactions.
Args:
@ -510,7 +511,7 @@ def cs_error(msg: str, code: str = Codes.UNKNOWN, **kwargs):
class FederationError(RuntimeError):
""" This class is used to inform remote homeservers about erroneous
"""This class is used to inform remote homeservers about erroneous
PDUs they sent us.
FATAL: The remote server could not interpret the source event.
@ -56,8 +56,7 @@ class UserPresenceState(
@classmethod
def default(cls, user_id):
"""Returns a default presence state.
"""
"""Returns a default presence state."""
return cls(
user_id=user_id,
state=PresenceState.OFFLINE,
@ -58,7 +58,7 @@ def register_sighup(func, *args, **kwargs):
def start_worker_reactor(appname, config, run_command=reactor.run):
""" Run the reactor in the main process
"""Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor. Pulls configuration from the 'worker' settings in 'config'.
@ -93,7 +93,7 @@ def start_reactor(
logger,
run_command=reactor.run,
):
""" Run the reactor in the main process
"""Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor
@ -313,9 +313,7 @@ async def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerCon
refresh_certificate(hs)
# Start the tracer
synapse.logging.opentracing.init_tracer( # type: ignore[attr-defined] # noqa
hs
)
synapse.logging.opentracing.init_tracer(hs) # type: ignore[attr-defined] # noqa
# It is now safe to start your Synapse.
hs.start_listening(listeners)
@ -370,8 +368,7 @@ def setup_sentry(hs):
def setup_sdnotify(hs):
"""Adds process state hooks to tell systemd what we are up to.
"""
"""Adds process state hooks to tell systemd what we are up to."""
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
@ -405,8 +402,7 @@ def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
class _LimitedHostnameResolver:
"""Wraps a IHostnameResolver, limiting the number of in-flight DNS lookups.
"""
"""Wraps a IHostnameResolver, limiting the number of in-flight DNS lookups."""
def __init__(self, resolver, max_dns_requests_in_flight):
self._resolver = resolver
@ -421,8 +421,7 @@ class GenericWorkerPresence(BasePresenceHandler):
]
async def set_state(self, target_user, state, ignore_status_msg=False):
"""Set the presence state of the user.
"""
"""Set the presence state of the user."""
presence = state["presence"]
valid_presence = (
@ -166,7 +166,10 @@ class ApplicationService:
@cached(num_args=1, cache_context=True)
async def matches_user_in_member_list(
self, room_id: str, store: "DataStore", cache_context: _CacheContext,
self,
room_id: str,
store: "DataStore",
cache_context: _CacheContext,
) -> bool:
"""Check if this service is interested a room based upon it's membership
@ -76,9 +76,6 @@ def _is_valid_3pe_result(r, field):
fields = r["fields"]
if not isinstance(fields, dict):
return False
for k in fields.keys():
if not isinstance(fields[k], str):
return False
return True
@ -230,7 +227,9 @@ class ApplicationServiceApi(SimpleHttpClient):
try:
await self.put_json(
uri=uri, json_body=body, args={"access_token": service.hs_token},
uri=uri,
json_body=body,
args={"access_token": service.hs_token},
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
@ -68,7 +68,7 @@ MAX_EPHEMERAL_EVENTS_PER_TRANSACTION = 100
class ApplicationServiceScheduler:
""" Public facing API for this module. Does the required DI to tie the
"""Public facing API for this module. Does the required DI to tie the
components together. This also serves as the "event_pool", which in this
case is a simple array.
"""
@ -224,7 +224,9 @@ class Config:
return self.read_templates([filename])[0]
def read_templates(
self, filenames: List[str], custom_template_directory: Optional[str] = None,
self,
filenames: List[str],
custom_template_directory: Optional[str] = None,
) -> List[jinja2.Template]:
"""Load a list of template files from disk using the given variables.
@ -264,7 +266,10 @@ class Config:
# TODO: switch to synapse.util.templates.build_jinja_env
loader = jinja2.FileSystemLoader(search_directories)
env = jinja2.Environment(loader=loader, autoescape=jinja2.select_autoescape(),)
env = jinja2.Environment(
loader=loader,
autoescape=jinja2.select_autoescape(),
)
# Update the environment with our custom filters
env.filters.update(
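For context on the `read_templates` hunk above: it builds a plain Jinja2 environment over the template search directories. A minimal standalone sketch of the same construction (the directory and template name here are hypothetical):

```python
import jinja2

# Load templates from disk; select_autoescape() decides escaping per file extension.
loader = jinja2.FileSystemLoader(["/etc/synapse/templates"])  # hypothetical path
env = jinja2.Environment(loader=loader, autoescape=jinja2.select_autoescape())

template = env.get_template("notice.html")  # hypothetical template name
print(template.render(server_name="example.com"))
```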
@ -825,8 +830,7 @@ class ShardedWorkerHandlingConfig:
instances = attr.ib(type=List[str])
def should_handle(self, instance_name: str, key: str) -> bool:
"""Whether this instance is responsible for handling the given key.
"""
"""Whether this instance is responsible for handling the given key."""
# If multiple instances are not defined we always return true
if not self.instances or len(self.instances) == 1:
return True
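`should_handle` is what maps a key (such as a user ID) onto one of the configured shard instances, for example when pushers are split across `pusher_instances`. The assignment algorithm itself is not shown in this hunk; a minimal sketch of the general technique, using a stable hash so every worker independently reaches the same answer, might look like:

```python
import hashlib
from typing import List

def should_handle(instances: List[str], instance_name: str, key: str) -> bool:
    # With zero or one instance configured, this process handles everything,
    # mirroring the early return shown above.
    if not instances or len(instances) == 1:
        return True
    # Sketch only: hash the key onto the instance list. Synapse's real
    # assignment algorithm is not shown here and may differ.
    digest = hashlib.sha256(key.encode("utf8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(instances)
    return instances[index] == instance_name
```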
|
@ -18,8 +18,7 @@ from ._base import Config
|
||||
|
||||
|
||||
class AuthConfig(Config):
|
||||
"""Password and login configuration
|
||||
"""
|
||||
"""Password and login configuration"""
|
||||
|
||||
section = "auth"
|
||||
|
||||
@ -38,7 +37,9 @@ class AuthConfig(Config):
|
||||
|
||||
# User-interactive authentication
|
||||
ui_auth = config.get("ui_auth") or {}
|
||||
self.ui_auth_session_timeout = ui_auth.get("session_timeout", 0)
|
||||
self.ui_auth_session_timeout = self.parse_duration(
|
||||
ui_auth.get("session_timeout", 0)
|
||||
)
|
||||
|
||||
def generate_config_section(self, config_dir_path, server_name, **kwargs):
|
||||
return """\
|
||||
@ -94,11 +95,11 @@ class AuthConfig(Config):
|
||||
#require_uppercase: true
|
||||
|
||||
ui_auth:
|
||||
# The number of milliseconds to allow a user-interactive authentication
|
||||
# session to be active.
|
||||
# The amount of time to allow a user-interactive authentication session
|
||||
# to be active.
|
||||
#
|
||||
# This defaults to 0, meaning the user is queried for their credentials
|
||||
# before every action, but this can be overridden to alow a single
|
||||
# before every action, but this can be overridden to allow a single
|
||||
# validation to be re-used. This weakens the protections afforded by
|
||||
# the user-interactive authentication process, by allowing for multiple
|
||||
# (and potentially different) operations to use the same validation session.
|
||||
@ -106,5 +107,5 @@ class AuthConfig(Config):
|
||||
# Uncomment below to allow for credential validation to last for 15
|
||||
# seconds.
|
||||
#
|
||||
#session_timeout: 15000
|
||||
#session_timeout: "15s"
|
||||
"""
|
||||
|
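Because `session_timeout` is now fed through `parse_duration`, both the old raw millisecond count and a suffixed string are accepted. A rough sketch of such a parser (Synapse's own `Config.parse_duration` is not shown in this diff and may differ in detail):

```python
def parse_duration(value) -> int:
    """Sketch: convert 15000, "15000" or "15s"-style values to milliseconds."""
    if isinstance(value, int):
        return value
    units = {"s": 1000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)

assert parse_duration("15s") == 15000
assert parse_duration(15000) == 15000
```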
@ -13,7 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, List
from synapse.config.sso import SsoAttributeRequirement
from ._base import Config, ConfigError
from ._util import validate_config
class CasConfig(Config):
@ -40,12 +45,16 @@ class CasConfig(Config):
# TODO Update this to a _synapse URL.
self.cas_service_url = public_baseurl + "_matrix/client/r0/login/cas/ticket"
self.cas_displayname_attribute = cas_config.get("displayname_attribute")
self.cas_required_attributes = cas_config.get("required_attributes") or {}
required_attributes = cas_config.get("required_attributes") or {}
self.cas_required_attributes = _parsed_required_attributes_def(
required_attributes
)
else:
self.cas_server_url = None
self.cas_service_url = None
self.cas_displayname_attribute = None
self.cas_required_attributes = {}
self.cas_required_attributes = []
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
@ -77,3 +86,22 @@ class CasConfig(Config):
# userGroup: "staff"
# department: None
"""
# CAS uses a legacy required attributes mapping, not the one provided by
# SsoAttributeRequirement.
REQUIRED_ATTRIBUTES_SCHEMA = {
"type": "object",
"additionalProperties": {"anyOf": [{"type": "string"}, {"type": "null"}]},
}
def _parsed_required_attributes_def(
required_attributes: Any,
) -> List[SsoAttributeRequirement]:
validate_config(
REQUIRED_ATTRIBUTES_SCHEMA,
required_attributes,
config_path=("cas_config", "required_attributes"),
)
return [SsoAttributeRequirement(k, v) for k, v in required_attributes.items()]
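To illustrate the conversion above: the legacy CAS mapping from the commented sample config becomes a list of `SsoAttributeRequirement`s, where a `None` value means the attribute merely has to be present:

```python
# Input mirroring the sample config above.
required_attributes = {"userGroup": "staff", "department": None}

parsed = _parsed_required_attributes_def(required_attributes)
# -> [SsoAttributeRequirement(attribute="userGroup", value="staff"),
#     SsoAttributeRequirement(attribute="department", value=None)]
```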
@ -207,8 +207,7 @@ class DatabaseConfig(Config):
)
def get_single_database(self) -> DatabaseConnectionConfig:
"""Returns the database if there is only one, useful for e.g. tests
"""
"""Returns the database if there is only one, useful for e.g. tests"""
if not self.databases:
raise Exception("More than one database exists")
@ -289,7 +289,8 @@ class EmailConfig(Config):
self.email_notif_template_html,
self.email_notif_template_text,
) = self.read_templates(
[notif_template_html, notif_template_text], template_dir,
[notif_template_html, notif_template_text],
template_dir,
)
self.email_notif_for_new_users = email_config.get(
@ -311,7 +312,8 @@ class EmailConfig(Config):
self.account_validity_template_html,
self.account_validity_template_text,
) = self.read_templates(
[expiry_template_html, expiry_template_text], template_dir,
[expiry_template_html, expiry_template_text],
template_dir,
)
subjects_config = email_config.get("subjects", {})
@ -162,7 +162,10 @@ class LoggingConfig(Config):
)
logging_group.add_argument(
"-f", "--log-file", dest="log_file", help=argparse.SUPPRESS,
"-f",
"--log-file",
dest="log_file",
help=argparse.SUPPRESS,
)
def generate_files(self, config, config_dir_path):
@ -201,9 +201,9 @@ class OIDCConfig(Config):
# user_mapping_provider:
# config:
# subject_claim: "id"
# localpart_template: "{{ user.login }}"
# display_name_template: "{{ user.name }}"
# email_template: "{{ user.email }}"
# localpart_template: "{{{{ user.login }}}}"
# display_name_template: "{{{{ user.name }}}}"
# email_template: "{{{{ user.email }}}}"
# For use with Keycloak
#
@ -230,8 +230,8 @@ class OIDCConfig(Config):
# user_mapping_provider:
# config:
# subject_claim: "id"
# localpart_template: "{{ user.login }}"
# display_name_template: "{{ user.name }}"
# localpart_template: "{{{{ user.login }}}}"
# display_name_template: "{{{{ user.name }}}}"
""".format(
mapping_provider=DEFAULT_USER_MAPPING_PROVIDER
)
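The doubled braces in these hunks exist because the whole config sample is passed through `str.format()` (note the `""".format(...)` closing the string above): `{{` is the escape for a literal `{`, so `{{{{ user.login }}}}` survives formatting as `{{ user.login }}`. A quick demonstration:

```python
# str.format treats {name} as a placeholder, so literal braces must be doubled.
sample = '# localpart_template: "{{{{ user.login }}}}"\n# module: {mapping_provider}'
print(sample.format(mapping_provider="DEFAULT"))
# # localpart_template: "{{ user.login }}"
# # module: DEFAULT
```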
@ -355,9 +355,10 @@ def _parse_oidc_config_dict(
ump_config.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER)
ump_config.setdefault("config", {})
(user_mapping_provider_class, user_mapping_provider_config,) = load_module(
ump_config, config_path + ("user_mapping_provider",)
)
(
user_mapping_provider_class,
user_mapping_provider_config,
) = load_module(ump_config, config_path + ("user_mapping_provider",))
# Ensure loaded user mapping module has defined all necessary methods
required_methods = [
@ -372,7 +373,11 @@ def _parse_oidc_config_dict(
if missing_methods:
raise ConfigError(
"Class %s is missing required "
"methods: %s" % (user_mapping_provider_class, ", ".join(missing_methods),),
"methods: %s"
% (
user_mapping_provider_class,
", ".join(missing_methods),
),
config_path + ("user_mapping_provider", "module"),
)
@ -391,6 +391,8 @@ class RegistrationConfig(Config):
# By default, any room aliases included in this list will be created
# as a publicly joinable room when the first user registers for the
# homeserver. This behaviour can be customised with the settings below.
# If the room already exists, make certain it is a publicly joinable
# room. The join rule of the room must be set to 'public'.
#
#auto_join_rooms:
# - "#example:example.com"
@ -17,9 +17,7 @@ import os
from collections import namedtuple
from typing import Dict, List
from netaddr import IPSet
from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST
from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST, generate_ip_set
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module
@ -54,7 +52,7 @@ MediaStorageProviderConfig = namedtuple(
def parse_thumbnail_requirements(thumbnail_sizes):
""" Takes a list of dictionaries with "width", "height", and "method" keys
"""Takes a list of dictionaries with "width", "height", and "method" keys
and creates a map from image media types to the thumbnail size, thumbnailing
method, and thumbnail media type to precalculate
@ -188,16 +186,17 @@ class ContentRepositoryConfig(Config):
"to work"
)
self.url_preview_ip_range_blacklist = IPSet(
config["url_preview_ip_range_blacklist"]
)
# we always blacklist '0.0.0.0' and '::', which are supposed to be
# unroutable addresses.
self.url_preview_ip_range_blacklist.update(["0.0.0.0", "::"])
self.url_preview_ip_range_blacklist = generate_ip_set(
config["url_preview_ip_range_blacklist"],
["0.0.0.0", "::"],
config_path=("url_preview_ip_range_blacklist",),
)
self.url_preview_ip_range_whitelist = IPSet(
config.get("url_preview_ip_range_whitelist", ())
self.url_preview_ip_range_whitelist = generate_ip_set(
config.get("url_preview_ip_range_whitelist", ()),
config_path=("url_preview_ip_range_whitelist",),
)
self.url_preview_url_blacklist = config.get("url_preview_url_blacklist", ())
@ -123,7 +123,7 @@ class RoomDirectoryConfig(Config):
alias (str)
Returns:
boolean: True if user is allowed to crate the alias
boolean: True if user is allowed to create the alias
"""
for rule in self._alias_creation_rules:
if rule.matches(user_id, room_id, [alias]):
@ -17,8 +17,7 @@
import logging
from typing import Any, List
import attr
from synapse.config.sso import SsoAttributeRequirement
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module, load_python_module
@ -398,32 +397,18 @@ class SAML2Config(Config):
}
@attr.s(frozen=True)
class SamlAttributeRequirement:
"""Object describing a single requirement for SAML attributes."""
attribute = attr.ib(type=str)
value = attr.ib(type=str)
JSON_SCHEMA = {
"type": "object",
"properties": {"attribute": {"type": "string"}, "value": {"type": "string"}},
"required": ["attribute", "value"],
}
ATTRIBUTE_REQUIREMENTS_SCHEMA = {
"type": "array",
"items": SamlAttributeRequirement.JSON_SCHEMA,
"items": SsoAttributeRequirement.JSON_SCHEMA,
}
def _parse_attribute_requirements_def(
attribute_requirements: Any,
) -> List[SamlAttributeRequirement]:
) -> List[SsoAttributeRequirement]:
validate_config(
ATTRIBUTE_REQUIREMENTS_SCHEMA,
attribute_requirements,
config_path=["saml2_config", "attribute_requirements"],
config_path=("saml2_config", "attribute_requirements"),
)
return [SamlAttributeRequirement(**x) for x in attribute_requirements]
return [SsoAttributeRequirement(**x) for x in attribute_requirements]
@ -15,6 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import itertools
import logging
import os.path
import re
@ -23,7 +24,7 @@ from typing import Any, Dict, Iterable, List, Optional, Set
import attr
import yaml
from netaddr import IPSet
from netaddr import AddrFormatError, IPNetwork, IPSet
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.util.stringutils import parse_and_validate_server_name
@ -40,6 +41,71 @@ logger = logging.Logger(__name__)
# in the list.
DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
def _6to4(network: IPNetwork) -> IPNetwork:
"""Convert an IPv4 network into a 6to4 IPv6 network per RFC 3056."""
# 6to4 networks consist of:
# * 2002 as the first 16 bits
# * The first IPv4 address in the network hex-encoded as the next 32 bits
# * The new prefix length needs to include the bits from the 2002 prefix.
hex_network = hex(network.first)[2:]
hex_network = ("0" * (8 - len(hex_network))) + hex_network
return IPNetwork(
"2002:%s:%s::/%d"
% (
hex_network[:4],
hex_network[4:],
16 + network.prefixlen,
)
)
def generate_ip_set(
ip_addresses: Optional[Iterable[str]],
extra_addresses: Optional[Iterable[str]] = None,
config_path: Optional[Iterable[str]] = None,
) -> IPSet:
"""
Generate an IPSet from a list of IP addresses or CIDRs.
Additionally, for each IPv4 network in the list of IP addresses, also
includes the corresponding IPv6 networks.
This includes:
* IPv4-Compatible IPv6 Address (see RFC 4291, section 2.5.5.1)
* IPv4-Mapped IPv6 Address (see RFC 4291, section 2.5.5.2)
* 6to4 Address (see RFC 3056, section 2)
Args:
ip_addresses: An iterable of IP addresses or CIDRs.
extra_addresses: An iterable of IP addresses or CIDRs.
config_path: The path in the configuration for error messages.
Returns:
A new IP set.
"""
result = IPSet()
for ip in itertools.chain(ip_addresses or (), extra_addresses or ()):
try:
network = IPNetwork(ip)
except AddrFormatError as e:
raise ConfigError(
"Invalid IP range provided: %s." % (ip,), config_path
) from e
result.add(network)
# It is possible that these already exist in the set, but that's OK.
if ":" not in str(network):
result.add(IPNetwork(network).ipv6(ipv4_compatible=True))
result.add(IPNetwork(network).ipv6(ipv4_compatible=False))
result.add(_6to4(network))
return result
# IP ranges that are considered private / unroutable / don't make sense.
DEFAULT_IP_RANGE_BLACKLIST = [
# Localhost
"127.0.0.0/8",
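A worked example of what the code above produces: each IPv4 entry in a range list is joined by its IPv4-compatible, IPv4-mapped, and 6to4 IPv6 forms, so the same hosts cannot be reached simply by rewriting an address in IPv6 notation. Following `_6to4` for `192.168.0.0/16`:

```python
from netaddr import IPNetwork

network = IPNetwork("192.168.0.0/16")
# hex(network.first) == "0xc0a80000", so per _6to4 the 6to4 range is
# 2002:c0a8:0000::/32 (prefix 16 + 16).
hexed = hex(network.first)[2:].zfill(8)
print(IPNetwork("2002:%s:%s::/%d" % (hexed[:4], hexed[4:], 16 + network.prefixlen)))
# 2002:c0a8::/32

# generate_ip_set additionally adds the mapped/compatible forms:
print(network.ipv6(ipv4_compatible=False))  # ::ffff:192.168.0.0/112
print(network.ipv6(ipv4_compatible=True))   # ::192.168.0.0/112
```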
@ -53,6 +119,8 @@ DEFAULT_IP_RANGE_BLACKLIST = [
"192.0.0.0/24",
# Link-local networks.
"169.254.0.0/16",
# Formerly used for 6to4 relay.
"192.88.99.0/24",
# Testing networks.
"198.18.0.0/15",
"192.0.2.0/24",
@ -66,6 +134,12 @@ DEFAULT_IP_RANGE_BLACKLIST = [
"fe80::/10",
# Unique local addresses.
"fc00::/7",
# Testing networks.
"2001:db8::/32",
# Multicast.
"ff00::/8",
# Site-local addresses
"fec0::/10",
]
DEFAULT_ROOM_VERSION = "6"
@ -185,7 +259,8 @@ class ServerConfig(Config):
# Whether to require sharing a room with a user to retrieve their
# profile data
self.limit_profile_requests_to_users_who_share_rooms = config.get(
"limit_profile_requests_to_users_who_share_rooms", False,
"limit_profile_requests_to_users_who_share_rooms",
False,
)
if "restrict_public_rooms_to_local_users" in config and (
@ -290,17 +365,15 @@ class ServerConfig(Config):
)
# Attempt to create an IPSet from the given ranges
try:
self.ip_range_blacklist = IPSet(ip_range_blacklist)
except Exception as e:
raise ConfigError("Invalid range(s) provided in ip_range_blacklist.") from e
# Always blacklist 0.0.0.0, ::
self.ip_range_blacklist.update(["0.0.0.0", "::"])
try:
self.ip_range_whitelist = IPSet(config.get("ip_range_whitelist", ()))
except Exception as e:
raise ConfigError("Invalid range(s) provided in ip_range_whitelist.") from e
self.ip_range_blacklist = generate_ip_set(
ip_range_blacklist, ["0.0.0.0", "::"], config_path=("ip_range_blacklist",)
)
self.ip_range_whitelist = generate_ip_set(
config.get("ip_range_whitelist", ()), config_path=("ip_range_whitelist",)
)
# The federation_ip_range_blacklist is used for backwards-compatibility
# and only applies to federation and identity servers. If it is not given,
@ -308,14 +381,12 @@ class ServerConfig(Config):
federation_ip_range_blacklist = config.get(
"federation_ip_range_blacklist", ip_range_blacklist
)
try:
self.federation_ip_range_blacklist = IPSet(federation_ip_range_blacklist)
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in federation_ip_range_blacklist."
) from e
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
self.federation_ip_range_blacklist = generate_ip_set(
federation_ip_range_blacklist,
["0.0.0.0", "::"],
config_path=("federation_ip_range_blacklist",),
)
if self.public_baseurl is not None:
if self.public_baseurl[-1] != "/":
@ -549,7 +620,9 @@ class ServerConfig(Config):
if manhole:
self.listeners.append(
ListenerConfig(
port=manhole, bind_addresses=["127.0.0.1"], type="manhole",
port=manhole,
bind_addresses=["127.0.0.1"],
type="manhole",
)
)
@ -585,7 +658,8 @@ class ServerConfig(Config):
# and letting the client know which email address is bound to an account and
# which one isn't.
self.request_token_inhibit_3pid_errors = config.get(
"request_token_inhibit_3pid_errors", False,
"request_token_inhibit_3pid_errors",
False,
)
# List of users trialing the new experimental default push rules. This setting is
@ -12,14 +12,30 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Dict
from typing import Any, Dict, Optional
import attr
from ._base import Config
@attr.s(frozen=True)
class SsoAttributeRequirement:
"""Object describing a single requirement for SSO attributes."""
attribute = attr.ib(type=str)
# If a value is not given, than the attribute must simply exist.
value = attr.ib(type=Optional[str])
JSON_SCHEMA = {
"type": "object",
"properties": {"attribute": {"type": "string"}, "value": {"type": "string"}},
"required": ["attribute", "value"],
}
class SSOConfig(Config):
"""SSO Configuration
"""
"""SSO Configuration"""
section = "sso"
@ -33,8 +33,7 @@ def _instance_to_list_converter(obj: Union[str, List[str]]) -> List[str]:
@attr.s
class InstanceLocationConfig:
"""The host and port to talk to an instance via HTTP replication.
"""
"""The host and port to talk to an instance via HTTP replication."""
host = attr.ib(type=str)
port = attr.ib(type=int)
@ -54,13 +53,19 @@ class WriterLocations:
)
typing = attr.ib(default="master", type=str)
to_device = attr.ib(
default=["master"], type=List[str], converter=_instance_to_list_converter,
default=["master"],
type=List[str],
converter=_instance_to_list_converter,
)
account_data = attr.ib(
default=["master"], type=List[str], converter=_instance_to_list_converter,
default=["master"],
type=List[str],
converter=_instance_to_list_converter,
)
receipts = attr.ib(
default=["master"], type=List[str], converter=_instance_to_list_converter,
default=["master"],
type=List[str],
converter=_instance_to_list_converter,
)
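A note on the `WriterLocations` hunk above: each stream accepts either one instance name or a list, normalised by `_instance_to_list_converter`. The attrs converter pattern in isolation (a standalone sketch, not Synapse's full class):

```python
from typing import List, Union
import attr

def _to_list(obj: Union[str, List[str]]) -> List[str]:
    # Accept a bare string in the config and normalise it to a one-element list.
    return [obj] if isinstance(obj, str) else obj

@attr.s
class Writers:
    to_device = attr.ib(default=["master"], type=List[str], converter=_to_list)

assert Writers(to_device="worker1").to_device == ["worker1"]
assert Writers().to_device == ["master"]
```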
@ -107,7 +112,9 @@ class WorkerConfig(Config):
if manhole:
self.worker_listeners.append(
ListenerConfig(
port=manhole, bind_addresses=["127.0.0.1"], type="manhole",
port=manhole,
bind_addresses=["127.0.0.1"],
type="manhole",
)
)
@ -42,7 +42,7 @@ def check(
do_sig_check: bool = True,
do_size_check: bool = True,
) -> None:
""" Checks if this event is correctly authed.
"""Checks if this event is correctly authed.
Args:
room_version_obj: the version of the room
@ -423,7 +423,9 @@ def _can_send_event(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
def check_redaction(
room_version_obj: RoomVersion, event: EventBase, auth_events: StateMap[EventBase],
room_version_obj: RoomVersion,
event: EventBase,
auth_events: StateMap[EventBase],
) -> bool:
"""Check whether the event sender is allowed to redact the target event.
@ -459,7 +461,9 @@ def check_redaction(
def _check_power_levels(
room_version_obj: RoomVersion, event: EventBase, auth_events: StateMap[EventBase],
room_version_obj: RoomVersion,
event: EventBase,
auth_events: StateMap[EventBase],
) -> None:
user_list = event.content.get("users", {})
# Validate users
@ -98,7 +98,9 @@ class EventBuilder:
return self._state_key is not None
async def build(
self, prev_event_ids: List[str], auth_event_ids: Optional[List[str]],
self,
prev_event_ids: List[str],
auth_event_ids: Optional[List[str]],
) -> EventBase:
"""Transform into a fully signed and hashed event
@ -341,8 +341,7 @@ def _encode_state_dict(state_dict):
def _decode_state_dict(input):
"""Decodes a state dict encoded using `_encode_state_dict` above
"""
"""Decodes a state dict encoded using `_encode_state_dict` above"""
if input is None:
return None
@ -17,6 +17,8 @@
import inspect
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from synapse.rest.media.v1._base import FileInfo
from synapse.rest.media.v1.media_storage import ReadableFileWrapper
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.types import Collection
from synapse.util.async_helpers import maybe_awaitable
@ -214,3 +216,48 @@ class SpamChecker:
return behaviour
return RegistrationBehaviour.ALLOW
async def check_media_file_for_spam(
self, file_wrapper: ReadableFileWrapper, file_info: FileInfo
) -> bool:
"""Checks if a piece of newly uploaded media should be blocked.
This will be called for local uploads, downloads of remote media, each
thumbnail generated for those, and web pages/images used for URL
previews.
Note that care should be taken to not do blocking IO operations in the
main thread. For example, to get the contents of a file a module
should do::
    async def check_media_file_for_spam(
        self, file: ReadableFileWrapper, file_info: FileInfo
    ) -> bool:
        buffer = BytesIO()
        await file.write_chunks_to(buffer.write)
        if buffer.getvalue() == b"Hello World":
            return True
        return False
Args:
file: An object that allows reading the contents of the media.
file_info: Metadata about the file.
Returns:
True if the media should be blocked or False if it should be
allowed.
"""
for spam_checker in self.spam_checkers:
# For backwards compatibility, only run if the method exists on the
# spam checker
checker = getattr(spam_checker, "check_media_file_for_spam", None)
if checker:
spam = await maybe_awaitable(checker(file_wrapper, file_info))
if spam:
return True
return False
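Putting the new hook together: a spam-checker module only has to define `check_media_file_for_spam`; it may be synchronous or asynchronous, since the caller wraps it in `maybe_awaitable`. A sketch of a module using the buffering pattern from the docstring above (the banned byte string and constructor signature are illustrative assumptions):

```python
from io import BytesIO

class ExampleMediaChecker:
    def __init__(self, config):
        self._banned = b"EICAR"  # hypothetical marker to block

    async def check_media_file_for_spam(self, file_wrapper, file_info) -> bool:
        buffer = BytesIO()
        # Stream the media into memory, as the docstring above recommends.
        await file_wrapper.write_chunks_to(buffer.write)
        # True blocks the upload/download; False allows it.
        return self._banned in buffer.getvalue()
```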
@ -40,7 +40,8 @@ class ThirdPartyEventRules:
if module is not None:
self.third_party_rules = module(
config=config, module_api=hs.get_module_api(),
config=config,
module_api=hs.get_module_api(),
)
async def check_event_allowed(
@ -34,7 +34,7 @@ SPLIT_FIELD_REGEX = re.compile(r"(?<!\\)\.")
def prune_event(event: EventBase) -> EventBase:
""" Returns a pruned version of the given event, which removes all keys we
"""Returns a pruned version of the given event, which removes all keys we
don't know about or think could potentially be dodgy.
This is used when we "redact" an event. We want to remove all fields that
@ -750,7 +750,11 @@ class FederationClient(FederationBase):
return resp[1]
async def send_invite(
self, destination: str, room_id: str, event_id: str, pdu: EventBase,
self,
destination: str,
room_id: str,
event_id: str,
pdu: EventBase,
) -> EventBase:
room_version = await self.store.get_room_version(room_id)
@ -85,7 +85,8 @@ received_queries_counter = Counter(
)
pdu_process_time = Histogram(
"synapse_federation_server_pdu_process_time", "Time taken to process an event",
"synapse_federation_server_pdu_process_time",
"Time taken to process an event",
)
@ -204,7 +205,7 @@ class FederationServer(FederationBase):
async def _handle_incoming_transaction(
self, origin: str, transaction: Transaction, request_time: int
) -> Tuple[int, Dict[str, Any]]:
""" Process an incoming transaction and return the HTTP response
"""Process an incoming transaction and return the HTTP response
Args:
origin: the server making the request
@ -373,8 +374,7 @@ class FederationServer(FederationBase):
return pdu_results
async def _handle_edus_in_txn(self, origin: str, transaction: Transaction):
"""Process the EDUs in a received transaction.
"""
"""Process the EDUs in a received transaction."""
async def _process_edu(edu_dict):
received_edus_counter.inc()
@ -437,7 +437,10 @@ class FederationServer(FederationBase):
raise AuthError(403, "Host not in room.")
resp = await self._state_ids_resp_cache.wrap(
(room_id, event_id), self._on_state_ids_request_compute, room_id, event_id,
(room_id, event_id),
self._on_state_ids_request_compute,
room_id,
event_id,
)
return 200, resp
@ -679,7 +682,7 @@ class FederationServer(FederationBase):
)
async def _handle_received_pdu(self, origin: str, pdu: EventBase) -> None:
""" Process a PDU received in a federation /send/ transaction.
"""Process a PDU received in a federation /send/ transaction.
If the event is invalid, then this method throws a FederationError.
(The error will then be logged and sent back to the sender (which
@ -906,13 +909,11 @@ class FederationHandlerRegistry:
self.query_handlers[query_type] = handler
def register_instance_for_edu(self, edu_type: str, instance_name: str):
"""Register that the EDU handler is on a different instance than master.
"""
"""Register that the EDU handler is on a different instance than master."""
self._edu_type_to_instance[edu_type] = [instance_name]
def register_instances_for_edu(self, edu_type: str, instance_names: List[str]):
"""Register that the EDU handler is on multiple instances.
"""
"""Register that the EDU handler is on multiple instances."""
self._edu_type_to_instance[edu_type] = instance_names
async def on_edu(self, edu_type: str, origin: str, content: dict):
@ -30,8 +30,7 @@ logger = logging.getLogger(__name__)
class TransactionActions:
""" Defines persistence actions that relate to handling Transactions.
"""
"""Defines persistence actions that relate to handling Transactions."""
def __init__(self, datastore):
self.store = datastore
@ -57,8 +56,7 @@ class TransactionActions:
async def set_response(
self, origin: str, transaction: Transaction, code: int, response: JsonDict
) -> None:
"""Persist how we responded to a transaction.
"""
"""Persist how we responded to a transaction."""
transaction_id = transaction.transaction_id # type: ignore
if not transaction_id:
raise RuntimeError("Cannot persist a transaction with no transaction_id")
@ -468,8 +468,7 @@ class KeyedEduRow(
class EduRow(BaseFederationRow, namedtuple("EduRow", ("edu",))): # Edu
"""Streams EDUs that don't have keys. See KeyedEduRow
"""
"""Streams EDUs that don't have keys. See KeyedEduRow"""
TypeId = "e"
@ -519,7 +518,10 @@ def process_rows_for_federation(transaction_queue, rows):
# them into the appropriate collection and then send them off.
buff = ParsedFederationStreamData(
presence=[], presence_destinations=[], keyed_edus={}, edus={},
presence=[],
presence_destinations=[],
keyed_edus={},
edus={},
)
# Parse the rows in the stream and add to the buffer
@ -328,7 +328,9 @@ class FederationSender:
# to allow us to perform catch-up later on if the remote is unreachable
# for a while.
await self.store.store_destination_rooms_entries(
destinations, pdu.room_id, pdu.internal_metadata.stream_ordering,
destinations,
pdu.room_id,
pdu.internal_metadata.stream_ordering,
)
for destination in destinations:
@ -475,7 +477,7 @@ class FederationSender:
self, states: List[UserPresenceState], destinations: List[str]
) -> None:
"""Send the given presence states to the given destinations.
destinations (list[str])
destinations (list[str])
"""
if not states or not self.hs.config.use_presence:
@ -616,8 +618,8 @@ class FederationSender:
last_processed = None # type: Optional[str]
while True:
destinations_to_wake = await self.store.get_catch_up_outstanding_destinations(
last_processed
destinations_to_wake = (
await self.store.get_catch_up_outstanding_destinations(last_processed)
)
if not destinations_to_wake:
@ -85,7 +85,8 @@ class PerDestinationQueue:
# processing. We have a guard in `attempt_new_transaction` that
# ensure we don't start sending stuff.
logger.error(
"Create a per destination queue for %s on wrong worker", destination,
"Create a per destination queue for %s on wrong worker",
destination,
)
self._should_send_on_this_instance = False
@ -440,8 +441,10 @@ class PerDestinationQueue:
if first_catch_up_check:
# first catchup so get last_successful_stream_ordering from database
self._last_successful_stream_ordering = await self._store.get_destination_last_successful_stream_ordering(
self._destination
self._last_successful_stream_ordering = (
await self._store.get_destination_last_successful_stream_ordering(
self._destination
)
)
if self._last_successful_stream_ordering is None:
@ -457,7 +460,8 @@ class PerDestinationQueue:
# get at most 50 catchup room/PDUs
while True:
event_ids = await self._store.get_catch_up_room_event_ids(
self._destination, self._last_successful_stream_ordering,
self._destination,
self._last_successful_stream_ordering,
)
if not event_ids:
@ -65,7 +65,10 @@ class TransactionManager:
@measure_func("_send_new_transaction")
async def send_new_transaction(
self, destination: str, pdus: List[EventBase], edus: List[Edu],
self,
destination: str,
pdus: List[EventBase],
edus: List[Edu],
) -> bool:
"""
Args:
@ -39,7 +39,7 @@ class TransportLayerClient:
@log_function
def get_room_state_ids(self, destination, room_id, event_id):
""" Requests all state for a given room from the given server at the
"""Requests all state for a given room from the given server at the
given event. Returns the state's event_id's
Args:
@ -63,7 +63,7 @@ class TransportLayerClient:
@log_function
def get_event(self, destination, event_id, timeout=None):
""" Requests the pdu with give id and origin from the given server.
"""Requests the pdu with give id and origin from the given server.
Args:
destination (str): The host name of the remote homeserver we want
@ -84,7 +84,7 @@ class TransportLayerClient:
@log_function
def backfill(self, destination, room_id, event_tuples, limit):
""" Requests `limit` previous PDUs in a given context before list of
"""Requests `limit` previous PDUs in a given context before list of
PDUs.
Args:
@ -118,7 +118,7 @@ class TransportLayerClient:
@log_function
async def send_transaction(self, transaction, json_data_callback=None):
""" Sends the given Transaction to its destination
"""Sends the given Transaction to its destination
Args:
transaction (Transaction)
@ -551,8 +551,7 @@ class TransportLayerClient:
@log_function
def get_group_profile(self, destination, group_id, requester_user_id):
"""Get a group profile
"""
"""Get a group profile"""
path = _create_v1_path("/groups/%s/profile", group_id)
return self.client.get_json(
@ -584,8 +583,7 @@ class TransportLayerClient:
@log_function
def get_group_summary(self, destination, group_id, requester_user_id):
"""Get a group summary
"""
"""Get a group summary"""
path = _create_v1_path("/groups/%s/summary", group_id)
return self.client.get_json(
@ -597,8 +595,7 @@ class TransportLayerClient:
@log_function
def get_rooms_in_group(self, destination, group_id, requester_user_id):
"""Get all rooms in a group
"""
"""Get all rooms in a group"""
path = _create_v1_path("/groups/%s/rooms", group_id)
return self.client.get_json(
@ -611,8 +608,7 @@ class TransportLayerClient:
def add_room_to_group(
self, destination, group_id, requester_user_id, room_id, content
):
"""Add a room to a group
"""
"""Add a room to a group"""
path = _create_v1_path("/groups/%s/room/%s", group_id, room_id)
return self.client.post_json(
@ -626,8 +622,7 @@ class TransportLayerClient:
def update_room_in_group(
self, destination, group_id, requester_user_id, room_id, config_key, content
):
"""Update room in group
"""
"""Update room in group"""
path = _create_v1_path(
"/groups/%s/room/%s/config/%s", group_id, room_id, config_key
)
@ -641,8 +636,7 @@ class TransportLayerClient:
)
def remove_room_from_group(self, destination, group_id, requester_user_id, room_id):
"""Remove a room from a group
"""
"""Remove a room from a group"""
path = _create_v1_path("/groups/%s/room/%s", group_id, room_id)
return self.client.delete_json(
@ -654,8 +648,7 @@ class TransportLayerClient:
@log_function
def get_users_in_group(self, destination, group_id, requester_user_id):
"""Get users in a group
"""
"""Get users in a group"""
path = _create_v1_path("/groups/%s/users", group_id)
return self.client.get_json(
@ -667,8 +660,7 @@ class TransportLayerClient:
@log_function
def get_invited_users_in_group(self, destination, group_id, requester_user_id):
"""Get users that have been invited to a group
"""
"""Get users that have been invited to a group"""
path = _create_v1_path("/groups/%s/invited_users", group_id)
return self.client.get_json(
@ -680,8 +672,7 @@ class TransportLayerClient:
@log_function
def accept_group_invite(self, destination, group_id, user_id, content):
"""Accept a group invite
"""
"""Accept a group invite"""
path = _create_v1_path("/groups/%s/users/%s/accept_invite", group_id, user_id)
return self.client.post_json(
@ -690,8 +681,7 @@ class TransportLayerClient:
@log_function
def join_group(self, destination, group_id, user_id, content):
"""Attempts to join a group
"""
"""Attempts to join a group"""
path = _create_v1_path("/groups/%s/users/%s/join", group_id, user_id)
return self.client.post_json(
@ -702,8 +692,7 @@ class TransportLayerClient:
def invite_to_group(
self, destination, group_id, user_id, requester_user_id, content
):
"""Invite a user to a group
"""
"""Invite a user to a group"""
path = _create_v1_path("/groups/%s/users/%s/invite", group_id, user_id)
return self.client.post_json(
@ -730,8 +719,7 @@ class TransportLayerClient:
def remove_user_from_group(
self, destination, group_id, requester_user_id, user_id, content
):
"""Remove a user from a group
"""
"""Remove a user from a group"""
path = _create_v1_path("/groups/%s/users/%s/remove", group_id, user_id)
return self.client.post_json(
@ -772,8 +760,7 @@ class TransportLayerClient:
def update_group_summary_room(
self, destination, group_id, user_id, room_id, category_id, content
):
"""Update a room entry in a group summary
"""
"""Update a room entry in a group summary"""
if category_id:
path = _create_v1_path(
"/groups/%s/summary/categories/%s/rooms/%s",
@ -796,8 +783,7 @@ class TransportLayerClient:
def delete_group_summary_room(
self, destination, group_id, user_id, room_id, category_id
):
"""Delete a room entry in a group summary
"""
"""Delete a room entry in a group summary"""
if category_id:
path = _create_v1_path(
"/groups/%s/summary/categories/%s/rooms/%s",
@ -817,8 +803,7 @@ class TransportLayerClient:
@log_function
def get_group_categories(self, destination, group_id, requester_user_id):
"""Get all categories in a group
"""
"""Get all categories in a group"""
path = _create_v1_path("/groups/%s/categories", group_id)
return self.client.get_json(
@ -830,8 +815,7 @@ class TransportLayerClient:
@log_function
def get_group_category(self, destination, group_id, requester_user_id, category_id):
"""Get category info in a group
"""
"""Get category info in a group"""
path = _create_v1_path("/groups/%s/categories/%s", group_id, category_id)
return self.client.get_json(
@ -845,8 +829,7 @@ class TransportLayerClient:
def update_group_category(
self, destination, group_id, requester_user_id, category_id, content
):
"""Update a category in a group
"""
"""Update a category in a group"""
path = _create_v1_path("/groups/%s/categories/%s", group_id, category_id)
return self.client.post_json(
@ -861,8 +844,7 @@ class TransportLayerClient:
def delete_group_category(
self, destination, group_id, requester_user_id, category_id
):
"""Delete a category in a group
"""
"""Delete a category in a group"""
path = _create_v1_path("/groups/%s/categories/%s", group_id, category_id)
return self.client.delete_json(
@ -874,8 +856,7 @@ class TransportLayerClient:
@log_function
def get_group_roles(self, destination, group_id, requester_user_id):
"""Get all roles in a group
"""
"""Get all roles in a group"""
path = _create_v1_path("/groups/%s/roles", group_id)
return self.client.get_json(
@ -887,8 +868,7 @@ class TransportLayerClient:
@log_function
def get_group_role(self, destination, group_id, requester_user_id, role_id):
"""Get a roles info
"""
"""Get a roles info"""
path = _create_v1_path("/groups/%s/roles/%s", group_id, role_id)
return self.client.get_json(
@ -902,8 +882,7 @@ class TransportLayerClient:
def update_group_role(
self, destination, group_id, requester_user_id, role_id, content
):
"""Update a role in a group
"""
"""Update a role in a group"""
path = _create_v1_path("/groups/%s/roles/%s", group_id, role_id)
return self.client.post_json(
@ -916,8 +895,7 @@ class TransportLayerClient:
@log_function
def delete_group_role(self, destination, group_id, requester_user_id, role_id):
"""Delete a role in a group
"""
"""Delete a role in a group"""
path = _create_v1_path("/groups/%s/roles/%s", group_id, role_id)
return self.client.delete_json(
@ -931,8 +909,7 @@ class TransportLayerClient:
def update_group_summary_user(
self, destination, group_id, requester_user_id, user_id, role_id, content
):
"""Update a users entry in a group
"""
"""Update a users entry in a group"""
if role_id:
path = _create_v1_path(
"/groups/%s/summary/roles/%s/users/%s", group_id, role_id, user_id
@ -950,8 +927,7 @@ class TransportLayerClient:
@log_function
def set_group_join_policy(self, destination, group_id, requester_user_id, content):
"""Sets the join policy for a group
"""
"""Sets the join policy for a group"""
path = _create_v1_path("/groups/%s/settings/m.join_policy", group_id)
return self.client.put_json(
@ -966,8 +942,7 @@ class TransportLayerClient:
def delete_group_summary_user(
self, destination, group_id, requester_user_id, user_id, role_id
):
"""Delete a users entry in a group
"""
"""Delete a users entry in a group"""
if role_id:
path = _create_v1_path(
"/groups/%s/summary/roles/%s/users/%s", group_id, role_id, user_id
@ -983,8 +958,7 @@ class TransportLayerClient:
)
def bulk_get_publicised_groups(self, destination, user_ids):
"""Get the groups a list of users are publicising
"""
"""Get the groups a list of users are publicising"""
path = _create_v1_path("/get_groups_publicised")
@ -21,6 +21,7 @@ import re
from typing import Optional, Tuple, Type

import synapse
from synapse.api.constants import MAX_GROUP_CATEGORYID_LENGTH, MAX_GROUP_ROLEID_LENGTH
from synapse.api.errors import Codes, FederationDeniedError, SynapseError
from synapse.api.room_versions import RoomVersions
from synapse.api.urls import (
@ -364,7 +365,10 @@ class BaseFederationServlet:
continue

server.register_paths(
method, (pattern,), self._wrap(code), self.__class__.__name__,
method,
(pattern,),
self._wrap(code),
self.__class__.__name__,
)


@ -381,7 +385,7 @@ class FederationSendServlet(BaseFederationServlet):

# This is when someone is trying to send us a bunch of data.
async def on_PUT(self, origin, content, query, transaction_id):
""" Called on PUT /send/<transaction_id>/
"""Called on PUT /send/<transaction_id>/

Args:
request (twisted.web.http.Request): The HTTP request.
@ -855,8 +859,7 @@ class FederationVersionServlet(BaseFederationServlet):


class FederationGroupsProfileServlet(BaseFederationServlet):
"""Get/set the basic profile of a group on behalf of a user
"""
"""Get/set the basic profile of a group on behalf of a user"""

PATH = "/groups/(?P<group_id>[^/]*)/profile"

@ -895,8 +898,7 @@ class FederationGroupsSummaryServlet(BaseFederationServlet):


class FederationGroupsRoomsServlet(BaseFederationServlet):
"""Get the rooms in a group on behalf of a user
"""
"""Get the rooms in a group on behalf of a user"""

PATH = "/groups/(?P<group_id>[^/]*)/rooms"

@ -911,8 +913,7 @@ class FederationGroupsRoomsServlet(BaseFederationServlet):


class FederationGroupsAddRoomsServlet(BaseFederationServlet):
"""Add/remove room from group
"""
"""Add/remove room from group"""

PATH = "/groups/(?P<group_id>[^/]*)/room/(?P<room_id>[^/]*)"

@ -940,8 +941,7 @@ class FederationGroupsAddRoomsServlet(BaseFederationServlet):


class FederationGroupsAddRoomsConfigServlet(BaseFederationServlet):
"""Update room config in group
"""
"""Update room config in group"""

PATH = (
"/groups/(?P<group_id>[^/]*)/room/(?P<room_id>[^/]*)"
@ -961,8 +961,7 @@ class FederationGroupsAddRoomsConfigServlet(BaseFederationServlet):


class FederationGroupsUsersServlet(BaseFederationServlet):
"""Get the users in a group on behalf of a user
"""
"""Get the users in a group on behalf of a user"""

PATH = "/groups/(?P<group_id>[^/]*)/users"

@ -977,8 +976,7 @@ class FederationGroupsUsersServlet(BaseFederationServlet):


class FederationGroupsInvitedUsersServlet(BaseFederationServlet):
"""Get the users that have been invited to a group
"""
"""Get the users that have been invited to a group"""

PATH = "/groups/(?P<group_id>[^/]*)/invited_users"

@ -995,8 +993,7 @@ class FederationGroupsInvitedUsersServlet(BaseFederationServlet):


class FederationGroupsInviteServlet(BaseFederationServlet):
"""Ask a group server to invite someone to the group
"""
"""Ask a group server to invite someone to the group"""

PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/invite"

@ -1013,8 +1010,7 @@ class FederationGroupsInviteServlet(BaseFederationServlet):


class FederationGroupsAcceptInviteServlet(BaseFederationServlet):
"""Accept an invitation from the group server
"""
"""Accept an invitation from the group server"""

PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/accept_invite"

@ -1028,8 +1024,7 @@ class FederationGroupsAcceptInviteServlet(BaseFederationServlet):


class FederationGroupsJoinServlet(BaseFederationServlet):
"""Attempt to join a group
"""
"""Attempt to join a group"""

PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/join"

@ -1043,8 +1038,7 @@ class FederationGroupsJoinServlet(BaseFederationServlet):


class FederationGroupsRemoveUserServlet(BaseFederationServlet):
"""Leave or kick a user from the group
"""
"""Leave or kick a user from the group"""

PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/remove"

@ -1061,8 +1055,7 @@ class FederationGroupsRemoveUserServlet(BaseFederationServlet):


class FederationGroupsLocalInviteServlet(BaseFederationServlet):
"""A group server has invited a local user
"""
"""A group server has invited a local user"""

PATH = "/groups/local/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/invite"

@ -1076,8 +1069,7 @@ class FederationGroupsLocalInviteServlet(BaseFederationServlet):


class FederationGroupsRemoveLocalUserServlet(BaseFederationServlet):
"""A group server has removed a local user
"""
"""A group server has removed a local user"""

PATH = "/groups/local/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/remove"

@ -1093,8 +1085,7 @@ class FederationGroupsRemoveLocalUserServlet(BaseFederationServlet):


class FederationGroupsRenewAttestaionServlet(BaseFederationServlet):
"""A group or user's server renews their attestation
"""
"""A group or user's server renews their attestation"""

PATH = "/groups/(?P<group_id>[^/]*)/renew_attestation/(?P<user_id>[^/]*)"
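
Each servlet above declares a `PATH` regex whose named groups become keyword arguments to its handler once `register_paths` wires up the compiled pattern. A small, self-contained sketch of that dispatch shape (the `dispatch` function here is illustrative, not Synapse's actual plumbing):

import re

PATH = re.compile("^/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/invite$")

def dispatch(request_path, handler):
    match = PATH.match(request_path)
    if match is None:
        return None
    # Named groups are passed straight through as keyword arguments.
    return handler(**match.groupdict())

print(dispatch("/groups/g1/users/u1/invite", lambda group_id, user_id: (group_id, user_id)))
# -> ('g1', 'u1')
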
@ -1128,7 +1119,17 @@ class FederationGroupsSummaryRoomsServlet(BaseFederationServlet):
raise SynapseError(403, "requester_user_id doesn't match origin")

if category_id == "":
raise SynapseError(400, "category_id cannot be empty string")
raise SynapseError(
400, "category_id cannot be empty string", Codes.INVALID_PARAM
)

if len(category_id) > MAX_GROUP_CATEGORYID_LENGTH:
raise SynapseError(
400,
"category_id may not be longer than %s characters"
% (MAX_GROUP_CATEGORYID_LENGTH,),
Codes.INVALID_PARAM,
)

resp = await self.handler.update_group_summary_room(
group_id,
@ -1156,8 +1157,7 @@ class FederationGroupsSummaryRoomsServlet(BaseFederationServlet):


class FederationGroupsCategoriesServlet(BaseFederationServlet):
"""Get all categories for a group
"""
"""Get all categories for a group"""

PATH = "/groups/(?P<group_id>[^/]*)/categories/?"

@ -1172,8 +1172,7 @@ class FederationGroupsCategoriesServlet(BaseFederationServlet):


class FederationGroupsCategoryServlet(BaseFederationServlet):
"""Add/remove/get a category in a group
"""
"""Add/remove/get a category in a group"""

PATH = "/groups/(?P<group_id>[^/]*)/categories/(?P<category_id>[^/]+)"

@ -1196,6 +1195,14 @@ class FederationGroupsCategoryServlet(BaseFederationServlet):
if category_id == "":
raise SynapseError(400, "category_id cannot be empty string")

if len(category_id) > MAX_GROUP_CATEGORYID_LENGTH:
raise SynapseError(
400,
"category_id may not be longer than %s characters"
% (MAX_GROUP_CATEGORYID_LENGTH,),
Codes.INVALID_PARAM,
)

resp = await self.handler.upsert_group_category(
group_id, requester_user_id, category_id, content
)
@ -1218,8 +1225,7 @@ class FederationGroupsCategoryServlet(BaseFederationServlet):


class FederationGroupsRolesServlet(BaseFederationServlet):
"""Get roles in a group
"""
"""Get roles in a group"""

PATH = "/groups/(?P<group_id>[^/]*)/roles/?"

@ -1234,8 +1240,7 @@ class FederationGroupsRolesServlet(BaseFederationServlet):


class FederationGroupsRoleServlet(BaseFederationServlet):
"""Add/remove/get a role in a group
"""
"""Add/remove/get a role in a group"""

PATH = "/groups/(?P<group_id>[^/]*)/roles/(?P<role_id>[^/]+)"

@ -1254,7 +1259,17 @@ class FederationGroupsRoleServlet(BaseFederationServlet):
raise SynapseError(403, "requester_user_id doesn't match origin")

if role_id == "":
raise SynapseError(400, "role_id cannot be empty string")
raise SynapseError(
400, "role_id cannot be empty string", Codes.INVALID_PARAM
)

if len(role_id) > MAX_GROUP_ROLEID_LENGTH:
raise SynapseError(
400,
"role_id may not be longer than %s characters"
% (MAX_GROUP_ROLEID_LENGTH,),
Codes.INVALID_PARAM,
)

resp = await self.handler.update_group_role(
group_id, requester_user_id, role_id, content
@ -1299,6 +1314,14 @@ class FederationGroupsSummaryUsersServlet(BaseFederationServlet):
if role_id == "":
raise SynapseError(400, "role_id cannot be empty string")

if len(role_id) > MAX_GROUP_ROLEID_LENGTH:
raise SynapseError(
400,
"role_id may not be longer than %s characters"
% (MAX_GROUP_ROLEID_LENGTH,),
Codes.INVALID_PARAM,
)

resp = await self.handler.update_group_summary_user(
group_id,
requester_user_id,
@ -1325,8 +1348,7 @@ class FederationGroupsSummaryUsersServlet(BaseFederationServlet):


class FederationGroupsBulkPublicisedServlet(BaseFederationServlet):
"""Get roles in a group
"""
"""Get roles in a group"""

PATH = "/get_groups_publicised"

@ -1339,8 +1361,7 @@ class FederationGroupsBulkPublicisedServlet(BaseFederationServlet):


class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
"""Sets whether a group is joinable without an invite or knock
"""
"""Sets whether a group is joinable without an invite or knock"""

PATH = "/groups/(?P<group_id>[^/]*)/settings/m.join_policy"
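
The `category_id`/`role_id` checks added in the hunks above repeat one pattern: reject the empty string, then reject anything over the configured maximum, both with `M_INVALID_PARAM`. A hedged sketch of that pattern factored into a helper (the helper itself is hypothetical; only the constants, `Codes` and `SynapseError` come from the diff):

from synapse.api.constants import MAX_GROUP_CATEGORYID_LENGTH
from synapse.api.errors import Codes, SynapseError

def check_group_id_param(name: str, value: str, max_length: int) -> None:
    # Empty IDs are rejected outright...
    if value == "":
        raise SynapseError(400, "%s cannot be empty string" % (name,), Codes.INVALID_PARAM)
    # ...as are IDs longer than the spec-compliant maximum.
    if len(value) > max_length:
        raise SynapseError(
            400,
            "%s may not be longer than %s characters" % (name, max_length),
            Codes.INVALID_PARAM,
        )

# Usage, mirroring the servlet code above:
# check_group_id_param("category_id", category_id, MAX_GROUP_CATEGORYID_LENGTH)
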
@ -29,7 +29,7 @@ logger = logging.getLogger(__name__)

@attr.s(slots=True)
class Edu(JsonEncodedObject):
""" An Edu represents a piece of data sent from one homeserver to another.
"""An Edu represents a piece of data sent from one homeserver to another.

In comparison to Pdus, Edus are not persisted for a long time on disk, are
not meaningful beyond a given pair of homeservers, and don't have an
@ -63,7 +63,7 @@ class Edu(JsonEncodedObject):


class Transaction(JsonEncodedObject):
""" A transaction is a list of Pdus and Edus to be sent to a remote home
"""A transaction is a list of Pdus and Edus to be sent to a remote home
server with some extra metadata.

Example transaction::
@ -99,7 +99,7 @@ class Transaction(JsonEncodedObject):
]

def __init__(self, transaction_id=None, pdus=[], **kwargs):
""" If we include a list of pdus then we decode then as PDU's
"""If we include a list of pdus then we decode then as PDU's
automatically.
"""

@ -111,7 +111,7 @@ class Transaction(JsonEncodedObject):

@staticmethod
def create_new(pdus, **kwargs):
""" Used to create a new transaction. Will auto fill out
"""Used to create a new transaction. Will auto fill out
transaction_id and origin_server_ts keys.
"""
if "origin_server_ts" not in kwargs:
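
One wart visible in the `Transaction` hunk above is the `pdus=[]` default: a mutable default argument is evaluated once at definition time and shared by every call that omits it. A sketch of the conventional fix (not part of this commit):

def make_transaction(transaction_id=None, pdus=None, **kwargs):
    # Use None as the sentinel and allocate a fresh list per call, so one
    # caller's appends can never leak into another caller's transaction.
    pdus = [] if pdus is None else pdus
    return {"transaction_id": transaction_id, "pdus": pdus, **kwargs}
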
@ -37,13 +37,16 @@ An attestation is a signed blob of json that looks like:

import logging
import random
from typing import Tuple
from typing import TYPE_CHECKING, Optional, Tuple

from signedjson.sign import sign_json

from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import get_domain_from_id
from synapse.types import JsonDict, get_domain_from_id

if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer

logger = logging.getLogger(__name__)

@ -61,18 +64,21 @@ UPDATE_ATTESTATION_TIME_MS = 1 * 24 * 60 * 60 * 1000


class GroupAttestationSigning:
"""Creates and verifies group attestations.
"""
"""Creates and verifies group attestations."""

def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.keyring = hs.get_keyring()
self.clock = hs.get_clock()
self.server_name = hs.hostname
self.signing_key = hs.signing_key

async def verify_attestation(
self, attestation, group_id, user_id, server_name=None
):
self,
attestation: JsonDict,
group_id: str,
user_id: str,
server_name: Optional[str] = None,
) -> None:
"""Verifies that the given attestation matches the given parameters.

An optional server_name can be supplied to explicitly set which server's
@ -101,16 +107,18 @@ class GroupAttestationSigning:
if valid_until_ms < now:
raise SynapseError(400, "Attestation expired")

assert server_name is not None
await self.keyring.verify_json_for_server(
server_name, attestation, now, "Group attestation"
)

def create_attestation(self, group_id, user_id):
def create_attestation(self, group_id: str, user_id: str) -> JsonDict:
"""Create an attestation for the group_id and user_id with default
validity length.
"""
validity_period = DEFAULT_ATTESTATION_LENGTH_MS
validity_period *= random.uniform(*DEFAULT_ATTESTATION_JITTER)
validity_period = DEFAULT_ATTESTATION_LENGTH_MS * random.uniform(
*DEFAULT_ATTESTATION_JITTER
)
valid_until_ms = int(self.clock.time_msec() + validity_period)

return sign_json(
@ -125,10 +133,9 @@ class GroupAttestationSigning:


class GroupAttestionRenewer:
"""Responsible for sending and receiving attestation updates.
"""
"""Responsible for sending and receiving attestation updates."""

def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.assestations = hs.get_groups_attestation_signing()
@ -141,9 +148,10 @@ class GroupAttestionRenewer:
self._start_renew_attestations, 30 * 60 * 1000
)

async def on_renew_attestation(self, group_id, user_id, content):
"""When a remote updates an attestation
"""
async def on_renew_attestation(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
"""When a remote updates an attestation"""
attestation = content["attestation"]

if not self.is_mine_id(group_id) and not self.is_mine_id(user_id):
@ -157,12 +165,11 @@ class GroupAttestionRenewer:

return {}

def _start_renew_attestations(self):
def _start_renew_attestations(self) -> None:
return run_as_background_process("renew_attestations", self._renew_attestations)

async def _renew_attestations(self):
"""Called periodically to check if we need to update any of our attestations
"""
async def _renew_attestations(self) -> None:
"""Called periodically to check if we need to update any of our attestations"""

now = self.clock.time_msec()

@ -170,7 +177,7 @@ class GroupAttestionRenewer:
now + UPDATE_ATTESTATION_TIME_MS
)

async def _renew_attestation(group_user: Tuple[str, str]):
async def _renew_attestation(group_user: Tuple[str, str]) -> None:
group_id, user_id = group_user
try:
if not self.is_mine_id(group_id):
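
`create_attestation` above scales the base validity by a random jitter factor, so renewals for many (group, user) pairs spread out instead of all falling due at the same instant. A self-contained sketch of that computation (the constant values are assumptions for illustration; the diff only shows the names):

import random

DEFAULT_ATTESTATION_LENGTH_MS = 3 * 24 * 60 * 60 * 1000  # assumed: 3 days
DEFAULT_ATTESTATION_JITTER = (0.9, 1.3)                   # assumed jitter bounds

def attestation_valid_until(now_ms: int) -> int:
    # Multiply the base lifetime by a uniform random factor; int() keeps the
    # signed JSON field an integer millisecond timestamp.
    validity_period = DEFAULT_ATTESTATION_LENGTH_MS * random.uniform(
        *DEFAULT_ATTESTATION_JITTER
    )
    return int(now_ms + validity_period)
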
@ -16,11 +16,17 @@
# limitations under the License.

import logging
from typing import TYPE_CHECKING, Optional

from synapse.api.errors import Codes, SynapseError
from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
from synapse.handlers.groups_local import GroupsLocalHandler
from synapse.handlers.profile import MAX_AVATAR_URL_LEN, MAX_DISPLAYNAME_LEN
from synapse.types import GroupID, JsonDict, RoomID, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute

if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer

logger = logging.getLogger(__name__)


@ -32,8 +38,13 @@ logger = logging.getLogger(__name__)
# TODO: Flairs


# Note that the maximum lengths are somewhat arbitrary.
MAX_SHORT_DESC_LEN = 1000
MAX_LONG_DESC_LEN = 10000


class GroupsServerWorkerHandler:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastore()
self.room_list_handler = hs.get_room_list_handler()
@ -48,16 +59,21 @@ class GroupsServerWorkerHandler:
self.profile_handler = hs.get_profile_handler()

async def check_group_is_ours(
self, group_id, requester_user_id, and_exists=False, and_is_admin=None
):
self,
group_id: str,
requester_user_id: str,
and_exists: bool = False,
and_is_admin: Optional[str] = None,
) -> Optional[dict]:
"""Check that the group is ours, and optionally if it exists.

If group does exist then return group.

Args:
group_id (str)
and_exists (bool): whether to also check if group exists
and_is_admin (str): whether to also check if given str is a user_id
group_id: The group ID to check.
requester_user_id: The user ID of the requester.
and_exists: whether to also check if group exists
and_is_admin: whether to also check if given str is a user_id
that is an admin
"""
if not self.is_mine_id(group_id):
@ -80,7 +96,9 @@ class GroupsServerWorkerHandler:

return group

async def get_group_summary(self, group_id, requester_user_id):
async def get_group_summary(
self, group_id: str, requester_user_id: str
) -> JsonDict:
"""Get the summary for a group as seen by requester_user_id.

The group summary consists of the profile of the room, and a curated
@ -113,6 +131,8 @@ class GroupsServerWorkerHandler:
entry = await self.room_list_handler.generate_room_entry(
room_id, len(joined_users), with_alias=False, allow_private=True
)
if entry is None:
continue
entry = dict(entry) # so we don't change what's cached
entry.pop("room_id", None)

@ -120,22 +140,22 @@ class GroupsServerWorkerHandler:

rooms.sort(key=lambda e: e.get("order", 0))

for entry in users:
user_id = entry["user_id"]
for user in users:
user_id = user["user_id"]

if not self.is_mine_id(requester_user_id):
attestation = await self.store.get_remote_attestation(group_id, user_id)
if not attestation:
continue

entry["attestation"] = attestation
user["attestation"] = attestation
else:
entry["attestation"] = self.attestations.create_attestation(
user["attestation"] = self.attestations.create_attestation(
group_id, user_id
)

user_profile = await self.profile_handler.get_profile_from_cache(user_id)
entry.update(user_profile)
user.update(user_profile)

users.sort(key=lambda e: e.get("order", 0))
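
The `entry = dict(entry)` line kept in the summary hunk above matters: `generate_room_entry` can return a cached dict, so a shallow copy is taken before `pop`/`update` mutate it. A minimal illustration of why:

cached = {"room_id": "!abc:example.org", "name": "A room"}

entry = dict(cached)        # shallow copy, leaving the cached dict intact
entry.pop("room_id", None)

assert "room_id" in cached  # the cache still sees the original keys
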
@ -158,46 +178,44 @@ class GroupsServerWorkerHandler:
"user": membership_info,
}

async def get_group_categories(self, group_id, requester_user_id):
"""Get all categories in a group (as seen by user)
"""
async def get_group_categories(
self, group_id: str, requester_user_id: str
) -> JsonDict:
"""Get all categories in a group (as seen by user)"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)

categories = await self.store.get_group_categories(group_id=group_id)
return {"categories": categories}

async def get_group_category(self, group_id, requester_user_id, category_id):
"""Get a specific category in a group (as seen by user)
"""
async def get_group_category(
self, group_id: str, requester_user_id: str, category_id: str
) -> JsonDict:
"""Get a specific category in a group (as seen by user)"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)

res = await self.store.get_group_category(
return await self.store.get_group_category(
group_id=group_id, category_id=category_id
)

logger.info("group %s", res)

return res

async def get_group_roles(self, group_id, requester_user_id):
"""Get all roles in a group (as seen by user)
"""
async def get_group_roles(self, group_id: str, requester_user_id: str) -> JsonDict:
"""Get all roles in a group (as seen by user)"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)

roles = await self.store.get_group_roles(group_id=group_id)
return {"roles": roles}

async def get_group_role(self, group_id, requester_user_id, role_id):
"""Get a specific role in a group (as seen by user)
"""
async def get_group_role(
self, group_id: str, requester_user_id: str, role_id: str
) -> JsonDict:
"""Get a specific role in a group (as seen by user)"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)

res = await self.store.get_group_role(group_id=group_id, role_id=role_id)
return res
return await self.store.get_group_role(group_id=group_id, role_id=role_id)

async def get_group_profile(self, group_id, requester_user_id):
"""Get the group profile as seen by requester_user_id
"""
async def get_group_profile(
self, group_id: str, requester_user_id: str
) -> JsonDict:
"""Get the group profile as seen by requester_user_id"""

await self.check_group_is_ours(group_id, requester_user_id)

@ -218,7 +236,9 @@ class GroupsServerWorkerHandler:
else:
raise SynapseError(404, "Unknown group")

async def get_users_in_group(self, group_id, requester_user_id):
async def get_users_in_group(
self, group_id: str, requester_user_id: str
) -> JsonDict:
"""Get the users in group as seen by requester_user_id.

The ordering is arbitrary at the moment
@ -267,7 +287,9 @@ class GroupsServerWorkerHandler:

return {"chunk": chunk, "total_user_count_estimate": len(user_results)}

async def get_invited_users_in_group(self, group_id, requester_user_id):
async def get_invited_users_in_group(
self, group_id: str, requester_user_id: str
) -> JsonDict:
"""Get the users that have been invited to a group as seen by requester_user_id.

The ordering is arbitrary at the moment
@ -297,7 +319,9 @@ class GroupsServerWorkerHandler:

return {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}

async def get_rooms_in_group(self, group_id, requester_user_id):
async def get_rooms_in_group(
self, group_id: str, requester_user_id: str
) -> JsonDict:
"""Get the rooms in group as seen by requester_user_id

This returns rooms in order of decreasing number of joined users
@ -335,17 +359,21 @@ class GroupsServerWorkerHandler:


class GroupsServerHandler(GroupsServerWorkerHandler):
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)

# Ensure attestations get renewed
hs.get_groups_attestation_renewer()

async def update_group_summary_room(
self, group_id, requester_user_id, room_id, category_id, content
):
"""Add/update a room to the group summary
"""
self,
group_id: str,
requester_user_id: str,
room_id: str,
category_id: str,
content: JsonDict,
) -> JsonDict:
"""Add/update a room to the group summary"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@ -367,10 +395,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}

async def delete_group_summary_room(
self, group_id, requester_user_id, room_id, category_id
):
"""Remove a room from the summary
"""
self, group_id: str, requester_user_id: str, room_id: str, category_id: str
) -> JsonDict:
"""Remove a room from the summary"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@ -381,7 +408,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {}

async def set_group_join_policy(self, group_id, requester_user_id, content):
async def set_group_join_policy(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
"""Sets the group join policy.

Currently supported policies are:
@ -401,10 +430,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}

async def update_group_category(
self, group_id, requester_user_id, category_id, content
):
"""Add/Update a group category
"""
self, group_id: str, requester_user_id: str, category_id: str, content: JsonDict
) -> JsonDict:
"""Add/Update a group category"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@ -421,9 +449,10 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {}

async def delete_group_category(self, group_id, requester_user_id, category_id):
"""Delete a group category
"""
async def delete_group_category(
self, group_id: str, requester_user_id: str, category_id: str
) -> JsonDict:
"""Delete a group category"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@ -434,9 +463,10 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {}

async def update_group_role(self, group_id, requester_user_id, role_id, content):
"""Add/update a role in a group
"""
async def update_group_role(
self, group_id: str, requester_user_id: str, role_id: str, content: JsonDict
) -> JsonDict:
"""Add/update a role in a group"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@ -451,9 +481,10 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {}

async def delete_group_role(self, group_id, requester_user_id, role_id):
"""Remove role from group
"""
async def delete_group_role(
self, group_id: str, requester_user_id: str, role_id: str
) -> JsonDict:
"""Remove role from group"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@ -463,10 +494,14 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}

async def update_group_summary_user(
self, group_id, requester_user_id, user_id, role_id, content
):
"""Add/update a users entry in the group summary
"""
self,
group_id: str,
requester_user_id: str,
user_id: str,
role_id: str,
content: JsonDict,
) -> JsonDict:
"""Add/update a users entry in the group summary"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@ -486,10 +521,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}

async def delete_group_summary_user(
self, group_id, requester_user_id, user_id, role_id
):
"""Remove a user from the group summary
"""
self, group_id: str, requester_user_id: str, user_id: str, role_id: str
) -> JsonDict:
"""Remove a user from the group summary"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@ -500,26 +534,43 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {}

async def update_group_profile(self, group_id, requester_user_id, content):
"""Update the group profile
"""
async def update_group_profile(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> None:
"""Update the group profile"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)

profile = {}
for keyname in ("name", "avatar_url", "short_description", "long_description"):
for keyname, max_length in (
("name", MAX_DISPLAYNAME_LEN),
("avatar_url", MAX_AVATAR_URL_LEN),
("short_description", MAX_SHORT_DESC_LEN),
("long_description", MAX_LONG_DESC_LEN),
):
if keyname in content:
value = content[keyname]
if not isinstance(value, str):
raise SynapseError(400, "%r value is not a string" % (keyname,))
raise SynapseError(
400,
"%r value is not a string" % (keyname,),
errcode=Codes.INVALID_PARAM,
)
if len(value) > max_length:
raise SynapseError(
400,
"Invalid %s parameter" % (keyname,),
errcode=Codes.INVALID_PARAM,
)
profile[keyname] = value

await self.store.update_group_profile(group_id, profile)

async def add_room_to_group(self, group_id, requester_user_id, room_id, content):
"""Add room to group
"""
async def add_room_to_group(
self, group_id: str, requester_user_id: str, room_id: str, content: JsonDict
) -> JsonDict:
"""Add room to group"""
RoomID.from_string(room_id) # Ensure valid room id

await self.check_group_is_ours(
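
`update_group_profile` above switches to a table-driven loop pairing each profile field with its own maximum length. A hedged sketch of the same shape usable outside Synapse (the two `DESC` limits come from the diff; the displayname/avatar limits are assumed values, since the diff only imports their names):

MAX_SHORT_DESC_LEN = 1000    # from the diff
MAX_LONG_DESC_LEN = 10000    # from the diff
MAX_DISPLAYNAME_LEN = 256    # assumed value for illustration
MAX_AVATAR_URL_LEN = 1000    # assumed value for illustration

def validate_group_profile(content: dict) -> dict:
    profile = {}
    for keyname, max_length in (
        ("name", MAX_DISPLAYNAME_LEN),
        ("avatar_url", MAX_AVATAR_URL_LEN),
        ("short_description", MAX_SHORT_DESC_LEN),
        ("long_description", MAX_LONG_DESC_LEN),
    ):
        if keyname in content:
            value = content[keyname]
            # Non-strings and over-long values are both client errors.
            if not isinstance(value, str) or len(value) > max_length:
                raise ValueError("Invalid %s parameter" % (keyname,))
            profile[keyname] = value
    return profile
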
@ -533,10 +584,14 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}

async def update_room_in_group(
self, group_id, requester_user_id, room_id, config_key, content
):
"""Update room in group
"""
self,
group_id: str,
requester_user_id: str,
room_id: str,
config_key: str,
content: JsonDict,
) -> JsonDict:
"""Update room in group"""
RoomID.from_string(room_id) # Ensure valid room id

await self.check_group_is_ours(
@ -554,9 +609,10 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {}

async def remove_room_from_group(self, group_id, requester_user_id, room_id):
"""Remove room from group
"""
async def remove_room_from_group(
self, group_id: str, requester_user_id: str, room_id: str
) -> JsonDict:
"""Remove room from group"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@ -565,13 +621,16 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {}

async def invite_to_group(self, group_id, user_id, requester_user_id, content):
"""Invite user to group
"""
async def invite_to_group(
self, group_id: str, user_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
"""Invite user to group"""

group = await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
if not group:
raise SynapseError(400, "Group does not exist", errcode=Codes.BAD_STATE)

# TODO: Check if user knocked

@ -594,6 +653,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

if self.hs.is_mine_id(user_id):
groups_local = self.hs.get_groups_local_handler()
assert isinstance(
groups_local, GroupsLocalHandler
), "Workers cannot invites users to groups."
res = await groups_local.on_invite(group_id, user_id, content)
local_attestation = None
else:
@ -629,6 +691,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
local_attestation=local_attestation,
remote_attestation=remote_attestation,
)
return {"state": "join"}
elif res["state"] == "invite":
await self.store.add_group_invite(group_id, user_id)
return {"state": "invite"}
@ -637,13 +700,17 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
else:
raise SynapseError(502, "Unknown state returned by HS")

async def _add_user(self, group_id, user_id, content):
async def _add_user(
self, group_id: str, user_id: str, content: JsonDict
) -> Optional[JsonDict]:
"""Add a user to a group based on a content dict.

See accept_invite, join_group.
"""
if not self.hs.is_mine_id(user_id):
local_attestation = self.attestations.create_attestation(group_id, user_id)
local_attestation = self.attestations.create_attestation(
group_id, user_id
) # type: Optional[JsonDict]

remote_attestation = content["attestation"]

@ -667,7 +734,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return local_attestation

async def accept_invite(self, group_id, requester_user_id, content):
async def accept_invite(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
"""User tries to accept an invite to the group.

This is different from them asking to join, and so should error if no
@ -686,7 +755,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {"state": "join", "attestation": local_attestation}

async def join_group(self, group_id, requester_user_id, content):
async def join_group(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
"""User tries to join the group.

This will error if the group requires an invite/knock to join
@ -695,6 +766,8 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
group_info = await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True
)
if not group_info:
raise SynapseError(404, "Group does not exist", errcode=Codes.NOT_FOUND)
if group_info["join_policy"] != "open":
raise SynapseError(403, "Group is not publicly joinable")

@ -702,26 +775,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {"state": "join", "attestation": local_attestation}

async def knock(self, group_id, requester_user_id, content):
"""A user requests becoming a member of the group
"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)

raise NotImplementedError()

async def accept_knock(self, group_id, requester_user_id, content):
"""Accept a users knock to the room.

Errors if the user hasn't knocked, rather than inviting them.
"""

await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)

raise NotImplementedError()

async def remove_user_from_group(
self, group_id, user_id, requester_user_id, content
):
self, group_id: str, user_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
"""Remove a user from the group; either a user is leaving or an admin
kicked them.
"""
@ -743,6 +799,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
if is_kick:
if self.hs.is_mine_id(user_id):
groups_local = self.hs.get_groups_local_handler()
assert isinstance(
groups_local, GroupsLocalHandler
), "Workers cannot remove users from groups."
await groups_local.user_removed_from_group(group_id, user_id, {})
else:
await self.transport_client.remove_user_from_group_notification(
@ -759,14 +818,15 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {}

async def create_group(self, group_id, requester_user_id, content):
group = await self.check_group_is_ours(group_id, requester_user_id)

async def create_group(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
logger.info("Attempting to create group with ID: %r", group_id)

# parsing the id into a GroupID validates it.
group_id_obj = GroupID.from_string(group_id)

group = await self.check_group_is_ours(group_id, requester_user_id)
if group:
raise SynapseError(400, "Group already exists")

@ -811,7 +871,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

local_attestation = self.attestations.create_attestation(
group_id, requester_user_id
)
) # type: Optional[JsonDict]
else:
local_attestation = None
remote_attestation = None
@ -834,15 +894,14 @@ class GroupsServerHandler(GroupsServerWorkerHandler):

return {"group_id": group_id}

async def delete_group(self, group_id, requester_user_id):
async def delete_group(self, group_id: str, requester_user_id: str) -> None:
"""Deletes a group, kicking out all current members.

Only group admins or server admins can call this request

Args:
group_id (str)
request_user_id (str)

group_id: The group ID to delete.
requester_user_id: The user requesting to delete the group.
"""

await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
@ -865,6 +924,9 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
async def _kick_user_from_group(user_id):
if self.hs.is_mine_id(user_id):
groups_local = self.hs.get_groups_local_handler()
assert isinstance(
groups_local, GroupsLocalHandler
), "Workers cannot kick users from groups."
await groups_local.user_removed_from_group(group_id, user_id, {})
else:
await self.transport_client.remove_user_from_group_notification(
@ -896,9 +958,8 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
await self.store.delete_group(group_id)


def _parse_join_policy_from_contents(content):
"""Given a content for a request, return the specified join policy or None
"""
def _parse_join_policy_from_contents(content: JsonDict) -> Optional[str]:
"""Given a content for a request, return the specified join policy or None"""

join_policy_dict = content.get("m.join_policy")
if join_policy_dict:
@ -907,9 +968,8 @@ def _parse_join_policy_from_contents(content):
return None


def _parse_join_policy_dict(join_policy_dict):
"""Given a dict for the "m.join_policy" config return the join policy specified
"""
def _parse_join_policy_dict(join_policy_dict: JsonDict) -> str:
"""Given a dict for the "m.join_policy" config return the join policy specified"""
join_policy_type = join_policy_dict.get("type")
if not join_policy_type:
return "invite"
@ -919,7 +979,7 @@ def _parse_join_policy_dict(join_policy_dict):
return join_policy_type


def _parse_visibility_from_contents(content):
def _parse_visibility_from_contents(content: JsonDict) -> bool:
"""Given a content for a request parse out whether the entity should be
public or not
"""
@ -933,7 +993,7 @@ def _parse_visibility_from_contents(content):
return is_public


def _parse_visibility_dict(visibility):
def _parse_visibility_dict(visibility: JsonDict) -> bool:
"""Given a dict for the "m.visibility" config return if the entity should
be public or not
"""
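
The module-level parsers typed in the last hunks above implement a small decision chain: no policy dict means None, a dict without a type means the default, and only known types pass. A self-contained sketch, under the assumption that "invite" and "open" are the accepted types:

def parse_join_policy(content: dict):
    join_policy_dict = content.get("m.join_policy")
    if not join_policy_dict:
        return None
    join_policy_type = join_policy_dict.get("type")
    if not join_policy_type:
        return "invite"  # default when a policy dict omits its type
    if join_policy_type not in ("invite", "open"):
        raise ValueError("Unknown join policy type: %r" % (join_policy_type,))
    return join_policy_type

assert parse_join_policy({}) is None
assert parse_join_policy({"m.join_policy": {}}) == "invite"
assert parse_join_policy({"m.join_policy": {"type": "open"}}) == "open"
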
@ -203,13 +203,11 @@ class AdminHandler(BaseHandler):


class ExfiltrationWriter(metaclass=abc.ABCMeta):
"""Interface used to specify how to write exported data.
"""
"""Interface used to specify how to write exported data."""

@abc.abstractmethod
def write_events(self, room_id: str, events: List[EventBase]) -> None:
"""Write a batch of events for a room.
"""
"""Write a batch of events for a room."""
raise NotImplementedError()

@abc.abstractmethod
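
`ExfiltrationWriter` above is an abstract interface; data-export code calls its `write_*` hooks as it walks a user's rooms. A hypothetical in-memory implementation of the one method shown (the real interface declares further abstract methods outside this hunk, so this class is illustrative rather than directly instantiable):

from collections import defaultdict

class InMemoryExfiltrationWriter(ExfiltrationWriter):
    """Collects exported events per room; illustration only."""

    def __init__(self):
        self.events_by_room = defaultdict(list)

    def write_events(self, room_id, events):
        # Append each batch in arrival order, keyed by room.
        self.events_by_room[room_id].extend(events)
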
@ -290,7 +290,9 @@ class ApplicationServicesHandler:
if not interested:
continue
presence_events, _ = await presence_source.get_new_events(
user=user, service=service, from_key=from_key,
user=user,
service=service,
from_key=from_key,
)
time_now = self.clock.time_msec()
events.extend(
@ -120,7 +120,9 @@ def convert_client_dict_legacy_fields_to_identifier(
# Ensure the identifier has a type
if "type" not in identifier:
raise SynapseError(
400, "'identifier' dict has no key 'type'", errcode=Codes.MISSING_PARAM,
400,
"'identifier' dict has no key 'type'",
errcode=Codes.MISSING_PARAM,
)

return identifier
@ -351,7 +353,11 @@ class AuthHandler(BaseHandler):

try:
result, params, session_id = await self.check_ui_auth(
flows, request, request_body, description, get_new_session_data,
flows,
request,
request_body,
description,
get_new_session_data,
)
except LoginError:
# Update the ratelimiter to say we failed (`can_do_action` doesn't raise).
@ -379,8 +385,7 @@ class AuthHandler(BaseHandler):
return params, session_id

async def _get_available_ui_auth_types(self, user: UserID) -> Iterable[str]:
"""Get a list of the authentication types this user can use
"""
"""Get a list of the authentication types this user can use"""

ui_auth_types = set()

@ -723,7 +728,9 @@ class AuthHandler(BaseHandler):
}

def _auth_dict_for_flows(
self, flows: List[List[str]], session_id: str,
self,
flows: List[List[str]],
session_id: str,
) -> Dict[str, Any]:
public_flows = []
for f in flows:
@ -880,7 +887,9 @@ class AuthHandler(BaseHandler):
return self._supported_login_types

async def validate_login(
self, login_submission: Dict[str, Any], ratelimit: bool = False,
self,
login_submission: Dict[str, Any],
ratelimit: bool = False,
) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
"""Authenticates the user for the /login API

@ -1023,7 +1032,9 @@ class AuthHandler(BaseHandler):
raise

async def _validate_userid_login(
self, username: str, login_submission: Dict[str, Any],
self,
username: str,
login_submission: Dict[str, Any],
) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
"""Helper for validate_login

@ -1446,7 +1457,8 @@ class AuthHandler(BaseHandler):
# is considered OK since the newest SSO attributes should be most valid.
if extra_attributes:
self._extra_attributes[registered_user_id] = SsoLoginExtraAttributes(
self._clock.time_msec(), extra_attributes,
self._clock.time_msec(),
extra_attributes,
)

# Create a login token
@ -1472,10 +1484,22 @@ class AuthHandler(BaseHandler):
# Remove the query parameters from the redirect URL to get a shorter version of
# it. This is only to display a human-readable URL in the template, but not the
# URL we redirect users to.
redirect_url_no_params = client_redirect_url.split("?")[0]
url_parts = urllib.parse.urlsplit(client_redirect_url)

if url_parts.scheme == "https":
# for an https uri, just show the netloc (ie, the hostname. Specifically,
# the bit between "//" and "/"; this includes any potential
# "username:password@" prefix.)
display_url = url_parts.netloc
else:
# for other uris, strip the query-params (including the login token) and
# fragment.
display_url = urllib.parse.urlunsplit(
(url_parts.scheme, url_parts.netloc, url_parts.path, "", "")
)

html = self._sso_redirect_confirm_template.render(
display_url=redirect_url_no_params,
display_url=display_url,
redirect_url=redirect_url,
server_name=self._server_name,
new_user=new_user,
@ -1690,5 +1714,9 @@ class PasswordProvider:
# This might return an awaitable, if it does block the log out
# until it completes.
await maybe_awaitable(
g(user_id=user_id, device_id=device_id, access_token=access_token,)
g(
user_id=user_id,
device_id=device_id,
access_token=access_token,
)
)
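
The new display-URL logic above is worth restating as a standalone function: https URLs are reduced to their host, while custom-scheme URLs (common for mobile-app callbacks) keep scheme, host and path but lose the query string, and with it the login token. A sketch with usage:

import urllib.parse

def display_url_for(client_redirect_url: str) -> str:
    url_parts = urllib.parse.urlsplit(client_redirect_url)
    if url_parts.scheme == "https":
        # Show just the host (including any user:password@ prefix).
        return url_parts.netloc
    # Strip query parameters and fragment, keeping scheme/host/path.
    return urllib.parse.urlunsplit(
        (url_parts.scheme, url_parts.netloc, url_parts.path, "", "")
    )

assert display_url_for("https://app.example.com/cb?loginToken=abc") == "app.example.com"
assert display_url_for("example.app://callback?loginToken=abc") == "example.app://callback"
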
@ -14,7 +14,7 @@
# limitations under the License.
import logging
import urllib.parse
from typing import TYPE_CHECKING, Dict, Optional
from typing import TYPE_CHECKING, Dict, List, Optional
from xml.etree import ElementTree as ET

import attr
@ -33,8 +33,7 @@ logger = logging.getLogger(__name__)


class CasError(Exception):
"""Used to catch errors when validating the CAS ticket.
"""
"""Used to catch errors when validating the CAS ticket."""

def __init__(self, error, error_description=None):
self.error = error
@ -49,7 +48,7 @@ class CasError(Exception):
@attr.s(slots=True, frozen=True)
class CasResponse:
username = attr.ib(type=str)
attributes = attr.ib(type=Dict[str, Optional[str]])
attributes = attr.ib(type=Dict[str, List[Optional[str]]])


class CasHandler:
@ -100,7 +99,10 @@ class CasHandler:
Returns:
The URL to use as a "service" parameter.
"""
return "%s?%s" % (self._cas_service_url, urllib.parse.urlencode(args),)
return "%s?%s" % (
self._cas_service_url,
urllib.parse.urlencode(args),
)

async def _validate_ticket(
self, ticket: str, service_args: Dict[str, str]
@ -169,7 +171,7 @@ class CasHandler:

# Iterate through the nodes and pull out the user and any extra attributes.
user = None
attributes = {}
attributes = {} # type: Dict[str, List[Optional[str]]]
for child in root[0]:
if child.tag.endswith("user"):
user = child.text
@ -182,7 +184,7 @@ class CasHandler:
tag = attribute.tag
if "}" in tag:
tag = tag.split("}")[1]
attributes[tag] = attribute.text
attributes.setdefault(tag, []).append(attribute.text)

# Ensure a user was found.
if user is None:
@ -296,36 +298,20 @@ class CasHandler:
# first check if we're doing a UIA
if session:
return await self._sso_handler.complete_sso_ui_auth_request(
self.idp_id, cas_response.username, session, request,
self.idp_id,
cas_response.username,
session,
request,
)

# otherwise, we're handling a login request.

# Ensure that the attributes of the logged in user meet the required
# attributes.
for required_attribute, required_value in self._cas_required_attributes.items():
# If required attribute was not in CAS Response - Forbidden
if required_attribute not in cas_response.attributes:
self._sso_handler.render_error(
request,
"unauthorised",
"You are not authorised to log in here.",
401,
)
return

# Also need to check value
if required_value is not None:
actual_value = cas_response.attributes[required_attribute]
# If required attribute value does not match expected - Forbidden
if required_value != actual_value:
self._sso_handler.render_error(
request,
"unauthorised",
"You are not authorised to log in here.",
401,
)
return
if not self._sso_handler.check_required_attributes(
request, cas_response.attributes, self._cas_required_attributes
):
return

# Call the mapper to register/login the user

@ -372,9 +358,10 @@ class CasHandler:
if failures:
raise RuntimeError("CAS is not expected to de-duplicate Matrix IDs")

# Arbitrarily use the first attribute found.
display_name = cas_response.attributes.get(
self._cas_displayname_attribute, None
)
self._cas_displayname_attribute, [None]
)[0]

return UserAttributes(localpart=localpart, display_name=display_name)

@ -384,7 +371,8 @@ class CasHandler:
user_id = UserID(localpart, self._hostname).to_string()

logger.debug(
"Looking for existing account based on mapped %s", user_id,
"Looking for existing account based on mapped %s",
user_id,
)

users = await self._store.get_users_by_id_case_insensitive(user_id)
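
The CAS change above (`Dict[str, List[Optional[str]]]` plus `setdefault(...).append(...)`) means repeated attribute tags now accumulate rather than each occurrence silently overwriting the previous one; single-valued consumers then take the first entry. A minimal illustration:

attributes = {}
for tag, text in [("affiliation", "staff"), ("affiliation", "student")]:
    # Repeated tags collect into a list instead of overwriting.
    attributes.setdefault(tag, []).append(text)

assert attributes == {"affiliation": ["staff", "student"]}

# Single-valued consumers arbitrarily take the first entry, mirroring
# `attributes.get(name, [None])[0]` in the diff:
display_name = attributes.get("displayname", [None])[0]
assert display_name is None
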
@ -196,8 +196,7 @@ class DeactivateAccountHandler(BaseHandler):
run_as_background_process("user_parter_loop", self._user_parter_loop)

async def _user_parter_loop(self) -> None:
"""Loop that parts deactivated users from rooms
"""
"""Loop that parts deactivated users from rooms"""
self._user_parter_running = True
logger.info("Starting user parter")
try:
@ -214,8 +213,7 @@ class DeactivateAccountHandler(BaseHandler):
self._user_parter_running = False

async def _part_user(self, user_id: str) -> None:
"""Causes the given user_id to leave all the rooms they're joined to
"""
"""Causes the given user_id to leave all the rooms they're joined to"""
user = UserID.from_string(user_id)

rooms_for_user = await self.store.get_rooms_for_user(user_id)
@ -86,7 +86,7 @@ class DeviceWorkerHandler(BaseHandler):

@trace
async def get_device(self, user_id: str, device_id: str) -> JsonDict:
""" Retrieve the given device
"""Retrieve the given device

Args:
user_id: The user to get the device from
@ -341,7 +341,7 @@ class DeviceHandler(DeviceWorkerHandler):

@trace
async def delete_device(self, user_id: str, device_id: str) -> None:
""" Delete the given device
"""Delete the given device

Args:
user_id: The user to delete the device from.
@ -386,7 +386,7 @@ class DeviceHandler(DeviceWorkerHandler):
await self.delete_devices(user_id, device_ids)

async def delete_devices(self, user_id: str, device_ids: List[str]) -> None:
""" Delete several devices
"""Delete several devices

Args:
user_id: The user to delete devices from.
@ -417,7 +417,7 @@ class DeviceHandler(DeviceWorkerHandler):
await self.notify_device_update(user_id, device_ids)

async def update_device(self, user_id: str, device_id: str, content: dict) -> None:
""" Update the given device
"""Update the given device

Args:
user_id: The user to update devices of.
@ -534,7 +534,9 @@ class DeviceHandler(DeviceWorkerHandler):
device id of the dehydrated device
"""
device_id = await self.check_device_registered(
user_id, None, initial_device_display_name,
user_id,
None,
initial_device_display_name,
)
old_device_id = await self.store.store_dehydrated_device(
user_id, device_id, device_data
@ -803,7 +805,8 @@ class DeviceListUpdater:
try:
# Try to resync the current user's devices list.
result = await self.user_device_resync(
user_id=user_id, mark_failed_as_stale=False,
user_id=user_id,
mark_failed_as_stale=False,
)

# user_device_resync only returns a result if it managed to
@ -813,14 +816,17 @@ class DeviceListUpdater:
# self.store.update_remote_device_list_cache).
if result:
logger.debug(
"Successfully resynced the device list for %s", user_id,
"Successfully resynced the device list for %s",
user_id,
)
except Exception as e:
# If there was an issue resyncing this user, e.g. if the remote
# server sent a malformed result, just log the error instead of
# aborting all the subsequent resyncs.
logger.debug(
"Could not resync the device list for %s: %s", user_id, e,
"Could not resync the device list for %s: %s",
user_id,
e,
)
finally:
# Allow future calls to retry resyncinc out of sync device lists.
@ -855,7 +861,9 @@ class DeviceListUpdater:
return None
except (RequestSendFailed, HttpResponseException) as e:
logger.warning(
"Failed to handle device list update for %s: %s", user_id, e,
"Failed to handle device list update for %s: %s",
user_id,
e,
)

if mark_failed_as_stale:
@ -931,7 +939,9 @@ class DeviceListUpdater:

# Handle cross-signing keys.
cross_signing_device_ids = await self.process_cross_signing_key_update(
user_id, master_key, self_signing_key,
user_id,
master_key,
self_signing_key,
)
device_ids = device_ids + cross_signing_device_ids


@ -62,7 +62,8 @@ class DeviceMessageHandler:
)
else:
hs.get_federation_registry().register_instances_for_edu(
"m.direct_to_device", hs.config.worker.writers.to_device,
"m.direct_to_device",
hs.config.worker.writers.to_device,
)

# The handler to call when we think a user's device list might be out of
@ -73,8 +74,8 @@ class DeviceMessageHandler:
hs.get_device_handler().device_list_updater.user_device_resync
)
else:
self._user_device_resync = ReplicationUserDevicesResyncRestServlet.make_client(
hs
self._user_device_resync = (
ReplicationUserDevicesResyncRestServlet.make_client(hs)
)

async def on_direct_to_device_edu(self, origin: str, content: JsonDict) -> None:
@@ -61,8 +61,8 @@ class E2eKeysHandler:

self._is_master = hs.config.worker_app is None
if not self._is_master:
self._user_device_resync_client = ReplicationUserDevicesResyncRestServlet.make_client(
hs
self._user_device_resync_client = (
ReplicationUserDevicesResyncRestServlet.make_client(hs)
)
else:
# Only register this edu handler on master as it requires writing
@@ -85,7 +85,7 @@ class E2eKeysHandler:
async def query_devices(
self, query_body: JsonDict, timeout: int, from_user_id: str
) -> JsonDict:
""" Handle a device key query from a client
"""Handle a device key query from a client

{
"device_keys": {
@@ -391,8 +391,7 @@ class E2eKeysHandler:
async def on_federation_query_client_keys(
self, query_body: Dict[str, Dict[str, Optional[List[str]]]]
) -> JsonDict:
""" Handle a device key query from a federated server
"""
"""Handle a device key query from a federated server"""
device_keys_query = query_body.get(
"device_keys", {}
) # type: Dict[str, Optional[List[str]]]
@@ -1065,7 +1064,9 @@ class E2eKeysHandler:
return key, key_id, verify_key

async def _retrieve_cross_signing_keys_for_remote_user(
self, user: UserID, desired_key_type: str,
self,
user: UserID,
desired_key_type: str,
) -> Tuple[Optional[dict], Optional[str], Optional[VerifyKey]]:
"""Queries cross-signing keys for a remote user and saves them to the database

@@ -1269,8 +1270,7 @@ def _one_time_keys_match(old_key_json: str, new_key: JsonDict) -> bool:

@attr.s(slots=True)
class SignatureListItem:
"""An item in the signature list as used by upload_signatures_for_device_keys.
"""
"""An item in the signature list as used by upload_signatures_for_device_keys."""

signing_key_id = attr.ib(type=str)
target_user_id = attr.ib(type=str)
@@ -1355,8 +1355,12 @@ class SigningKeyEduUpdater:
logger.info("pending updates: %r", pending_updates)

for master_key, self_signing_key in pending_updates:
new_device_ids = await device_list_updater.process_cross_signing_key_update(
user_id, master_key, self_signing_key,
new_device_ids = (
await device_list_updater.process_cross_signing_key_update(
user_id,
master_key,
self_signing_key,
)
)
device_ids = device_ids + new_device_ids
@@ -57,8 +57,7 @@ class EventStreamHandler(BaseHandler):
room_id: Optional[str] = None,
is_guest: bool = False,
) -> JsonDict:
"""Fetches the events stream for a given user.
"""
"""Fetches the events stream for a given user."""

if room_id:
blocked = await self.store.is_room_blocked(room_id)
@@ -111,13 +111,13 @@ class _NewEventInfo:

class FederationHandler(BaseHandler):
"""Handles events that originated from federation.
Responsible for:
a) handling received Pdus before handing them on as Events to the rest
of the homeserver (including auth and state conflict resolutions)
b) converting events that were produced by local clients that may need
to be sent to remote homeservers.
c) doing the necessary dances to invite remote users and join remote
rooms.
Responsible for:
a) handling received Pdus before handing them on as Events to the rest
of the homeserver (including auth and state conflict resolutions)
b) converting events that were produced by local clients that may need
to be sent to remote homeservers.
c) doing the necessary dances to invite remote users and join remote
rooms.
"""

def __init__(self, hs: "HomeServer"):
@@ -150,11 +150,11 @@ class FederationHandler(BaseHandler):
)

if hs.config.worker_app:
self._user_device_resync = ReplicationUserDevicesResyncRestServlet.make_client(
hs
self._user_device_resync = (
ReplicationUserDevicesResyncRestServlet.make_client(hs)
)
self._maybe_store_room_on_outlier_membership = ReplicationStoreRoomOnOutlierMembershipRestServlet.make_client(
hs
self._maybe_store_room_on_outlier_membership = (
ReplicationStoreRoomOnOutlierMembershipRestServlet.make_client(hs)
)
else:
self._device_list_updater = hs.get_device_handler().device_list_updater
@@ -172,7 +172,7 @@ class FederationHandler(BaseHandler):
self._ephemeral_messages_enabled = hs.config.enable_ephemeral_messages

async def on_receive_pdu(self, origin, pdu, sent_to_us_directly=False) -> None:
""" Process a PDU received via a federation /send/ transaction, or
"""Process a PDU received via a federation /send/ transaction, or
via backfill of missing prev_events

Args:
@@ -368,7 +368,8 @@ class FederationHandler(BaseHandler):
# know about
for p in prevs - seen:
logger.info(
"Requesting state at missing prev_event %s", event_id,
"Requesting state at missing prev_event %s",
event_id,
)

with nested_logging_context(p):
@@ -388,12 +389,14 @@ class FederationHandler(BaseHandler):
event_map[x.event_id] = x

room_version = await self.store.get_room_version_id(room_id)
state_map = await self._state_resolution_handler.resolve_events_with_store(
room_id,
room_version,
state_maps,
event_map,
state_res_store=StateResolutionStore(self.store),
state_map = (
await self._state_resolution_handler.resolve_events_with_store(
room_id,
room_version,
state_maps,
event_map,
state_res_store=StateResolutionStore(self.store),
)
)

# We need to give _process_received_pdu the actual state events
@@ -687,9 +690,12 @@ class FederationHandler(BaseHandler):
return fetched_events

async def _process_received_pdu(
self, origin: str, event: EventBase, state: Optional[Iterable[EventBase]],
self,
origin: str,
event: EventBase,
state: Optional[Iterable[EventBase]],
):
""" Called when we have a new pdu. We need to do auth checks and put it
"""Called when we have a new pdu. We need to do auth checks and put it
through the StateHandler.

Args:
@@ -801,7 +807,7 @@ class FederationHandler(BaseHandler):

@log_function
async def backfill(self, dest, room_id, limit, extremities):
""" Trigger a backfill request to `dest` for the given `room_id`
"""Trigger a backfill request to `dest` for the given `room_id`

This will attempt to get more events from the remote. If the other side
has no new events to offer, this will return an empty list.
@@ -1204,11 +1210,16 @@ class FederationHandler(BaseHandler):
with nested_logging_context(event_id):
try:
event = await self.federation_client.get_pdu(
[destination], event_id, room_version, outlier=True,
[destination],
event_id,
room_version,
outlier=True,
)
if event is None:
logger.warning(
"Server %s didn't return event %s", destination, event_id,
"Server %s didn't return event %s",
destination,
event_id,
)
return

@@ -1235,7 +1246,8 @@ class FederationHandler(BaseHandler):
if aid not in event_map
]
persisted_events = await self.store.get_events(
auth_events, allow_rejected=True,
auth_events,
allow_rejected=True,
)

event_infos = []
@@ -1251,7 +1263,9 @@ class FederationHandler(BaseHandler):
event_infos.append(_NewEventInfo(event, None, auth))

await self._handle_new_events(
destination, room_id, event_infos,
destination,
room_id,
event_infos,
)

def _sanity_check_event(self, ev):
@@ -1287,7 +1301,7 @@ class FederationHandler(BaseHandler):
raise SynapseError(HTTPStatus.BAD_REQUEST, "Too many auth_events")

async def send_invite(self, target_host, event):
""" Sends the invite to the remote server for signing.
"""Sends the invite to the remote server for signing.

Invites must be signed by the invitee's server before distribution.
"""
@@ -1310,7 +1324,7 @@ class FederationHandler(BaseHandler):
async def do_invite_join(
self, target_hosts: Iterable[str], room_id: str, joinee: str, content: JsonDict
) -> Tuple[str, int]:
""" Attempts to join the `joinee` to the room `room_id` via the
"""Attempts to join the `joinee` to the room `room_id` via the
servers contained in `target_hosts`.

This first triggers a /make_join/ request that returns a partial
@@ -1354,8 +1368,6 @@ class FederationHandler(BaseHandler):

await self._clean_room_for_join(room_id)

handled_events = set()

try:
# Try the host we successfully got a response to /make_join/
# request first.
@@ -1375,10 +1387,6 @@ class FederationHandler(BaseHandler):
auth_chain = ret["auth_chain"]
auth_chain.sort(key=lambda e: e.depth)

handled_events.update([s.event_id for s in state])
handled_events.update([a.event_id for a in auth_chain])
handled_events.add(event.event_id)

logger.debug("do_invite_join auth_chain: %s", auth_chain)
logger.debug("do_invite_join state: %s", state)

@@ -1394,7 +1402,8 @@ class FederationHandler(BaseHandler):
# so we can rely on it now.
#
await self.store.upsert_room_on_join(
room_id=room_id, room_version=room_version_obj,
room_id=room_id,
room_version=room_version_obj,
)

max_stream_id = await self._persist_auth_tree(
@@ -1464,7 +1473,7 @@ class FederationHandler(BaseHandler):
async def on_make_join_request(
self, origin: str, room_id: str, user_id: str
) -> EventBase:
""" We've received a /make_join/ request, so we create a partial
"""We've received a /make_join/ request, so we create a partial
join event for the room and return that. We do *not* persist or
process it until the other server has signed it and sent it back.

@@ -1489,7 +1498,8 @@ class FederationHandler(BaseHandler):
is_in_room = await self.auth.check_host_in_room(room_id, self.server_name)
if not is_in_room:
logger.info(
"Got /make_join request for room %s we are no longer in", room_id,
"Got /make_join request for room %s we are no longer in",
room_id,
)
raise NotFoundError("Not an active room on this server")

@@ -1523,7 +1533,7 @@ class FederationHandler(BaseHandler):
return event

async def on_send_join_request(self, origin, pdu):
""" We have received a join event for a room. Fully process it and
"""We have received a join event for a room. Fully process it and
respond with the current state and auth chains.
"""
event = pdu
@@ -1579,7 +1589,7 @@ class FederationHandler(BaseHandler):
async def on_invite_request(
self, origin: str, event: EventBase, room_version: RoomVersion
):
""" We've got an invite event. Process and persist it. Sign it.
"""We've got an invite event. Process and persist it. Sign it.

Respond with the now signed event.
"""
@@ -1706,7 +1716,7 @@ class FederationHandler(BaseHandler):
async def on_make_leave_request(
self, origin: str, room_id: str, user_id: str
) -> EventBase:
""" We've received a /make_leave/ request, so we create a partial
"""We've received a /make_leave/ request, so we create a partial
leave event for the room and return that. We do *not* persist or
process it until the other server has signed it and sent it back.

@@ -1782,8 +1792,7 @@ class FederationHandler(BaseHandler):
return None

async def get_state_for_pdu(self, room_id: str, event_id: str) -> List[EventBase]:
"""Returns the state at the event. i.e. not including said event.
"""
"""Returns the state at the event. i.e. not including said event."""

event = await self.store.get_event(event_id, check_room_id=room_id)

@@ -1809,8 +1818,7 @@ class FederationHandler(BaseHandler):
return []

async def get_state_ids_for_pdu(self, room_id: str, event_id: str) -> List[str]:
"""Returns the state at the event. i.e. not including said event.
"""
"""Returns the state at the event. i.e. not including said event."""
event = await self.store.get_event(event_id, check_room_id=room_id)

state_groups = await self.state_store.get_state_groups_ids(room_id, [event_id])
@@ -2016,7 +2024,11 @@ class FederationHandler(BaseHandler):

for e_id in missing_auth_events:
m_ev = await self.federation_client.get_pdu(
[origin], e_id, room_version=room_version, outlier=True, timeout=10000,
[origin],
e_id,
room_version=room_version,
outlier=True,
timeout=10000,
)
if m_ev and m_ev.event_id == e_id:
event_map[e_id] = m_ev
@@ -2166,7 +2178,9 @@ class FederationHandler(BaseHandler):
)

logger.debug(
"Doing soft-fail check for %s: state %s", event.event_id, current_state_ids,
"Doing soft-fail check for %s: state %s",
event.event_id,
current_state_ids,
)

# Now check if event pass auth against said current state
@@ -2519,7 +2533,7 @@ class FederationHandler(BaseHandler):
async def construct_auth_difference(
self, local_auth: Iterable[EventBase], remote_auth: Iterable[EventBase]
) -> Dict:
""" Given a local and remote auth chain, find the differences. This
"""Given a local and remote auth chain, find the differences. This
assumes that we have already processed all events in remote_auth

Params:
@@ -146,8 +146,7 @@ class GroupsLocalWorkerHandler:
async def get_users_in_group(
self, group_id: str, requester_user_id: str
) -> JsonDict:
"""Get users in a group
"""
"""Get users in a group"""
if self.is_mine_id(group_id):
return await self.groups_server_handler.get_users_in_group(
group_id, requester_user_id
@@ -283,8 +282,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
async def create_group(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
"""Create a group
"""
"""Create a group"""

logger.info("Asking to create group with ID: %r", group_id)

@@ -314,8 +312,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
async def join_group(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
"""Request to join a group
"""
"""Request to join a group"""
if self.is_mine_id(group_id):
await self.groups_server_handler.join_group(group_id, user_id, content)
local_attestation = None
@@ -361,8 +358,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
async def accept_invite(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
"""Accept an invite to a group
"""
"""Accept an invite to a group"""
if self.is_mine_id(group_id):
await self.groups_server_handler.accept_invite(group_id, user_id, content)
local_attestation = None
@@ -408,8 +404,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
async def invite(
self, group_id: str, user_id: str, requester_user_id: str, config: JsonDict
) -> JsonDict:
"""Invite a user to a group
"""
"""Invite a user to a group"""
content = {"requester_user_id": requester_user_id, "config": config}
if self.is_mine_id(group_id):
res = await self.groups_server_handler.invite_to_group(
@@ -434,8 +429,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
async def on_invite(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
"""One of our users were invited to a group
"""
"""One of our users were invited to a group"""
# TODO: Support auto join and rejection

if not self.is_mine_id(user_id):
@@ -466,8 +460,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
async def remove_user_from_group(
self, group_id: str, user_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
"""Remove a user from a group
"""
"""Remove a user from a group"""
if user_id == requester_user_id:
token = await self.store.register_user_group_membership(
group_id, user_id, membership="leave"
@@ -501,8 +494,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
async def user_removed_from_group(
self, group_id: str, user_id: str, content: JsonDict
) -> None:
"""One of our users was removed/kicked from a group
"""
"""One of our users was removed/kicked from a group"""
# TODO: Check if user in group
token = await self.store.register_user_group_membership(
group_id, user_id, membership="leave"
@@ -72,7 +72,10 @@ class IdentityHandler(BaseHandler):
)

def ratelimit_request_token_requests(
self, request: SynapseRequest, medium: str, address: str,
self,
request: SynapseRequest,
medium: str,
address: str,
):
"""Used to ratelimit requests to `/requestToken` by IP and address.

@@ -124,7 +124,8 @@ class InitialSyncHandler(BaseHandler):

joined_rooms = [r.room_id for r in room_list if r.membership == Membership.JOIN]
receipt = await self.store.get_linearized_receipts_for_rooms(
joined_rooms, to_key=int(now_token.receipt_key),
joined_rooms,
to_key=int(now_token.receipt_key),
)

tags_by_room = await self.store.get_tags_for_user(user_id)
@@ -169,7 +170,10 @@ class InitialSyncHandler(BaseHandler):
self.state_handler.get_current_state, event.room_id
)
elif event.membership == Membership.LEAVE:
room_end_token = RoomStreamToken(None, event.stream_ordering,)
room_end_token = RoomStreamToken(
None,
event.stream_ordering,
)
deferred_room_state = run_in_background(
self.state_store.get_state_for_events, [event.event_id]
)
@@ -284,7 +288,9 @@ class InitialSyncHandler(BaseHandler):
membership,
member_event_id,
) = await self.auth.check_user_in_room_or_world_readable(
room_id, user_id, allow_departed_users=True,
room_id,
user_id,
allow_departed_users=True,
)
is_peeking = member_event_id is None

@@ -65,8 +65,7 @@ logger = logging.getLogger(__name__)


class MessageHandler:
"""Contains some read only APIs to get state about a room
"""
"""Contains some read only APIs to get state about a room"""

def __init__(self, hs):
self.auth = hs.get_auth()
@@ -88,9 +87,13 @@ class MessageHandler:
)

async def get_room_data(
self, user_id: str, room_id: str, event_type: str, state_key: str,
self,
user_id: str,
room_id: str,
event_type: str,
state_key: str,
) -> dict:
""" Get data from a room.
"""Get data from a room.

Args:
user_id
@@ -174,7 +177,10 @@ class MessageHandler:
raise NotFoundError("Can't find event for token %s" % (at_token,))

visible_events = await filter_events_for_client(
self.storage, user_id, last_events, filter_send_to_client=False,
self.storage,
user_id,
last_events,
filter_send_to_client=False,
)

event = last_events[0]
@@ -573,7 +579,7 @@ class EventCreationHandler:
async def _is_exempt_from_privacy_policy(
self, builder: EventBuilder, requester: Requester
) -> bool:
""""Determine if an event to be sent is exempt from having to consent
""" "Determine if an event to be sent is exempt from having to consent
to the privacy policy

Args:
@@ -795,9 +801,10 @@ class EventCreationHandler:
"""

if prev_event_ids is not None:
assert len(prev_event_ids) <= 10, (
"Attempting to create an event with %i prev_events"
% (len(prev_event_ids),)
assert (
len(prev_event_ids) <= 10
), "Attempting to create an event with %i prev_events" % (
len(prev_event_ids),
)
else:
prev_event_ids = await self.store.get_prev_events_for_room(builder.room_id)
@@ -823,7 +830,8 @@ class EventCreationHandler:
)
if not third_party_result:
logger.info(
"Event %s forbidden by third-party rules", event,
"Event %s forbidden by third-party rules",
event,
)
raise SynapseError(
403, "This event is not allowed in this context", Codes.FORBIDDEN
@@ -1319,7 +1327,11 @@ class EventCreationHandler:
# Since this is a dummy-event it is OK if it is sent by a
# shadow-banned user.
await self.handle_new_client_event(
requester, event, context, ratelimit=False, ignore_shadow_ban=True,
requester,
event,
context,
ratelimit=False,
ignore_shadow_ban=True,
)
return True
except AuthError:
@@ -41,13 +41,33 @@ from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall

if TYPE_CHECKING:
from synapse.server import HomeServer

logger = logging.getLogger(__name__)

SESSION_COOKIE_NAME = b"oidc_session"
# we want the cookie to be returned to us even when the request is the POSTed
# result of a form on another domain, as is used with `response_mode=form_post`.
#
# Modern browsers will not do so unless we set SameSite=None; however *older*
# browsers (including all versions of Safari on iOS 12?) don't support
# SameSite=None, and interpret it as SameSite=Strict:
# https://bugs.webkit.org/show_bug.cgi?id=198181
#
# As a rather painful workaround, we set *two* cookies, one with SameSite=None
# and one with no SameSite, in the hope that at least one of them will get
# back to us.
#
# Secure is necessary for SameSite=None (and, empirically, also breaks things
# on iOS 12.)
#
# Here we have the names of the cookies, and the options we use to set them.
_SESSION_COOKIES = [
(b"oidc_session", b"Path=/_synapse/client/oidc; HttpOnly; Secure; SameSite=None"),
(b"oidc_session_no_samesite", b"Path=/_synapse/client/oidc; HttpOnly"),
]

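A minimal sketch of how the two cookies above get emitted, building the Set-Cookie headers by hand because Twisted's Request.addCookie has no SameSite parameter (per the twistedmatrix.com ticket cited later in this diff); the helper name is made up and the value is assumed to be a serialized session macaroon:

def set_oidc_session_cookies(request, value: bytes) -> None:
    # One Set-Cookie header per variant; a browser that mishandles
    # SameSite=None should still send back the no-SameSite cookie.
    for cookie_name, options in _SESSION_COOKIES:
        request.cookies.append(
            b"%s=%s; Max-Age=3600; %s" % (cookie_name, value, options)
        )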
#: A token exchanged from the token endpoint, as per RFC6749 sec 5.1. and
#: OpenID.Core sec 3.1.3.3.
@@ -72,8 +92,7 @@ JWKS = TypedDict("JWKS", {"keys": List[JWK]})


class OidcHandler:
"""Handles requests related to the OpenID Connect login flow.
"""
"""Handles requests related to the OpenID Connect login flow."""

def __init__(self, hs: "HomeServer"):
self._sso_handler = hs.get_sso_handler()
@@ -123,7 +142,6 @@ class OidcHandler:
Args:
request: the incoming request from the browser.
"""

# The provider might redirect with an error.
# In that case, just display it as-is.
if b"error" in request.args:
@@ -137,8 +155,12 @@ class OidcHandler:
# either the provider misbehaving or Synapse being misconfigured.
# The only exception of that is "access_denied", where the user
# probably cancelled the login flow. In other cases, log those errors.
if error != "access_denied":
logger.error("Error from the OIDC provider: %s %s", error, description)
logger.log(
logging.INFO if error == "access_denied" else logging.ERROR,
"Received OIDC callback with error: %s %s",
error,
description,
)

self._sso_handler.render_error(request, error, description)
return
@@ -146,30 +168,37 @@ class OidcHandler:
# otherwise, it is presumably a successful response. see:
# https://tools.ietf.org/html/rfc6749#section-4.1.2

# Fetch the session cookie
session = request.getCookie(SESSION_COOKIE_NAME)  # type: Optional[bytes]
if session is None:
logger.info("No session cookie found")
# Fetch the session cookie. See the comments on SESSION_COOKIES for why there
# are two.

for cookie_name, _ in _SESSION_COOKIES:
session = request.getCookie(cookie_name)  # type: Optional[bytes]
if session is not None:
break
else:
logger.info("Received OIDC callback, with no session cookie")
self._sso_handler.render_error(
request, "missing_session", "No session cookie found"
)
return

# Remove the cookie. There is a good chance that if the callback failed
# Remove the cookies. There is a good chance that if the callback failed
# once, it will fail next time and the code will already be exchanged.
# Removing it early avoids spamming the provider with token requests.
request.addCookie(
SESSION_COOKIE_NAME,
b"",
path="/_synapse/oidc",
expires="Thu, Jan 01 1970 00:00:00 UTC",
httpOnly=True,
sameSite="lax",
)
# Removing the cookies early avoids spamming the provider with token requests.
#
# we have to build the header by hand rather than calling request.addCookie
# because the latter does not support SameSite=None
# (https://twistedmatrix.com/trac/ticket/10088)

for cookie_name, options in _SESSION_COOKIES:
request.cookies.append(
b"%s=; Expires=Thu, Jan 01 1970 00:00:00 UTC; %s"
% (cookie_name, options)
)

# Check for the state query parameter
if b"state" not in request.args:
logger.info("State parameter is missing")
logger.info("Received OIDC callback, with no state parameter")
self._sso_handler.render_error(
request, "invalid_request", "State parameter is missing"
)
@@ -183,14 +212,16 @@ class OidcHandler:
session, state
)
except (MacaroonDeserializationException, ValueError) as e:
logger.exception("Invalid session")
logger.exception("Invalid session for OIDC callback")
self._sso_handler.render_error(request, "invalid_session", str(e))
return
except MacaroonInvalidSignatureException as e:
logger.exception("Could not verify session")
logger.exception("Could not verify session for OIDC callback")
self._sso_handler.render_error(request, "mismatching_session", str(e))
return

logger.info("Received OIDC callback for IdP %s", session_data.idp_id)

oidc_provider = self._providers.get(session_data.idp_id)
if not oidc_provider:
logger.error("OIDC session uses unknown IdP %r", oidc_provider)
@@ -210,8 +241,7 @@ class OidcHandler:


class OidcError(Exception):
"""Used to catch errors when calling the token_endpoint
"""
"""Used to catch errors when calling the token_endpoint"""

def __init__(self, error, error_description=None):
self.error = error
@@ -240,22 +270,27 @@ class OidcProvider:

self._token_generator = token_generator

self._config = provider
self._callback_url = hs.config.oidc_callback_url  # type: str

self._scopes = provider.scopes
self._user_profile_method = provider.user_profile_method
self._client_auth = ClientAuth(
provider.client_id, provider.client_secret, provider.client_auth_method,
provider.client_id,
provider.client_secret,
provider.client_auth_method,
)  # type: ClientAuth
self._client_auth_method = provider.client_auth_method
self._provider_metadata = OpenIDProviderMetadata(
issuer=provider.issuer,
authorization_endpoint=provider.authorization_endpoint,
token_endpoint=provider.token_endpoint,
userinfo_endpoint=provider.userinfo_endpoint,
jwks_uri=provider.jwks_uri,
)  # type: OpenIDProviderMetadata
self._provider_needs_discovery = provider.discover

# cache of metadata for the identity provider (endpoint uris, mostly). This is
# loaded on-demand from the discovery endpoint (if discovery is enabled), with
# possible overrides from the config. Access via `load_metadata`.
self._provider_metadata = RetryOnExceptionCachedCall(self._load_metadata)

# cache of JWKs used by the identity provider to sign tokens. Loaded on demand
# from the IdP's jwks_uri, if required.
self._jwks = RetryOnExceptionCachedCall(self._load_jwks)

self._user_mapping_provider = provider.user_mapping_provider_class(
provider.user_mapping_provider_config
)
@@ -281,7 +316,7 @@ class OidcProvider:

self._sso_handler.register_identity_provider(self)

def _validate_metadata(self):
def _validate_metadata(self, m: OpenIDProviderMetadata) -> None:
"""Verifies the provider metadata.

This checks the validity of the currently loaded provider. Not
@@ -300,7 +335,6 @@ class OidcProvider:
if self._skip_verification is True:
return

m = self._provider_metadata
m.validate_issuer()
m.validate_authorization_endpoint()
m.validate_token_endpoint()
@@ -335,11 +369,7 @@ class OidcProvider:
)
else:
# If we're not using userinfo, we need a valid jwks to validate the ID token
if m.get("jwks") is None:
if m.get("jwks_uri") is not None:
m.validate_jwks_uri()
else:
raise ValueError('"jwks_uri" must be set')
m.validate_jwks_uri()

@property
def _uses_userinfo(self) -> bool:
@@ -356,11 +386,15 @@ class OidcProvider:
or self._user_profile_method == "userinfo_endpoint"
)

async def load_metadata(self) -> OpenIDProviderMetadata:
"""Load and validate the provider metadata.
async def load_metadata(self, force: bool = False) -> OpenIDProviderMetadata:
"""Return the provider metadata.

The values metadatas are discovered if ``oidc_config.discovery`` is
``True`` and then cached.
If this is the first call, the metadata is built from the config and from the
metadata discovery endpoint (if enabled), and then validated. If the metadata
is successfully validated, it is then cached for future use.

Args:
force: If true, any cached metadata is discarded to force a reload.

Raises:
ValueError: if something in the provider is not valid
@@ -368,18 +402,41 @@ class OidcProvider:
Returns:
The provider's metadata.
"""
# If we are using the OpenID Discovery documents, it needs to be loaded once
# FIXME: should there be a lock here?
if self._provider_needs_discovery:
url = get_well_known_url(self._provider_metadata["issuer"], external=True)
if force:
# reset the cached call to ensure we get a new result
self._provider_metadata = RetryOnExceptionCachedCall(self._load_metadata)

return await self._provider_metadata.get()

async def _load_metadata(self) -> OpenIDProviderMetadata:
# start out with just the issuer (unlike the other settings, discovered issuer
# takes precedence over configured issuer, because configured issuer is
# required for discovery to take place.)
#
metadata = OpenIDProviderMetadata(issuer=self._config.issuer)

# load any data from the discovery endpoint, if enabled
if self._config.discover:
url = get_well_known_url(self._config.issuer, external=True)
metadata_response = await self._http_client.get_json(url)
# TODO: maybe update the other way around to let user override some values?
self._provider_metadata.update(metadata_response)
self._provider_needs_discovery = False
metadata.update(metadata_response)

self._validate_metadata()
# override any discovered data with any settings in our config
if self._config.authorization_endpoint:
metadata["authorization_endpoint"] = self._config.authorization_endpoint

return self._provider_metadata
if self._config.token_endpoint:
metadata["token_endpoint"] = self._config.token_endpoint

if self._config.userinfo_endpoint:
metadata["userinfo_endpoint"] = self._config.userinfo_endpoint

if self._config.jwks_uri:
metadata["jwks_uri"] = self._config.jwks_uri

self._validate_metadata(metadata)

return metadata

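The cache-with-forced-reload idiom used by load_metadata (and load_jwks below) can be read as the following sketch. This is a simplified stand-in for synapse.util.caches.cached_call.RetryOnExceptionCachedCall, assuming only that it memoises the first successful result and retries after a failure:

class RetryOnExceptionCachedCall:
    def __init__(self, f):
        self._f = f
        self._result = None
        self._has_result = False

    async def get(self):
        if not self._has_result:
            # An exception here leaves the cache empty, so the
            # next get() call retries the underlying coroutine.
            self._result = await self._f()
            self._has_result = True
        return self._result

Passing force=True to the loaders then simply swaps in a fresh wrapper, discarding whatever was cached.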
async def load_jwks(self, force: bool = False) -> JWKS:
"""Load the JSON Web Key Set used to sign ID tokens.
@@ -409,27 +466,27 @@ class OidcProvider:
]
}
"""
if force:
# reset the cached call to ensure we get a new result
self._jwks = RetryOnExceptionCachedCall(self._load_jwks)
return await self._jwks.get()

async def _load_jwks(self) -> JWKS:
if self._uses_userinfo:
# We're not using jwt signing, return an empty jwk set
return {"keys": []}

# First check if the JWKS are loaded in the provider metadata.
# It can happen either if the provider gives its JWKS in the discovery
# document directly or if it was already loaded once.
metadata = await self.load_metadata()
jwk_set = metadata.get("jwks")
if jwk_set is not None and not force:
return jwk_set

# Loading the JWKS using the `jwks_uri` metadata
# Load the JWKS using the `jwks_uri` metadata.
uri = metadata.get("jwks_uri")
if not uri:
# this should be unreachable: load_metadata validates that
# there is a jwks_uri in the metadata if _uses_userinfo is unset
raise RuntimeError('Missing "jwks_uri" in metadata')

jwk_set = await self._http_client.get_json(uri)

# Caching the JWKS in the provider's metadata
self._provider_metadata["jwks"] = jwk_set
return jwk_set

async def _exchange_code(self, code: str) -> Token:
@@ -487,7 +544,10 @@ class OidcProvider:
# We're not using the SimpleHttpClient util methods as we don't want to
# check the HTTP status code and we do the body encoding ourself.
response = await self._http_client.request(
method="POST", uri=uri, data=body.encode("utf-8"), headers=headers,
method="POST",
uri=uri,
data=body.encode("utf-8"),
headers=headers,
)

# This is used in multiple error messages below
@@ -565,6 +625,7 @@ class OidcProvider:
Returns:
UserInfo: an object representing the user.
"""
logger.debug("Using the OAuth2 access_token to request userinfo")
metadata = await self.load_metadata()

resp = await self._http_client.get_json(
@@ -572,6 +633,8 @@ class OidcProvider:
headers={"Authorization": ["Bearer {}".format(token["access_token"])]},
)

logger.debug("Retrieved user info from userinfo endpoint: %r", resp)

return UserInfo(resp)

async def _parse_id_token(self, token: Token, nonce: str) -> UserInfo:
@@ -600,17 +663,19 @@ class OidcProvider:
claims_cls = ImplicitIDToken

alg_values = metadata.get("id_token_signing_alg_values_supported", ["RS256"])

jwt = JsonWebToken(alg_values)

claim_options = {"iss": {"values": [metadata["issuer"]]}}

id_token = token["id_token"]
logger.debug("Attempting to decode JWT id_token %r", id_token)

# Try to decode the keys in cache first, then retry by forcing the keys
# to be reloaded
jwk_set = await self.load_jwks()
try:
claims = jwt.decode(
token["id_token"],
id_token,
key=jwk_set,
claims_cls=claims_cls,
claims_options=claim_options,
@@ -620,13 +685,15 @@ class OidcProvider:
logger.info("Reloading JWKS after decode error")
jwk_set = await self.load_jwks(force=True)  # try reloading the jwks
claims = jwt.decode(
token["id_token"],
id_token,
key=jwk_set,
claims_cls=claims_cls,
claims_options=claim_options,
claims_params=claims_params,
)

logger.debug("Decoded id_token JWT %r; validating", claims)

claims.validate(leeway=120)  # allows 2 min of clock skew
return UserInfo(claims)

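The decode path above follows a try-once-then-refresh-keys pattern; roughly (a sketch, assuming authlib's JsonWebToken.decode raises when no cached key matches the token, e.g. after the IdP rotates its signing keys):

jwk_set = await self.load_jwks()
try:
    claims = jwt.decode(id_token, key=jwk_set, claims_cls=claims_cls)
except ValueError:
    # The cached key set may be stale: refetch from jwks_uri and retry once.
    jwk_set = await self.load_jwks(force=True)
    claims = jwt.decode(id_token, key=jwk_set, claims_cls=claims_cls)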
@@ -681,14 +748,18 @@ class OidcProvider:
ui_auth_session_id=ui_auth_session_id,
),
)
request.addCookie(
SESSION_COOKIE_NAME,
cookie,
path="/_synapse/client/oidc",
max_age="3600",
httpOnly=True,
sameSite="lax",
)

# Set the cookies. See the comments on _SESSION_COOKIES for why there are two.
#
# we have to build the header by hand rather than calling request.addCookie
# because the latter does not support SameSite=None
# (https://twistedmatrix.com/trac/ticket/10088)

for cookie_name, options in _SESSION_COOKIES:
request.cookies.append(
b"%s=%s; Max-Age=3600; %s"
% (cookie_name, cookie.encode("utf-8"), options)
)

metadata = await self.load_metadata()
authorization_endpoint = metadata.get("authorization_endpoint")
@@ -726,19 +797,18 @@ class OidcProvider:
"""
# Exchange the code with the provider
try:
logger.debug("Exchanging code")
logger.debug("Exchanging OAuth2 code for a token")
token = await self._exchange_code(code)
except OidcError as e:
logger.exception("Could not exchange code")
logger.exception("Could not exchange OAuth2 code")
self._sso_handler.render_error(request, e.error, e.error_description)
return

logger.debug("Successfully obtained OAuth2 access token")
logger.debug("Successfully obtained OAuth2 token data: %r", token)

# Now that we have a token, get the userinfo, either by decoding the
# `id_token` or by fetching the `userinfo_endpoint`.
if self._uses_userinfo:
logger.debug("Fetching userinfo")
try:
userinfo = await self._fetch_userinfo(token)
except Exception as e:
@@ -746,7 +816,6 @@ class OidcProvider:
self._sso_handler.render_error(request, "fetch_error", str(e))
return
else:
logger.debug("Extracting userinfo from id_token")
try:
userinfo = await self._parse_id_token(token, nonce=session_data.nonce)
except Exception as e:
@@ -939,7 +1008,9 @@ class OidcSessionTokenGenerator:
A signed macaroon token with the session information.
"""
macaroon = pymacaroons.Macaroon(
location=self._server_name, identifier="key", key=self._macaroon_secret_key,
location=self._server_name,
identifier="key",
key=self._macaroon_secret_key,
)
macaroon.add_first_party_caveat("gen = 1")
macaroon.add_first_party_caveat("type = session")
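For reference, the macaroon built here carries first-party caveats that a verifier later checks against the same secret key; a self-contained sketch with pymacaroons, using placeholder values for the server name and secret:

import pymacaroons

macaroon = pymacaroons.Macaroon(
    location="server.example.com",  # placeholder for self._server_name
    identifier="key",
    key="not-the-real-secret",      # placeholder for the macaroon secret key
)
macaroon.add_first_party_caveat("gen = 1")
macaroon.add_first_party_caveat("type = session")
token = macaroon.serialize()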
@@ -197,7 +197,8 @@ class PaginationHandler:
stream_ordering = await self.store.find_first_stream_ordering_after_ts(ts)

r = await self.store.get_room_event_before_stream_ordering(
room_id, stream_ordering,
room_id,
stream_ordering,
)
if not r:
logger.warning(
@@ -223,7 +224,12 @@ class PaginationHandler:
# the background so that it's not blocking any other operation apart from
# other purges in the same room.
run_as_background_process(
"_purge_history", self._purge_history, purge_id, room_id, token, True,
"_purge_history",
self._purge_history,
purge_id,
room_id,
token,
True,
)

def start_purge_history(
@@ -389,7 +395,9 @@ class PaginationHandler:
)

await self.hs.get_federation_handler().maybe_backfill(
room_id, curr_topo, limit=pagin_config.limit,
room_id,
curr_topo,
limit=pagin_config.limit,
)

to_room_key = None
@@ -349,10 +349,13 @@ class PresenceHandler(BasePresenceHandler):
[self.user_to_current_state[user_id] for user_id in unpersisted]
)

async def _update_states(self, new_states):
async def _update_states(self, new_states: Iterable[UserPresenceState]) -> None:
"""Updates presence of users. Sets the appropriate timeouts. Pokes
the notifier and federation if and only if the changed presence state
should be sent to clients/servers.

Args:
new_states: The new user presence state updates to process.
"""
now = self.clock.time_msec()

@@ -368,7 +371,7 @@ class PresenceHandler(BasePresenceHandler):
new_states_dict = {}
for new_state in new_states:
new_states_dict[new_state.user_id] = new_state
new_state = new_states_dict.values()
new_states = new_states_dict.values()

for new_state in new_states:
user_id = new_state.user_id
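Besides the type annotations, this hunk fixes a shadowing bug: the deduplicated values were previously assigned to new_state rather than new_states, so the loop that follows kept iterating the raw input. In isolation, the intended logic is:

# Collapse the batch to the most recent update per user.
new_states_dict = {}
for new_state in new_states:
    new_states_dict[new_state.user_id] = new_state
new_states = new_states_dict.values()  # previously: new_state = ..., a no-op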
@@ -635,8 +638,7 @@ class PresenceHandler(BasePresenceHandler):
self.external_process_last_updated_ms.pop(process_id, None)

async def current_state_for_user(self, user_id):
"""Get the current presence state for a user.
"""
"""Get the current presence state for a user."""
res = await self.current_state_for_users([user_id])
return res[user_id]

@@ -658,17 +660,6 @@ class PresenceHandler(BasePresenceHandler):

self._push_to_remotes(states)

async def notify_for_states(self, state, stream_id):
parties = await get_interested_parties(self.store, [state])
room_ids_to_states, users_to_states = parties

self.notifier.on_new_event(
"presence_key",
stream_id,
rooms=room_ids_to_states.keys(),
users=[UserID.from_string(u) for u in users_to_states],
)

def _push_to_remotes(self, states):
"""Sends state updates to remote servers.

@@ -678,8 +669,7 @@ class PresenceHandler(BasePresenceHandler):
self.federation.send_presence(states)

async def incoming_presence(self, origin, content):
"""Called when we receive a `m.presence` EDU from a remote server.
"""
"""Called when we receive a `m.presence` EDU from a remote server."""
if not self._presence_enabled:
return

@@ -729,8 +719,7 @@ class PresenceHandler(BasePresenceHandler):
await self._update_states(updates)

async def set_state(self, target_user, state, ignore_status_msg=False):
"""Set the presence state of the user.
"""
"""Set the presence state of the user."""
status_msg = state.get("status_msg", None)
presence = state["presence"]

@@ -758,8 +747,7 @@ class PresenceHandler(BasePresenceHandler):
await self._update_states([prev_state.copy_and_replace(**new_fields)])

async def is_visible(self, observed_user, observer_user):
"""Returns whether a user can see another user's presence.
"""
"""Returns whether a user can see another user's presence."""
observer_room_ids = await self.store.get_rooms_for_user(
observer_user.to_string()
)
@@ -953,8 +941,7 @@ class PresenceHandler(BasePresenceHandler):


def should_notify(old_state, new_state):
"""Decides if a presence state change should be sent to interested parties.
"""
"""Decides if a presence state change should be sent to interested parties."""
if old_state == new_state:
return False
@@ -207,7 +207,8 @@ class ProfileHandler(BaseHandler):
# This must be done by the target user himself.
if by_admin:
requester = create_requester(
target_user, authenticated_entity=requester.authenticated_entity,
target_user,
authenticated_entity=requester.authenticated_entity,
)

await self.store.set_profile_displayname(
@@ -49,15 +49,15 @@ class ReceiptsHandler(BaseHandler):
)
else:
hs.get_federation_registry().register_instances_for_edu(
"m.receipt", hs.config.worker.writers.receipts,
"m.receipt",
hs.config.worker.writers.receipts,
)

self.clock = self.hs.get_clock()
self.state = hs.get_state_handler()

async def _received_remote_receipt(self, origin: str, content: JsonDict) -> None:
"""Called when we receive an EDU of type m.receipt from a remote HS.
"""
"""Called when we receive an EDU of type m.receipt from a remote HS."""
receipts = []
for room_id, room_values in content.items():
for receipt_type, users in room_values.items():
@@ -83,8 +83,7 @@ class ReceiptsHandler(BaseHandler):
await self._handle_new_receipts(receipts)

async def _handle_new_receipts(self, receipts: List[ReadReceipt]) -> bool:
"""Takes a list of receipts, stores them and informs the notifier.
"""
"""Takes a list of receipts, stores them and informs the notifier."""
min_batch_id = None  # type: Optional[int]
max_batch_id = None  # type: Optional[int]
@@ -62,8 +62,8 @@ class RegistrationHandler(BaseHandler):
self._register_device_client = RegisterDeviceReplicationServlet.make_client(
hs
)
self._post_registration_client = ReplicationPostRegisterActionsServlet.make_client(
hs
self._post_registration_client = (
ReplicationPostRegisterActionsServlet.make_client(hs)
)
else:
self.device_handler = hs.get_device_handler()
@@ -194,12 +194,15 @@ class RegistrationHandler(BaseHandler):
self.check_registration_ratelimit(address)

result = await self.spam_checker.check_registration_for_spam(
threepid, localpart, user_agent_ips or [],
threepid,
localpart,
user_agent_ips or [],
)

if result == RegistrationBehaviour.DENY:
logger.info(
"Blocked registration of %r", localpart,
"Blocked registration of %r",
localpart,
)
# We return a 429 to make it not obvious that they've been
# denied.
@@ -208,7 +211,8 @@ class RegistrationHandler(BaseHandler):
shadow_banned = result == RegistrationBehaviour.SHADOW_BAN
if shadow_banned:
logger.info(
"Shadow banning registration of %r", localpart,
"Shadow banning registration of %r",
localpart,
)

# do not check_auth_blocking if the call is coming through the Admin API
@@ -376,7 +380,9 @@ class RegistrationHandler(BaseHandler):
config["room_alias_name"] = room_alias.localpart

info, _ = await room_creation_handler.create_room(
fake_requester, config=config, ratelimit=False,
fake_requester,
config=config,
ratelimit=False,
)

# If the room does not require an invite, but another user
@@ -760,7 +766,10 @@ class RegistrationHandler(BaseHandler):
return

await self._auth_handler.add_threepid(
user_id, threepid["medium"], threepid["address"], threepid["validated_at"],
user_id,
threepid["medium"],
threepid["address"],
threepid["validated_at"],
)

# And we add an email pusher for them by default, but only
@@ -812,5 +821,8 @@ class RegistrationHandler(BaseHandler):
raise

await self._auth_handler.add_threepid(
user_id, threepid["medium"], threepid["address"], threepid["validated_at"],
user_id,
threepid["medium"],
threepid["address"],
threepid["validated_at"],
)
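The spam-checker result above feeds a three-way dispatch; compressed into a sketch (the exact error raised for DENY is assumed here, but the surrounding comment confirms the intent of answering with a 429 rather than an explicit rejection):

result = await self.spam_checker.check_registration_for_spam(
    threepid, localpart, user_agent_ips or []
)
if result == RegistrationBehaviour.DENY:
    # Disguise the denial as ordinary rate limiting.
    raise SynapseError(429, "Rate limited")
shadow_banned = result == RegistrationBehaviour.SHADOW_BAN  # allow, but shadow-ban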
@@ -38,6 +38,7 @@ from synapse.api.filtering import Filter
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.events import EventBase
from synapse.events.utils import copy_power_levels_contents
from synapse.rest.admin._base import assert_user_is_admin
from synapse.storage.state import StateFilter
from synapse.types import (
JsonDict,
@@ -197,7 +198,9 @@ class RoomCreationHandler(BaseHandler):
if r is None:
raise NotFoundError("Unknown room id %s" % (old_room_id,))
new_room_id = await self._generate_room_id(
creator_id=user_id, is_public=r["is_public"], room_version=new_version,
creator_id=user_id,
is_public=r["is_public"],
room_version=new_version,
)

logger.info("Creating new room %s to replace %s", new_room_id, old_room_id)
@@ -235,7 +238,9 @@ class RoomCreationHandler(BaseHandler):

# now send the tombstone
await self.event_creation_handler.handle_new_client_event(
requester=requester, event=tombstone_event, context=tombstone_context,
requester=requester,
event=tombstone_event,
context=tombstone_context,
)

old_room_state = await tombstone_context.get_current_state_ids()
@@ -256,7 +261,10 @@ class RoomCreationHandler(BaseHandler):
# finally, shut down the PLs in the old room, and update them in the new
# room.
await self._update_upgraded_room_pls(
requester, old_room_id, new_room_id, old_room_state,
requester,
old_room_id,
new_room_id,
old_room_state,
)

return new_room_id
@@ -424,17 +432,20 @@ class RoomCreationHandler(BaseHandler):

# Copy over user power levels now as this will not be possible with >100PL users once
# the room has been created

# Calculate the minimum power level needed to clone the room
event_power_levels = power_levels.get("events", {})
state_default = power_levels.get("state_default", 0)
ban = power_levels.get("ban")
state_default = power_levels.get("state_default", 50)
ban = power_levels.get("ban", 50)
needed_power_level = max(state_default, ban, max(event_power_levels.values()))

# Get the user's current power level, this matches the logic in get_user_power_level,
# but without the entire state map.
user_power_levels = power_levels.setdefault("users", {})
users_default = power_levels.get("users_default", 0)
current_power_level = user_power_levels.get(user_id, users_default)
# Raise the requester's power level in the new room if necessary
current_power_level = power_levels["users"][user_id]
if current_power_level < needed_power_level:
power_levels["users"][user_id] = needed_power_level
user_power_levels[user_id] = needed_power_level

await self._send_events_for_new_room(
requester,
@@ -566,7 +577,7 @@ class RoomCreationHandler(BaseHandler):
ratelimit: bool = True,
creator_join_profile: Optional[JsonDict] = None,
) -> Tuple[dict, int]:
""" Creates a new room.
"""Creates a new room.

Args:
requester:
@@ -840,7 +851,7 @@ class RoomCreationHandler(BaseHandler):
if room_alias:
result["room_alias"] = room_alias.to_string()

# Always wait for room creation to progate before returning
# Always wait for room creation to propagate before returning
await self._replication.wait_for_stream_position(
self.hs.config.worker.events_shard_config.get_instance(room_id),
"events",
@@ -892,7 +903,10 @@ class RoomCreationHandler(BaseHandler):
_,
last_stream_id,
) = await self.event_creation_handler.create_and_send_nonmember_event(
creator, event, ratelimit=False, ignore_shadow_ban=True,
creator,
event,
ratelimit=False,
ignore_shadow_ban=True,
)
return last_stream_id

@@ -992,7 +1006,10 @@ class RoomCreationHandler(BaseHandler):
return last_sent_stream_id

async def _generate_room_id(
self, creator_id: str, is_public: bool, room_version: RoomVersion,
self,
creator_id: str,
is_public: bool,
room_version: RoomVersion,
):
# autogen room IDs and try to create it. We may clash, so just
# try a few times till one goes through, giving up eventually.
@@ -1016,41 +1033,51 @@ class RoomCreationHandler(BaseHandler):
class RoomContextHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state_store = self.storage.state

async def get_event_context(
self,
user: UserID,
requester: Requester,
room_id: str,
event_id: str,
limit: int,
event_filter: Optional[Filter],
use_admin_priviledge: bool = False,
) -> Optional[JsonDict]:
"""Retrieves events, pagination tokens and state around a given event
in a room.

Args:
user
requester
room_id
event_id
limit: The maximum number of events to return in total
(excluding state).
event_filter: the filter to apply to the events returned
(excluding the target event_id)

use_admin_priviledge: if `True`, return all events, regardless
of whether `user` has access to them. To be used **ONLY**
from the admin API.
Returns:
dict, or None if the event isn't found
"""
user = requester.user
if use_admin_priviledge:
await assert_user_is_admin(self.auth, requester.user)

before_limit = math.floor(limit / 2.0)
after_limit = limit - before_limit

users = await self.store.get_users_in_room(room_id)
is_peeking = user.to_string() not in users

def filter_evts(events):
return filter_events_for_client(
async def filter_evts(events):
if use_admin_priviledge:
return events
return await filter_events_for_client(
self.storage, user.to_string(), events, is_peeking=is_peeking
)
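With use_admin_priviledge set (sic, the spelling is the handler's own), the visibility filter becomes a pass-through, which is what lets the admin event-context API added in this release see otherwise-filtered events. A hypothetical call from the admin servlet, matching the new signature:

results = await room_context_handler.get_event_context(
    requester, room_id, event_id, limit=10, event_filter=None,
    use_admin_priviledge=True,
)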
|
||||
|
||||
|
@ -191,7 +191,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
        # do it up front for efficiency.)
        if txn_id and requester.access_token_id:
            existing_event_id = await self.store.get_event_id_from_transaction_id(
                room_id, requester.user.to_string(), requester.access_token_id, txn_id,
                room_id,
                requester.user.to_string(),
                requester.access_token_id,
                txn_id,
            )
            if existing_event_id:
                event_pos = await self.store.get_position_for_event(existing_event_id)
@ -238,7 +241,11 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
        )

        result_event = await self.event_creation_handler.handle_new_client_event(
            requester, event, context, extra_users=[target], ratelimit=ratelimit,
            requester,
            event,
            context,
            extra_users=[target],
            ratelimit=ratelimit,
        )

        if event.membership == Membership.LEAVE:
@ -583,7 +590,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
            # send the rejection to the inviter's HS (with fallback to
            # local event)
            return await self.remote_reject_invite(
                invite.event_id, txn_id, requester, content,
                invite.event_id,
                txn_id,
                requester,
                content,
            )

        # the inviter was on our server, but has now left. Carry on
@ -1056,8 +1066,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
        user: UserID,
        content: dict,
    ) -> Tuple[str, int]:
        """Implements RoomMemberHandler._remote_join
        """
        """Implements RoomMemberHandler._remote_join"""
        # filter ourselves out of remote_room_hosts: do_invite_join ignores it
        # and if it is the only entry we'd like to return a 404 rather than a
        # 500.
@ -1211,7 +1220,10 @@ class RoomMemberMasterHandler(RoomMemberHandler):
        event.internal_metadata.out_of_band_membership = True

        result_event = await self.event_creation_handler.handle_new_client_event(
            requester, event, context, extra_users=[UserID.from_string(target_user)],
            requester,
            event,
            context,
            extra_users=[UserID.from_string(target_user)],
        )
        # we know it was persisted, so must have a stream ordering
        assert result_event.internal_metadata.stream_ordering
@ -1219,8 +1231,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
        return result_event.event_id, result_event.internal_metadata.stream_ordering

    async def _user_left_room(self, target: UserID, room_id: str) -> None:
        """Implements RoomMemberHandler._user_left_room
        """
        """Implements RoomMemberHandler._user_left_room"""
        user_left_room(self.distributor, target, room_id)

    async def forget(self, user: UserID, room_id: str) -> None:
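The `get_event_id_from_transaction_id` lookup in the first hunk above is what makes client retries idempotent: when the same access token resends a membership request with the same `txn_id`, the previously persisted event is reused instead of a duplicate being created. A rough in-memory sketch of that lookup (Synapse's real version is a database table, and these helper names are illustrative):

from typing import Dict, Optional, Tuple

# (room_id, user_id, access_token_id, txn_id) -> event_id
_txn_map: Dict[Tuple[str, str, int, str], str] = {}


def get_event_id_from_transaction_id(
    room_id: str, user_id: str, access_token_id: int, txn_id: str
) -> Optional[str]:
    # Returns the previously persisted event for this transaction, if any.
    return _txn_map.get((room_id, user_id, access_token_id, txn_id))


def record_transaction(
    room_id: str, user_id: str, access_token_id: int, txn_id: str, event_id: str
) -> None:
    # Remember the event so a retried request can be answered idempotently.
    _txn_map[(room_id, user_id, access_token_id, txn_id)] = event_id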
@ -44,8 +44,7 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
        user: UserID,
        content: dict,
    ) -> Tuple[str, int]:
        """Implements RoomMemberHandler._remote_join
        """
        """Implements RoomMemberHandler._remote_join"""
        if len(remote_room_hosts) == 0:
            raise SynapseError(404, "No known servers")
@ -80,8 +79,7 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
        return ret["event_id"], ret["stream_id"]

    async def _user_left_room(self, target: UserID, room_id: str) -> None:
        """Implements RoomMemberHandler._user_left_room
        """
        """Implements RoomMemberHandler._user_left_room"""
        await self._notify_change_client(
            user_id=target.to_string(), room_id=room_id, change="left"
        )
@ -23,7 +23,6 @@ from saml2.client import Saml2Client

from synapse.api.errors import SynapseError
from synapse.config import ConfigError
from synapse.config.saml2_config import SamlAttributeRequirement
from synapse.handlers._base import BaseHandler
from synapse.handlers.sso import MappingException, UserAttributes
from synapse.http.servlet import parse_string
@ -122,7 +121,8 @@ class SamlHandler(BaseHandler):

        now = self.clock.time_msec()
        self._outstanding_requests_dict[reqid] = Saml2SessionData(
            creation_time=now, ui_auth_session_id=ui_auth_session_id,
            creation_time=now,
            ui_auth_session_id=ui_auth_session_id,
        )

        for key, value in info["headers"]:
@ -239,12 +239,10 @@ class SamlHandler(BaseHandler):

        # Ensure that the attributes of the logged in user meet the required
        # attributes.
        for requirement in self._saml2_attribute_requirements:
            if not _check_attribute_requirement(saml2_auth.ava, requirement):
                self._sso_handler.render_error(
                    request, "unauthorised", "You are not authorised to log in here."
                )
                return
        if not self._sso_handler.check_required_attributes(
            request, saml2_auth.ava, self._saml2_attribute_requirements
        ):
            return

        # Call the mapper to register/login the user
        try:
@ -373,21 +371,6 @@ class SamlHandler(BaseHandler):
            del self._outstanding_requests_dict[reqid]


def _check_attribute_requirement(ava: dict, req: SamlAttributeRequirement) -> bool:
    values = ava.get(req.attribute, [])
    for v in values:
        if v == req.value:
            return True

    logger.info(
        "SAML2 attribute %s did not match required value '%s' (was '%s')",
        req.attribute,
        req.value,
        values,
    )
    return False


DOT_REPLACE_PATTERN = re.compile(
    ("[^%s]" % (re.escape("".join(mxid_localpart_allowed_characters)),))
)
@ -468,7 +451,8 @@ class DefaultSamlMappingProvider:
            mxid_source = saml_response.ava[self._mxid_source_attribute][0]
        except KeyError:
            logger.warning(
                "SAML2 response lacks a '%s' attestation", self._mxid_source_attribute,
                "SAML2 response lacks a '%s' attestation",
                self._mxid_source_attribute,
            )
            raise SynapseError(
                400, "%s not in SAML2 response" % (self._mxid_source_attribute,)
@ -16,10 +16,12 @@ import abc
import logging
from typing import (
    TYPE_CHECKING,
    Any,
    Awaitable,
    Callable,
    Dict,
    Iterable,
    List,
    Mapping,
    Optional,
    Set,
@ -34,6 +36,7 @@ from twisted.web.iweb import IRequest

from synapse.api.constants import LoginType
from synapse.api.errors import Codes, NotFoundError, RedirectException, SynapseError
from synapse.config.sso import SsoAttributeRequirement
from synapse.handlers.ui_auth import UIAuthSessionDataConstants
from synapse.http import get_request_user_agent
from synapse.http.server import respond_with_html, respond_with_redirect
@ -324,7 +327,8 @@ class SsoHandler:

        # Check if we already have a mapping for this user.
        previously_registered_user_id = await self._store.get_user_by_external_id(
            auth_provider_id, remote_user_id,
            auth_provider_id,
            remote_user_id,
        )

        # A match was found, return the user ID.
@ -413,7 +417,8 @@ class SsoHandler:
        with await self._mapping_lock.queue(auth_provider_id):
            # first of all, check if we already have a mapping for this user
            user_id = await self.get_sso_user_by_remote_user_id(
                auth_provider_id, remote_user_id,
                auth_provider_id,
                remote_user_id,
            )

            # Check for grandfathering of users.
@ -458,7 +463,8 @@ class SsoHandler:
        )

    async def _call_attribute_mapper(
        self, sso_to_matrix_id_mapper: Callable[[int], Awaitable[UserAttributes]],
        self,
        sso_to_matrix_id_mapper: Callable[[int], Awaitable[UserAttributes]],
    ) -> UserAttributes:
        """Call the attribute mapper function in a loop, until we get a unique userid"""
        for i in range(self._MAP_USERNAME_RETRIES):
@ -629,7 +635,8 @@ class SsoHandler:
        """

        user_id = await self.get_sso_user_by_remote_user_id(
            auth_provider_id, remote_user_id,
            auth_provider_id,
            remote_user_id,
        )

        user_id_to_verify = await self._auth_handler.get_session_data(
@ -668,7 +675,8 @@ class SsoHandler:

        # render an error page.
        html = self._bad_user_template.render(
            server_name=self._server_name, user_id_to_verify=user_id_to_verify,
            server_name=self._server_name,
            user_id_to_verify=user_id_to_verify,
        )
        respond_with_html(request, 200, html)
@ -692,7 +700,9 @@ class SsoHandler:
            raise SynapseError(400, "unknown session")

    async def check_username_availability(
        self, localpart: str, session_id: str,
        self,
        localpart: str,
        session_id: str,
    ) -> bool:
        """Handle an "is username available" callback check
@ -742,7 +752,11 @@ class SsoHandler:
            use_display_name: whether the user wants to use the suggested display name
            emails_to_use: emails that the user would like to use
        """
        session = self.get_mapping_session(session_id)
        try:
            session = self.get_mapping_session(session_id)
        except SynapseError as e:
            self.render_error(request, "bad_session", e.msg, code=e.code)
            return

        # update the session with the user's choices
        session.chosen_localpart = localpart
@ -793,7 +807,12 @@ class SsoHandler:
            session_id,
            terms_version,
        )
        session = self.get_mapping_session(session_id)
        try:
            session = self.get_mapping_session(session_id)
        except SynapseError as e:
            self.render_error(request, "bad_session", e.msg, code=e.code)
            return

        session.terms_accepted_version = terms_version

        # we're done; now we can register the user
@ -808,7 +827,11 @@ class SsoHandler:
            request: HTTP request
            session_id: ID of the username mapping session, extracted from a cookie
        """
        session = self.get_mapping_session(session_id)
        try:
            session = self.get_mapping_session(session_id)
        except SynapseError as e:
            self.render_error(request, "bad_session", e.msg, code=e.code)
            return

        logger.info(
            "[session %s] Registering localpart %s",
@ -817,7 +840,8 @@ class SsoHandler:
        )

        attributes = UserAttributes(
            localpart=session.chosen_localpart, emails=session.emails_to_use,
            localpart=session.chosen_localpart,
            emails=session.emails_to_use,
        )

        if session.use_display_name:
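Three of the hunks above make the same change: instead of letting `get_mapping_session` raise out of the request handler, the `SynapseError` is caught and turned into a rendered "bad session" error page. The shape of that pattern, sketched with stand-in names outside of Synapse:

class SessionError(Exception):
    """Stand-in for SynapseError: carries an HTTP code and a message."""

    def __init__(self, code: int, msg: str):
        super().__init__(msg)
        self.code = code
        self.msg = msg


_sessions = {}  # session_id -> session object


def get_mapping_session(session_id: str):
    session = _sessions.get(session_id)
    if session is None:
        raise SessionError(400, "unknown session")
    return session


def render_error_page(request, error: str, msg: str, code: int) -> None:
    # Placeholder for the real HTML error response.
    print(f"{code}: {error}: {msg}")


def handle_submit(request, session_id: str) -> None:
    # Turn an expired or unknown session into a friendly error page
    # rather than an unhandled exception.
    try:
        session = get_mapping_session(session_id)
    except SessionError as e:
        render_error_page(request, "bad_session", e.msg, code=e.code)
        return
    # ... continue, updating `session` with the user's choices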
@ -880,6 +904,41 @@ class SsoHandler:
        logger.info("Expiring mapping session %s", session_id)
        del self._username_mapping_sessions[session_id]

    def check_required_attributes(
        self,
        request: SynapseRequest,
        attributes: Mapping[str, List[Any]],
        attribute_requirements: Iterable[SsoAttributeRequirement],
    ) -> bool:
        """
        Confirm that the required attributes were present in the SSO response.

        If all requirements are met, this will return True.

        If any requirement is not met, then the request will be finalized by
        showing an error page to the user and False will be returned.

        Args:
            request: The request to (potentially) respond to.
            attributes: The attributes from the SSO IdP.
            attribute_requirements: The requirements that attributes must meet.

        Returns:
            True if all requirements are met, False if any attribute fails to
            meet the requirement.

        """
        # Ensure that the attributes of the logged in user meet the required
        # attributes.
        for requirement in attribute_requirements:
            if not _check_attribute_requirement(attributes, requirement):
                self.render_error(
                    request, "unauthorised", "You are not authorised to log in here."
                )
                return False

        return True


def get_username_mapping_session_cookie_from_request(request: IRequest) -> str:
    """Extract the session ID from the cookie
@ -890,3 +949,36 @@ def get_username_mapping_session_cookie_from_request(request: IRequest) -> str:
    if not session_id:
        raise SynapseError(code=400, msg="missing session_id")
    return session_id.decode("ascii", errors="replace")


def _check_attribute_requirement(
    attributes: Mapping[str, List[Any]], req: SsoAttributeRequirement
) -> bool:
    """Check if SSO attributes meet the proper requirements.

    Args:
        attributes: A mapping of attributes to an iterable of one or more values.
        requirement: The configured requirement to check.

    Returns:
        True if the required attribute was found and had a proper value.
    """
    if req.attribute not in attributes:
        logger.info("SSO attribute missing: %s", req.attribute)
        return False

    # If the requirement is None, the attribute existing is enough.
    if req.value is None:
        return True

    values = attributes[req.attribute]
    if req.value in values:
        return True

    logger.info(
        "SSO attribute %s did not match required value '%s' (was '%s')",
        req.attribute,
        req.value,
        values,
    )
    return False
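`_check_attribute_requirement` encodes two rules: a requirement whose `value` is `None` is satisfied by the attribute merely being present, while any other requirement needs its value to appear in the attribute's list of values. The same logic in a self-contained sketch, where `AttributeRequirement` stands in for `SsoAttributeRequirement`:

from dataclasses import dataclass
from typing import Any, List, Mapping, Optional


@dataclass
class AttributeRequirement:
    """Stand-in for SsoAttributeRequirement."""

    attribute: str
    value: Optional[str] = None


def check_attribute(
    attributes: Mapping[str, List[Any]], req: AttributeRequirement
) -> bool:
    if req.attribute not in attributes:
        return False  # required attribute missing entirely
    if req.value is None:
        return True  # presence alone satisfies the requirement
    return req.value in attributes[req.attribute]


# e.g. require membership of a "staff" group in the IdP response:
ava = {"groups": ["staff", "users"], "uid": ["alice"]}
assert check_attribute(ava, AttributeRequirement("uid"))
assert check_attribute(ava, AttributeRequirement("groups", "staff"))
assert not check_attribute(ava, AttributeRequirement("groups", "admins"))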
@ -63,8 +63,7 @@ class StatsHandler:
        self.clock.call_later(0, self.notify_new_event)

    def notify_new_event(self) -> None:
        """Called when there may be more deltas to process
        """
        """Called when there may be more deltas to process"""
        if not self.stats_enabled or self._is_processing:
            return
@ -339,8 +339,7 @@ class SyncHandler:
        since_token: Optional[StreamToken] = None,
        full_state: bool = False,
    ) -> SyncResult:
        """Get the sync for client needed to match what the server has now.
        """
        """Get the sync for client needed to match what the server has now."""
        return await self.generate_sync_result(sync_config, since_token, full_state)

    async def push_rules_for_user(self, user: UserID) -> JsonDict:
@ -564,7 +563,7 @@ class SyncHandler:
        stream_position: StreamToken,
        state_filter: StateFilter = StateFilter.all(),
    ) -> StateMap[str]:
        """ Get the room state at a particular stream position
        """Get the room state at a particular stream position

        Args:
            room_id: room for which to get state
@ -598,7 +597,7 @@ class SyncHandler:
        state: MutableStateMap[EventBase],
        now_token: StreamToken,
    ) -> Optional[JsonDict]:
        """ Works out a room summary block for this room, summarising the number
        """Works out a room summary block for this room, summarising the number
        of joined members in the room, and providing the 'hero' members if the
        room has no name so clients can consistently name rooms. Also adds
        state events to 'state' if needed to describe the heroes.
@ -743,7 +742,7 @@ class SyncHandler:
        now_token: StreamToken,
        full_state: bool,
    ) -> MutableStateMap[EventBase]:
        """ Works out the difference in state between the start of the timeline
        """Works out the difference in state between the start of the timeline
        and the previous sync.

        Args:
@ -820,8 +819,10 @@ class SyncHandler:
                )
            elif batch.limited:
                if batch:
                    state_at_timeline_start = await self.state_store.get_state_ids_for_event(
                        batch.events[0].event_id, state_filter=state_filter
                    state_at_timeline_start = (
                        await self.state_store.get_state_ids_for_event(
                            batch.events[0].event_id, state_filter=state_filter
                        )
                    )
                else:
                    # We can get here if the user has ignored the senders of all
@ -954,8 +955,7 @@ class SyncHandler:
        since_token: Optional[StreamToken] = None,
        full_state: bool = False,
    ) -> SyncResult:
        """Generates a sync result.
        """
        """Generates a sync result."""
        # NB: The now_token gets changed by some of the generate_sync_* methods,
        # this is due to some of the underlying streams not supporting the ability
        # to query up to a given point.
@ -1029,8 +1029,8 @@ class SyncHandler:
            one_time_key_counts = await self.store.count_e2e_one_time_keys(
                user_id, device_id
            )
            unused_fallback_key_types = await self.store.get_e2e_unused_fallback_key_types(
                user_id, device_id
            unused_fallback_key_types = (
                await self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
            )

        logger.debug("Fetching group data")
@ -1175,8 +1175,10 @@ class SyncHandler:
            # weren't in the previous sync *or* they left and rejoined.
            users_that_have_changed.update(newly_joined_or_invited_users)

            user_signatures_changed = await self.store.get_users_whose_signatures_changed(
                user_id, since_token.device_list_key
            user_signatures_changed = (
                await self.store.get_users_whose_signatures_changed(
                    user_id, since_token.device_list_key
                )
            )
            users_that_have_changed.update(user_signatures_changed)
@ -1392,8 +1394,10 @@ class SyncHandler:
            logger.debug("no-oping sync")
            return set(), set(), set(), set()

        ignored_account_data = await self.store.get_global_account_data_by_type_for_user(
            AccountDataTypes.IGNORED_USER_LIST, user_id=user_id
        ignored_account_data = (
            await self.store.get_global_account_data_by_type_for_user(
                AccountDataTypes.IGNORED_USER_LIST, user_id=user_id
            )
        )

        # If there is ignored users account data and it matches the proper type,
@ -1498,8 +1502,7 @@ class SyncHandler:
    async def _get_rooms_changed(
        self, sync_result_builder: "SyncResultBuilder", ignored_users: FrozenSet[str]
    ) -> _RoomChanges:
        """Gets the the changes that have happened since the last sync.
        """
        """Gets the the changes that have happened since the last sync."""
        user_id = sync_result_builder.sync_config.user.to_string()
        since_token = sync_result_builder.since_token
        now_token = sync_result_builder.now_token
@ -61,7 +61,8 @@ class FollowerTypingHandler:

        if hs.config.worker.writers.typing != hs.get_instance_name():
            hs.get_federation_registry().register_instance_for_edu(
                "m.typing", hs.config.worker.writers.typing,
                "m.typing",
                hs.config.worker.writers.typing,
            )

        # map room IDs to serial numbers
@ -76,8 +77,7 @@ class FollowerTypingHandler:
        self.clock.looping_call(self._handle_timeouts, 5000)

    def _reset(self) -> None:
        """Reset the typing handler's data caches.
        """
        """Reset the typing handler's data caches."""
        # map room IDs to serial numbers
        self._room_serials = {}
        # map room IDs to sets of users currently typing
@ -149,8 +149,7 @@ class FollowerTypingHandler:
    def process_replication_rows(
        self, token: int, rows: List[TypingStream.TypingStreamRow]
    ) -> None:
        """Should be called whenever we receive updates for typing stream.
        """
        """Should be called whenever we receive updates for typing stream."""

        if self._latest_room_serial > token:
            # The master has gone backwards. To prevent inconsistent data, just
@ -97,8 +97,7 @@ class UserDirectoryHandler(StateDeltasHandler):
        return results

    def notify_new_event(self) -> None:
        """Called when there may be more deltas to process
        """
        """Called when there may be more deltas to process"""
        if not self.update_user_directory:
            return
@ -134,8 +133,7 @@ class UserDirectoryHandler(StateDeltasHandler):
        )

    async def handle_user_deactivated(self, user_id: str) -> None:
        """Called when a user ID is deactivated
        """
        """Called when a user ID is deactivated"""
        # FIXME(#3714): We should probably do this in the same worker as all
        # the other changes.
        await self.store.remove_from_user_dir(user_id)
@ -145,6 +143,10 @@ class UserDirectoryHandler(StateDeltasHandler):
        if self.pos is None:
            self.pos = await self.store.get_user_directory_stream_pos()

        # If still None then the initial background update hasn't happened yet.
        if self.pos is None:
            return None

        # Loop round handling deltas until we're up to date
        while True:
            with Measure(self.clock, "user_dir_delta"):
@ -172,8 +174,7 @@ class UserDirectoryHandler(StateDeltasHandler):
                await self.store.update_user_directory_stream_pos(max_pos)

    async def _handle_deltas(self, deltas: List[Dict[str, Any]]) -> None:
        """Called with the state deltas to process
        """
        """Called with the state deltas to process"""
        for delta in deltas:
            typ = delta["type"]
            state_key = delta["state_key"]
@ -54,8 +54,7 @@ class QuieterFileBodyProducer(FileBodyProducer):


def get_request_user_agent(request: IRequest, default: str = "") -> str:
    """Return the last User-Agent header, or the given default.
    """
    """Return the last User-Agent header, or the given default."""
    # There could be raw utf-8 bytes in the User-Agent header.

    # N.B. if you don't do this, the logger explodes cryptically
@ -56,7 +56,7 @@ from twisted.web.client import (
)
from twisted.web.http import PotentialDataLoss
from twisted.web.http_headers import Headers
from twisted.web.iweb import IAgent, IBodyProducer, IResponse
from twisted.web.iweb import UNKNOWN_LENGTH, IAgent, IBodyProducer, IResponse

from synapse.api.errors import Codes, HttpResponseException, SynapseError
from synapse.http import QuieterFileBodyProducer, RequestTimedOutError, redact_uri
@ -399,7 +399,8 @@ class SimpleHttpClient:
        body_producer = None
        if data is not None:
            body_producer = QuieterFileBodyProducer(
                BytesIO(data), cooperator=self._cooperator,
                BytesIO(data),
                cooperator=self._cooperator,
            )

            request_deferred = treq.request(
@ -408,13 +409,18 @@ class SimpleHttpClient:
                agent=self.agent,
                data=body_producer,
                headers=headers,
                # Avoid buffering the body in treq since we do not reuse
                # response bodies.
                unbuffered=True,
                **self._extra_treq_args,
            )  # type: defer.Deferred

            # we use our own timeout mechanism rather than treq's as a workaround
            # for https://twistedmatrix.com/trac/ticket/9534.
            request_deferred = timeout_deferred(
                request_deferred, 60, self.hs.get_reactor(),
                request_deferred,
                60,
                self.hs.get_reactor(),
            )

            # turn timeouts into RequestTimedOutErrors
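As the comment in the hunk above notes, Synapse applies its own timeout rather than treq's, as a workaround for Twisted ticket 9534. A simplified sketch of such a helper; the real `timeout_deferred` is more involved and, among other things, converts the resulting `CancelledError` into Synapse's own timeout exception:

def timeout_deferred(deferred, timeout, reactor):
    """Cancel `deferred` if it has not fired within `timeout` seconds."""
    delayed_call = reactor.callLater(timeout, deferred.cancel)

    def cleanup(result):
        # Don't fire the timeout once the deferred has already resolved.
        if delayed_call.active():
            delayed_call.cancel()
        return result

    deferred.addBoth(cleanup)
    return deferred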
@ -700,18 +706,6 @@ class SimpleHttpClient:

        resp_headers = dict(response.headers.getAllRawHeaders())

        if (
            b"Content-Length" in resp_headers
            and max_size
            and int(resp_headers[b"Content-Length"][0]) > max_size
        ):
            logger.warning("Requested URL is too large > %r bytes" % (max_size,))
            raise SynapseError(
                502,
                "Requested file is too large > %r bytes" % (max_size,),
                Codes.TOO_LARGE,
            )

        if response.code > 299:
            logger.warning("Got %d when downloading %s" % (response.code, url))
            raise SynapseError(502, "Got error %d" % (response.code,), Codes.UNKNOWN)
@ -778,7 +772,9 @@ class _ReadBodyWithMaxSizeProtocol(protocol.Protocol):
        # in the meantime.
        if self.max_size is not None and self.length >= self.max_size:
            self.deferred.errback(BodyExceededMaxSize())
            self.transport.loseConnection()
            # Close the connection (forcefully) since all the data will get
            # discarded anyway.
            self.transport.abortConnection()

    def connectionLost(self, reason: Failure) -> None:
        # If the maximum size was already exceeded, there's nothing to do.
@ -812,6 +808,11 @@ def read_body_with_max_size(
    Returns:
        A Deferred which resolves to the length of the read body.
    """
    # If the Content-Length header gives a size larger than the maximum allowed
    # size, do not bother downloading the body.
    if max_size is not None and response.length != UNKNOWN_LENGTH:
        if response.length > max_size:
            return defer.fail(BodyExceededMaxSize())

    d = defer.Deferred()
    response.deliverBody(_ReadBodyWithMaxSizeProtocol(stream, d, max_size))
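The added check gives `read_body_with_max_size` two layers of protection: if the server-declared length (Twisted's `response.length`, or `UNKNOWN_LENGTH` when the header is absent) already exceeds the cap, the body is rejected before any bytes are streamed; otherwise the cap is still enforced chunk by chunk, since the declared length may be missing or wrong. The same two-layer idea in a plain-Python sketch, with a stand-in sentinel for Twisted's:

from typing import Iterable, Optional

UNKNOWN_LENGTH = -1  # stand-in for twisted.web.iweb.UNKNOWN_LENGTH


class BodyExceededMaxSize(Exception):
    pass


def read_body(
    declared_length: int, chunks: Iterable[bytes], max_size: Optional[int]
) -> bytes:
    # Layer 1: trust the declared length enough to fail fast.
    if max_size is not None and declared_length != UNKNOWN_LENGTH:
        if declared_length > max_size:
            raise BodyExceededMaxSize()

    # Layer 2: enforce the cap while streaming, since the declared
    # length may be absent or wrong.
    body = b""
    for chunk in chunks:
        body += chunk
        if max_size is not None and len(body) > max_size:
            raise BodyExceededMaxSize()
    return body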
@ -195,8 +195,7 @@ class MatrixFederationAgent:

@implementer(IAgentEndpointFactory)
class MatrixHostnameEndpointFactory:
    """Factory for MatrixHostnameEndpoint for parsing to an Agent.
    """
    """Factory for MatrixHostnameEndpoint for parsing to an Agent."""

    def __init__(
        self,
@ -261,8 +260,7 @@ class MatrixHostnameEndpoint:
        self._srv_resolver = srv_resolver

    def connect(self, protocol_factory: IProtocolFactory) -> defer.Deferred:
        """Implements IStreamClientEndpoint interface
        """
        """Implements IStreamClientEndpoint interface"""

        return run_in_background(self._do_connect, protocol_factory)
@ -323,12 +321,19 @@ class MatrixHostnameEndpoint:
        if port or _is_ip_literal(host):
            return [Server(host, port or 8448)]

        logger.debug("Looking up SRV record for %s", host.decode(errors="replace"))
        server_list = await self._srv_resolver.resolve_service(b"_matrix._tcp." + host)

        if server_list:
            logger.debug(
                "Got %s from SRV lookup for %s",
                ", ".join(map(str, server_list)),
                host.decode(errors="replace"),
            )
            return server_list

        # No SRV records, so we fallback to host and 8448
        logger.debug("No SRV records for %s", host.decode(errors="replace"))
        return [Server(host, 8448)]
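The routing logic above follows Matrix's server discovery rules: an explicit port or an IP literal is used as-is, otherwise `_matrix._tcp.<host>` SRV records are consulted, and the hostname on port 8448 is the final fallback. A resolver-agnostic sketch of that decision; the `resolve_srv` callable and `pick_servers` name are assumptions of this sketch, not Synapse's API:

from typing import Callable, List, NamedTuple


class Server(NamedTuple):
    host: bytes
    port: int


def pick_servers(
    host: bytes,
    port: int,
    is_ip_literal: bool,
    resolve_srv: Callable[[bytes], List[Server]],
) -> List[Server]:
    # An explicit port or an IP literal bypasses SRV resolution.
    if port or is_ip_literal:
        return [Server(host, port or 8448)]

    srv_records = resolve_srv(b"_matrix._tcp." + host)
    if srv_records:
        return srv_records

    # No SRV records: fall back to the hostname on port 8448.
    return [Server(host, 8448)]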
@ -81,8 +81,7 @@ class WellKnownLookupResult:


class WellKnownResolver:
    """Handles well-known lookups for matrix servers.
    """
    """Handles well-known lookups for matrix servers."""

    def __init__(
        self,
Some files were not shown because too many files have changed in this diff.