Merge remote-tracking branch 'upstream/release-v1.38'

Tulir Asokan 2021-07-07 13:41:28 +03:00
commit fa8ec8051b
88 changed files with 4940 additions and 2441 deletions


@@ -7,6 +7,8 @@ on:
       - develop
       # For documentation specific to a release
       - 'release-v*'
+      # stable docs
+      - master

   workflow_dispatch:

@@ -23,42 +25,42 @@ jobs:
           mdbook-version: '0.4.9'

       - name: Build the documentation
-        run: mdbook build
+        # mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
+        # However, we're using docs/README.md for other purposes and need to pick a new page
+        # as the default. Let's opt for the welcome page instead.
+        run: |
+          mdbook build
+          cp book/welcome_and_overview.html book/index.html

-      # Deploy to the latest documentation directories
-      - name: Deploy latest documentation
+      # Figure out the target directory.
+      #
+      # The target directory depends on the name of the branch
+      #
+      - name: Get the target directory name
+        id: vars
+        run: |
+          # first strip the 'refs/heads/' prefix with some shell foo
+          branch="${GITHUB_REF#refs/heads/}"
+
+          case $branch in
+              release-*)
+                  # strip 'release-' from the name for release branches.
+                  branch="${branch#release-}"
+                  ;;
+              master)
+                  # deploy to "latest" for the master branch.
+                  branch="latest"
+                  ;;
+          esac
+
+          # finally, set the 'branch-version' var.
+          echo "::set-output name=branch-version::$branch"
+
+      # Deploy to the target directory.
+      - name: Deploy to gh pages
         uses: peaceiris/actions-gh-pages@068dc23d9710f1ba62e86896f84735d869951305 # v3.8.0
         with:
           github_token: ${{ secrets.GITHUB_TOKEN }}
           keep_files: true
           publish_dir: ./book
-          destination_dir: ./develop
-
-      - name: Get the current Synapse version
-        id: vars
-        # The $GITHUB_REF value for a branch looks like `refs/heads/release-v1.2`. We do some
-        # shell magic to remove the "refs/heads/release-v" bit from this, to end up with "1.2",
-        # our major/minor version number, and set this to a var called `branch-version`.
-        #
-        # We then use some python to get Synapse's full version string, which may look
-        # like "1.2.3rc4". We set this to a var called `synapse-version`. We use this
-        # to determine if this release is still an RC, and if so block deployment.
-        run: |
-          echo ::set-output name=branch-version::${GITHUB_REF#refs/heads/release-v}
-          echo ::set-output name=synapse-version::`python3 -c 'import synapse; print(synapse.__version__)'`
-
-      # Deploy to the version-specific directory
-      - name: Deploy release-specific documentation
-        # We only carry out this step if we're running on a release branch,
-        # and the current Synapse version does not have "rc" in the name.
-        #
-        # The result is that only full releases are deployed, but can be
-        # updated if the release branch gets retroactive fixes.
-        if: ${{ startsWith( github.ref, 'refs/heads/release-v' ) && !contains( steps.vars.outputs.synapse-version, 'rc') }}
-        uses: peaceiris/actions-gh-pages@v3
-        with:
-          github_token: ${{ secrets.GITHUB_TOKEN }}
-          keep_files: true
-          publish_dir: ./book
-          # The resulting documentation will end up in a directory named `vX.Y`.
-          destination_dir: ./v${{ steps.vars.outputs.branch-version }}
+          destination_dir: ./${{ steps.vars.outputs.branch-version }}
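The new "Get the target directory name" step is plain string surgery on `$GITHUB_REF`. A minimal Python sketch of the same mapping (the function name and test values are illustrative, not part of the workflow):

```python
def deploy_dir(github_ref: str) -> str:
    """Mirror the branch-to-directory mapping in the workflow step above."""
    branch = github_ref.removeprefix("refs/heads/")  # requires Python 3.9+
    if branch.startswith("release-"):
        # strip 'release-' for release branches, e.g. release-v1.38 -> v1.38
        return branch.removeprefix("release-")
    if branch == "master":
        # the master branch deploys to "latest"
        return "latest"
    return branch

assert deploy_dir("refs/heads/develop") == "develop"
assert deploy_dir("refs/heads/master") == "latest"
assert deploy_dir("refs/heads/release-v1.38") == "v1.38"
```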


@@ -1,3 +1,54 @@
+Synapse 1.38.0rc1 (2021-07-06)
+==============================
+
+This release includes a database schema update which could result in elevated disk usage. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#upgrading-to-v1380) for more information.
+
+Features
+--------
+
+- Implement refresh tokens as specified by [MSC2918](https://github.com/matrix-org/matrix-doc/pull/2918). ([\#9450](https://github.com/matrix-org/synapse/issues/9450))
+- Add support for evicting cache entries based on last access time. ([\#10205](https://github.com/matrix-org/synapse/issues/10205))
+- Omit empty fields from the `/sync` response. Contributed by @deepbluev7. ([\#10214](https://github.com/matrix-org/synapse/issues/10214))
+- Improve validation on federation `send_{join,leave,knock}` endpoints. ([\#10225](https://github.com/matrix-org/synapse/issues/10225), [\#10243](https://github.com/matrix-org/synapse/issues/10243))
+- Add SSO `external_ids` to the Query User Account admin API. ([\#10261](https://github.com/matrix-org/synapse/issues/10261))
+- Mark events received over federation which fail a spam check as "soft-failed". ([\#10263](https://github.com/matrix-org/synapse/issues/10263))
+- Add metrics for new inbound federation staging area. ([\#10284](https://github.com/matrix-org/synapse/issues/10284))
+- Add script to print information about recently registered users. ([\#10290](https://github.com/matrix-org/synapse/issues/10290))
+
+
+Bugfixes
+--------
+
+- Fix a long-standing bug which meant that invite rejections and knocks were not sent out over federation in a timely manner. ([\#10223](https://github.com/matrix-org/synapse/issues/10223))
+- Fix a bug introduced in v1.26.0 where only users who have set profile information could be deactivated with erasure enabled. ([\#10252](https://github.com/matrix-org/synapse/issues/10252))
+- Fix a long-standing bug where Synapse would return errors after 2<sup>31</sup> events were handled by the server. ([\#10264](https://github.com/matrix-org/synapse/issues/10264), [\#10267](https://github.com/matrix-org/synapse/issues/10267), [\#10282](https://github.com/matrix-org/synapse/issues/10282), [\#10286](https://github.com/matrix-org/synapse/issues/10286), [\#10291](https://github.com/matrix-org/synapse/issues/10291), [\#10314](https://github.com/matrix-org/synapse/issues/10314), [\#10326](https://github.com/matrix-org/synapse/issues/10326))
+- Fix the prometheus `synapse_federation_server_pdu_process_time` metric. Broke in v1.37.1. ([\#10279](https://github.com/matrix-org/synapse/issues/10279))
+- Ensure that inbound events from federation that were being processed when Synapse was restarted get promptly processed on start up. ([\#10303](https://github.com/matrix-org/synapse/issues/10303))
+
+
+Improved Documentation
+----------------------
+
+- Move the upgrade notes to [docs/upgrade.md](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md) and convert them to markdown. ([\#10166](https://github.com/matrix-org/synapse/issues/10166))
+- Choose Welcome & Overview as the default page for synapse documentation website. ([\#10242](https://github.com/matrix-org/synapse/issues/10242))
+- Adjust the URL in the README.rst file to point to irc.libera.chat. ([\#10258](https://github.com/matrix-org/synapse/issues/10258))
+- Fix homeserver config option name in presence router documentation. ([\#10288](https://github.com/matrix-org/synapse/issues/10288))
+- Fix link pointing at the wrong section in the modules documentation page. ([\#10302](https://github.com/matrix-org/synapse/issues/10302))
+
+
+Internal Changes
+----------------
+
+- Drop `Origin` and `Accept` from the value of the `Access-Control-Allow-Headers` response header. ([\#10114](https://github.com/matrix-org/synapse/issues/10114))
+- Add type hints to the federation servlets. ([\#10213](https://github.com/matrix-org/synapse/issues/10213))
+- Improve the reliability of auto-joining remote rooms. ([\#10237](https://github.com/matrix-org/synapse/issues/10237))
+- Update the release script to use the semver terminology and determine the release branch based on the next version. ([\#10239](https://github.com/matrix-org/synapse/issues/10239))
+- Fix type hints for computing auth events. ([\#10253](https://github.com/matrix-org/synapse/issues/10253))
+- Improve the performance of the spaces summary endpoint by only recursing into spaces (and not rooms in general). ([\#10256](https://github.com/matrix-org/synapse/issues/10256))
+- Move event authentication methods from `Auth` to `EventAuthHandler`. ([\#10268](https://github.com/matrix-org/synapse/issues/10268))
+- Re-enable a SyTest after it has been fixed. ([\#10292](https://github.com/matrix-org/synapse/issues/10292))
+
+
 Synapse 1.37.1 (2021-06-30)
 ===========================

@@ -775,7 +826,7 @@ Internal Changes
 Synapse 1.29.0 (2021-03-08)
 ===========================

-Note that synapse now expects an `X-Forwarded-Proto` header when used with a reverse proxy. Please see [UPGRADE.rst](UPGRADE.rst#upgrading-to-v1290) for more details on this change.
+Note that synapse now expects an `X-Forwarded-Proto` header when used with a reverse proxy. Please see the [upgrade notes](docs/upgrade.md#upgrading-to-v1290) for more details on this change.

 No significant changes.

@@ -840,7 +891,7 @@ Synapse 1.28.0 (2021-02-25)
 Note that this release drops support for ARMv7 in the official Docker images, due to repeated problems building for ARMv7 (and the associated maintenance burden this entails).

-This release also fixes the documentation included in v1.27.0 around the callback URI for SAML2 identity providers. If your server is configured to use single sign-on via a SAML2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
+This release also fixes the documentation included in v1.27.0 around the callback URI for SAML2 identity providers. If your server is configured to use single sign-on via a SAML2 IdP, you may need to make configuration changes. Please review the [upgrade notes](docs/upgrade.md) for more details on these changes.

 Internal Changes

@@ -939,9 +990,9 @@ Synapse 1.27.0 (2021-02-16)
 Note that this release includes a change in Synapse to use Redis as a cache ─ as well as a pub/sub mechanism ─ if Redis support is enabled for workers. No action is needed by server administrators, and we do not expect resource usage of the Redis instance to change dramatically.

-This release also changes the callback URI for OpenID Connect (OIDC) and SAML2 identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 or SAML2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
+This release also changes the callback URI for OpenID Connect (OIDC) and SAML2 identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 or SAML2 IdP, you may need to make configuration changes. Please review the [upgrade notes](docs/upgrade.md) for more details on these changes.

-This release also changes escaping of variables in the HTML templates for SSO or email notifications. If you have customised these templates, please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
+This release also changes escaping of variables in the HTML templates for SSO or email notifications. If you have customised these templates, please review the [upgrade notes](docs/upgrade.md) for more details on these changes.

 Bugfixes

@@ -1045,7 +1096,7 @@ Synapse 1.26.0 (2021-01-27)
 ===========================

 This release brings a new schema version for Synapse and rolling back to a previous
-version is not trivial. Please review [UPGRADE.rst](UPGRADE.rst) for more details
+version is not trivial. Please review the [upgrade notes](docs/upgrade.md) for more details
 on these changes and for general upgrade guidance.

 No significant changes since 1.26.0rc2.

@@ -1072,7 +1123,7 @@ Synapse 1.26.0rc1 (2021-01-20)
 ==============================

 This release brings a new schema version for Synapse and rolling back to a previous
-version is not trivial. Please review [UPGRADE.rst](UPGRADE.rst) for more details
+version is not trivial. Please review the [upgrade notes](docs/upgrade.md) for more details
 on these changes and for general upgrade guidance.

 Features

@@ -1478,7 +1529,7 @@ Internal Changes
 Synapse 1.23.0 (2020-11-18)
 ===========================

-This release changes the way structured logging is configured. See the [upgrade notes](UPGRADE.rst#upgrading-to-v1230) for details.
+This release changes the way structured logging is configured. See the [upgrade notes](docs/upgrade.md#upgrading-to-v1230) for details.

 **Note**: We are aware of a trivially exploitable denial of service vulnerability in versions of Synapse prior to 1.20.0. Complete details will be disclosed on Monday, November 23rd. If you have not upgraded recently, please do so.

@@ -2081,7 +2132,10 @@ No significant changes since 1.19.0rc1.
 Removal warning
 ---------------

-As outlined in the [previous release](https://github.com/matrix-org/synapse/releases/tag/v1.18.0), we are no longer publishing Docker images with the `-py3` tag suffix. On top of that, we have also removed the `latest-py3` tag. Please see [the announcement in the upgrade notes for 1.18.0](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180).
+As outlined in the [previous release](https://github.com/matrix-org/synapse/releases/tag/v1.18.0),
+we are no longer publishing Docker images with the `-py3` tag suffix. On top of that, we have also removed the
+`latest-py3` tag. Please see
+[the announcement in the upgrade notes for 1.18.0](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#upgrading-to-v1180).

 Synapse 1.19.0rc1 (2020-08-13)

@@ -2112,7 +2166,7 @@ Bugfixes
 Updates to the Docker image
 ---------------------------

-- We no longer publish Docker images with the `-py3` tag suffix, as [announced in the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180). ([\#8056](https://github.com/matrix-org/synapse/issues/8056))
+- We no longer publish Docker images with the `-py3` tag suffix, as [announced in the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#upgrading-to-v1180). ([\#8056](https://github.com/matrix-org/synapse/issues/8056))

 Improved Documentation

@@ -2670,7 +2724,7 @@ configurations of Synapse:
 to be incomplete or empty if Synapse was upgraded directly from v1.2.1 or
 earlier, to versions between v1.4.0 and v1.12.x.

-Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes
+Please review the [upgrade notes](docs/upgrade.md) for more details on these changes
 and for general upgrade guidance.

@@ -2771,7 +2825,7 @@ Bugfixes
 - Fix bad error handling that would cause Synapse to crash if it's provided with a YAML configuration file that's either empty or doesn't parse into a key-value map. ([\#7341](https://github.com/matrix-org/synapse/issues/7341))
 - Fix incorrect metrics reporting for `renew_attestations` background task. ([\#7344](https://github.com/matrix-org/synapse/issues/7344))
 - Prevent non-federating rooms from appearing in responses to federated `POST /publicRoom` requests when a filter was included. ([\#7367](https://github.com/matrix-org/synapse/issues/7367))
-- Fix a bug which would cause the room durectory to be incorrectly populated if Synapse was upgraded directly from v1.2.1 or earlier to v1.4.0 or later. Note that this fix does not apply retrospectively; see the [upgrade notes](UPGRADE.rst#upgrading-to-v1130) for more information. ([\#7387](https://github.com/matrix-org/synapse/issues/7387))
+- Fix a bug which would cause the room durectory to be incorrectly populated if Synapse was upgraded directly from v1.2.1 or earlier to v1.4.0 or later. Note that this fix does not apply retrospectively; see the [upgrade notes](docs/upgrade.md#upgrading-to-v1130) for more information. ([\#7387](https://github.com/matrix-org/synapse/issues/7387))
 - Fix bug in `EventContext.deserialize`. ([\#7393](https://github.com/matrix-org/synapse/issues/7393))

@@ -2921,7 +2975,7 @@ Synapse 1.12.0 includes a database update which is run as part of the upgrade,
 and which may take some time (several hours in the case of a large
 server). Synapse will not respond to HTTP requests while this update is taking
 place. For imformation on seeing if you are affected, and workaround if you
-are, see the [upgrade notes](UPGRADE.rst#upgrading-to-v1120).
+are, see the [upgrade notes](docs/upgrade.md#upgrading-to-v1120).

 Security advisory
 -----------------

@@ -3474,7 +3528,7 @@ Bugfixes
 Synapse 1.7.0 (2019-12-13)
 ==========================

-This release changes the default settings so that only local authenticated users can query the server's room directory. See the [upgrade notes](UPGRADE.rst#upgrading-to-v170) for details.
+This release changes the default settings so that only local authenticated users can query the server's room directory. See the [upgrade notes](docs/upgrade.md#upgrading-to-v170) for details.

 Support for SQLite versions before 3.11 is now deprecated. A future release will refuse to start if used with an SQLite version before 3.11.

@@ -3838,7 +3892,7 @@ Synapse 1.4.0rc1 (2019-09-26)
 =============================

 Note that this release includes significant changes around 3pid
-verification. Administrators are reminded to review the [upgrade notes](UPGRADE.rst#upgrading-to-v140).
+verification. Administrators are reminded to review the [upgrade notes](docs/upgrade.md#upgrading-to-v140).

 Features
 --------

@@ -4214,7 +4268,7 @@ Synapse 1.1.0 (2019-07-04)
 ==========================

 As of v1.1.0, Synapse no longer supports Python 2, nor Postgres version 9.4.
-See the [upgrade notes](UPGRADE.rst#upgrading-to-v110) for more details.
+See the [upgrade notes](docs/upgrade.md#upgrading-to-v110) for more details.

 This release also deprecates the use of environment variables to configure the
 docker image. See the [docker README](https://github.com/matrix-org/synapse/blob/release-v1.1.0/docker/README.md#legacy-dynamic-configuration-file-support)

@@ -4244,7 +4298,7 @@ Synapse 1.1.0rc1 (2019-07-02)
 =============================

 As of v1.1.0, Synapse no longer supports Python 2, nor Postgres version 9.4.
-See the [upgrade notes](UPGRADE.rst#upgrading-to-v110) for more details.
+See the [upgrade notes](docs/upgrade.md#upgrading-to-v110) for more details.

 Features
 --------

@@ -5016,7 +5070,7 @@ run on Python versions 3.5 or 3.6 (as well as 2.7). Support for Python 3.7
 remains experimental.

 We recommend upgrading to Python 3, but make sure to read the [upgrade
-notes](UPGRADE.rst#upgrading-to-v0340) when doing so.
+notes](docs/upgrade.md#upgrading-to-v0340) when doing so.

 Features
 --------


@@ -25,7 +25,7 @@ The overall architecture is::

 ``#matrix:matrix.org`` is the official support room for Matrix, and can be
 accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html or
-via IRC bridge at irc://irc.freenode.net/matrix.
+via IRC bridge at irc://irc.libera.chat/matrix.

 Synapse is currently in rapid development, but as of version 0.5 we believe it
 is sufficiently stable to be run as an internet-facing service for real usage!

@@ -186,11 +186,11 @@ impact to other applications will be minimal.
 Upgrading an existing Synapse
 =============================

-The instructions for upgrading synapse are in `UPGRADE.rst`_.
+The instructions for upgrading synapse are in `the upgrade notes`_.
 Please check these instructions as upgrading may require extra steps for some
 versions of synapse.

-.. _UPGRADE.rst: UPGRADE.rst
+.. _the upgrade notes: https://matrix-org.github.io/synapse/develop/upgrade.html

 .. _reverse-proxy:

File diff suppressed because it is too large

debian/changelog

@@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.37.1ubuntu1) UNRELEASED; urgency=medium
+
+  * Add synapse_review_recent_signups script
+
+ -- Erik Johnston <erikj@matrix.org>  Thu, 01 Jul 2021 15:55:03 +0100
+
 matrix-synapse-py3 (1.37.1) stable; urgency=medium

   * New synapse release 1.37.1.


@@ -1,90 +1,58 @@
-.\" generated with Ronn/v0.7.3
-.\" http://github.com/rtomayko/ronn/tree/0.7.3
-.
-.TH "HASH_PASSWORD" "1" "February 2017" "" ""
-.
+.\" generated with Ronn-NG/v0.8.0
+.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
+.TH "HASH_PASSWORD" "1" "July 2021" "" ""
 .SH "NAME"
 \fBhash_password\fR \- Calculate the hash of a new password, so that passwords can be reset
-.
 .SH "SYNOPSIS"
 \fBhash_password\fR [\fB\-p\fR|\fB\-\-password\fR [password]] [\fB\-c\fR|\fB\-\-config\fR \fIfile\fR]
-.
 .SH "DESCRIPTION"
 \fBhash_password\fR calculates the hash of a supplied password using bcrypt\.
-.
 .P
 \fBhash_password\fR takes a password as an parameter either on the command line or the \fBSTDIN\fR if not supplied\.
-.
 .P
 It accepts an YAML file which can be used to specify parameters like the number of rounds for bcrypt and password_config section having the pepper value used for the hashing\. By default \fBbcrypt_rounds\fR is set to \fB10\fR\.
-.
 .P
 The hashed password is written on the \fBSTDOUT\fR\.
-.
 .SH "FILES"
 A sample YAML file accepted by \fBhash_password\fR is described below:
-.
 .P
 bcrypt_rounds: 17 password_config: pepper: "random hashing pepper"
-.
 .SH "OPTIONS"
-.
 .TP
 \fB\-p\fR, \fB\-\-password\fR
 Read the password form the command line if [password] is supplied\. If not, prompt the user and read the password form the \fBSTDIN\fR\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
-.
 .TP
 \fB\-c\fR, \fB\-\-config\fR
 Read the supplied YAML \fIfile\fR containing the options \fBbcrypt_rounds\fR and the \fBpassword_config\fR section containing the \fBpepper\fR value\.
-.
 .SH "EXAMPLES"
 Hash from the command line:
-.
 .IP "" 4
-.
 .nf
 $ hash_password \-p "p@ssw0rd"
 $2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8\.X8fWFpum7SxZ9MFe
-.
 .fi
-.
 .IP "" 0
-.
 .P
 Hash from the STDIN:
-.
 .IP "" 4
-.
 .nf
 $ hash_password
 Password:
 Confirm password:
 $2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX\.rcuAbM8ErLoUhybG
-.
 .fi
-.
 .IP "" 0
-.
 .P
 Using a config file:
-.
 .IP "" 4
-.
 .nf
 $ hash_password \-c config\.yml
 Password:
 Confirm password:
 $2b$12$CwI\.wBNr\.w3kmiUlV3T5s\.GT2wH7uebDCovDrCOh18dFedlANK99O
-.
 .fi
-.
 .IP "" 0
-.
 .SH "COPYRIGHT"
-This man page was written by Rahul De <\fIrahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
-.
+This man page was written by Rahul De <\fI\%mailto:rahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
 .SH "SEE ALSO"
-synctl(1), synapse_port_db(1), register_new_matrix_user(1)
+synctl(1), synapse_port_db(1), register_new_matrix_user(1), synapse_review_recent_signups(1)
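The example hashes above are standard bcrypt. A short sketch of producing and checking such a hash with the Python `bcrypt` package (which hash_password itself uses); appending `password_config.pepper` to the password before hashing mirrors Synapse's behaviour, and the literal values are placeholders:

```python
import bcrypt

password = "p@ssw0rd"
pepper = "random hashing pepper"  # password_config.pepper from the YAML file

# bcrypt_rounds (default 10) becomes the gensalt work factor.
hashed = bcrypt.hashpw((password + pepper).encode("utf8"), bcrypt.gensalt(rounds=10))
assert bcrypt.checkpw((password + pepper).encode("utf8"), hashed)
```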


@@ -66,4 +66,4 @@ for Debian GNU/Linux distribution.

 ## SEE ALSO

-synctl(1), synapse_port_db(1), register_new_matrix_user(1)
+synctl(1), synapse_port_db(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

debian/manpages

@@ -1,4 +1,5 @@
 debian/hash_password.1
 debian/register_new_matrix_user.1
 debian/synapse_port_db.1
+debian/synapse_review_recent_signups.1
 debian/synctl.1


@@ -1,4 +1,5 @@
 opt/venvs/matrix-synapse/bin/hash_password usr/bin/hash_password
 opt/venvs/matrix-synapse/bin/register_new_matrix_user usr/bin/register_new_matrix_user
 opt/venvs/matrix-synapse/bin/synapse_port_db usr/bin/synapse_port_db
+opt/venvs/matrix-synapse/bin/synapse_review_recent_signups usr/bin/synapse_review_recent_signups
 opt/venvs/matrix-synapse/bin/synctl usr/bin/synctl


@@ -1,72 +1,47 @@
-.\" generated with Ronn/v0.7.3
-.\" http://github.com/rtomayko/ronn/tree/0.7.3
-.
-.TH "REGISTER_NEW_MATRIX_USER" "1" "February 2017" "" ""
-.
+.\" generated with Ronn-NG/v0.8.0
+.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
+.TH "REGISTER_NEW_MATRIX_USER" "1" "July 2021" "" ""
 .SH "NAME"
 \fBregister_new_matrix_user\fR \- Used to register new users with a given home server when registration has been disabled
-.
 .SH "SYNOPSIS"
-\fBregister_new_matrix_user\fR options\.\.\.
+\fBregister_new_matrix_user\fR options\|\.\|\.\|\.
 .SH "DESCRIPTION"
 \fBregister_new_matrix_user\fR registers new users with a given home server when registration has been disabled\. For this to work, the home server must be configured with the \'registration_shared_secret\' option set\.
-.
 .P
 This accepts the user credentials like the username, password, is user an admin or not and registers the user onto the homeserver database\. Also, a YAML file containing the shared secret can be provided\. If not, the shared secret can be provided via the command line\.
-.
 .P
 By default it assumes the home server URL to be \fBhttps://localhost:8448\fR\. This can be changed via the \fBserver_url\fR command line option\.
-.
 .SH "FILES"
 A sample YAML file accepted by \fBregister_new_matrix_user\fR is described below:
-.
 .IP "" 4
-.
 .nf
 registration_shared_secret: "s3cr3t"
-.
 .fi
-.
 .IP "" 0
-.
 .SH "OPTIONS"
-.
 .TP
 \fB\-u\fR, \fB\-\-user\fR
 Local part of the new user\. Will prompt if omitted\.
-.
 .TP
 \fB\-p\fR, \fB\-\-password\fR
 New password for user\. Will prompt if omitted\. Supplying the password on the command line is not recommended\. Use the STDIN instead\.
-.
 .TP
 \fB\-a\fR, \fB\-\-admin\fR
 Register new user as an admin\. Will prompt if omitted\.
-.
 .TP
 \fB\-c\fR, \fB\-\-config\fR
 Path to server config file containing the shared secret\.
-.
 .TP
 \fB\-k\fR, \fB\-\-shared\-secret\fR
 Shared secret as defined in server config file\. This is an optional parameter as it can be also supplied via the YAML file\.
-.
 .TP
 \fBserver_url\fR
 URL of the home server\. Defaults to \'https://localhost:8448\'\.
-.
 .SH "EXAMPLES"
-.
 .nf
 $ register_new_matrix_user \-u user1 \-p p@ssword \-a \-c config\.yaml
-.
 .fi
-.
 .SH "COPYRIGHT"
-This man page was written by Rahul De <\fIrahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
-.
+This man page was written by Rahul De <\fI\%mailto:rahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
 .SH "SEE ALSO"
-synctl(1), synapse_port_db(1), hash_password(1)
+synctl(1), synapse_port_db(1), hash_password(1), synapse_review_recent_signups(1)
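For context, a sketch of the shared-secret registration exchange this tool performs against the homeserver (server URL and credentials are placeholders; the HMAC construction follows Synapse's shared-secret registration API):

```python
import hashlib
import hmac

import requests

server = "https://localhost:8448"
secret = b"s3cr3t"  # registration_shared_secret from the server config

# Fetch a nonce, then POST it back with an HMAC-SHA1 over the request fields.
nonce = requests.get(f"{server}/_synapse/admin/v1/register").json()["nonce"]

mac = hmac.new(secret, digestmod=hashlib.sha1)
mac.update(nonce.encode("utf8"))
mac.update(b"\x00user1\x00p@ssword\x00admin")  # last field: admin or notadmin

resp = requests.post(
    f"{server}/_synapse/admin/v1/register",
    json={
        "nonce": nonce,
        "username": "user1",
        "password": "p@ssword",
        "admin": True,
        "mac": mac.hexdigest(),
    },
)
resp.raise_for_status()
```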


@@ -58,4 +58,4 @@ for Debian GNU/Linux distribution.

 ## SEE ALSO

-synctl(1), synapse_port_db(1), hash_password(1)
+synctl(1), synapse_port_db(1), hash_password(1), synapse_review_recent_signups(1)


@@ -1,83 +1,56 @@
-.\" generated with Ronn/v0.7.3
-.\" http://github.com/rtomayko/ronn/tree/0.7.3
-.
-.TH "SYNAPSE_PORT_DB" "1" "February 2017" "" ""
-.
+.\" generated with Ronn-NG/v0.8.0
+.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
+.TH "SYNAPSE_PORT_DB" "1" "July 2021" "" ""
 .SH "NAME"
 \fBsynapse_port_db\fR \- A script to port an existing synapse SQLite database to a new PostgreSQL database\.
-.
 .SH "SYNOPSIS"
 \fBsynapse_port_db\fR [\-v] \-\-sqlite\-database=\fIdbfile\fR \-\-postgres\-config=\fIyamlconfig\fR [\-\-curses] [\-\-batch\-size=\fIbatch\-size\fR]
-.
 .SH "DESCRIPTION"
 \fBsynapse_port_db\fR ports an existing synapse SQLite database to a new PostgreSQL database\.
-.
 .P
 SQLite database is specified with \fB\-\-sqlite\-database\fR option and PostgreSQL configuration required to connect to PostgreSQL database is provided using \fB\-\-postgres\-config\fR configuration\. The configuration is specified in YAML format\.
-.
 .SH "OPTIONS"
-.
 .TP
 \fB\-v\fR
 Print log messages in \fBdebug\fR level instead of \fBinfo\fR level\.
-.
 .TP
 \fB\-\-sqlite\-database\fR
 The snapshot of the SQLite database file\. This must not be currently used by a running synapse server\.
-.
 .TP
 \fB\-\-postgres\-config\fR
 The database config file for the PostgreSQL database\.
-.
 .TP
 \fB\-\-curses\fR
 Display a curses based progress UI\.
-.
 .SH "CONFIG FILE"
 The postgres configuration file must be a valid YAML file with the following options\.
-.
-.IP "\(bu" 4
+.IP "\[ci]" 4
 \fBdatabase\fR: Database configuration section\. This section header can be ignored and the options below may be specified as top level keys\.
-.
-.IP "\(bu" 4
+.IP "\[ci]" 4
 \fBname\fR: Connector to use when connecting to the database\. This value must be \fBpsycopg2\fR\.
-.
-.IP "\(bu" 4
+.IP "\[ci]" 4
 \fBargs\fR: DB API 2\.0 compatible arguments to send to the \fBpsycopg2\fR module\.
-.
-.IP "\(bu" 4
+.IP "\[ci]" 4
 \fBdbname\fR \- the database name
-.
-.IP "\(bu" 4
+.IP "\[ci]" 4
 \fBuser\fR \- user name used to authenticate
-.
-.IP "\(bu" 4
+.IP "\[ci]" 4
 \fBpassword\fR \- password used to authenticate
-.
-.IP "\(bu" 4
+.IP "\[ci]" 4
 \fBhost\fR \- database host address (defaults to UNIX socket if not provided)
-.
-.IP "\(bu" 4
+.IP "\[ci]" 4
 \fBport\fR \- connection port number (defaults to 5432 if not provided)
-.
 .IP "" 0
-.
-.IP "\(bu" 4
+.IP "\[ci]" 4
 \fBsynchronous_commit\fR: Optional\. Default is True\. If the value is \fBFalse\fR, enable asynchronous commit and don\'t wait for the server to call fsync before ending the transaction\. See: https://www\.postgresql\.org/docs/current/static/wal\-async\-commit\.html
-.
 .IP "" 0
-.
 .IP "" 0
-.
 .P
 Following example illustrates the configuration file format\.
-.
 .IP "" 4
-.
 .nf
 database:
   name: psycopg2
   args:
@@ -86,13 +59,9 @@ database:
     password: ORohmi9Eet=ohphi
     host: localhost
   synchronous_commit: false
-.
 .fi
-.
 .IP "" 0
-.
 .SH "COPYRIGHT"
-This man page was written by Sunil Mohan Adapa <\fIsunil@medhas\.org\fR> for Debian GNU/Linux distribution\.
-.
+This man page was written by Sunil Mohan Adapa <\fI\%mailto:sunil@medhas\.org\fR> for Debian GNU/Linux distribution\.
 .SH "SEE ALSO"
-synctl(1), hash_password(1), register_new_matrix_user(1)
+synctl(1), hash_password(1), register_new_matrix_user(1), synapse_review_recent_signups(1)


@@ -84,4 +84,4 @@ Debian GNU/Linux distribution.

 ## SEE ALSO

-synctl(1), hash_password(1), register_new_matrix_user(1)
+synctl(1), hash_password(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

debian/synapse_review_recent_signups.1 (new file)

@@ -0,0 +1,26 @@
.\" generated with Ronn-NG/v0.8.0
.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
.TH "SYNAPSE_REVIEW_RECENT_SIGNUPS" "1" "July 2021" "" ""
.SH "NAME"
\fBsynapse_review_recent_signups\fR \- Print users that have recently registered on Synapse
.SH "SYNOPSIS"
\fBsynapse_review_recent_signups\fR \fB\-c\fR|\fB\-\-config\fR \fIfile\fR [\fB\-s\fR|\fB\-\-since\fR \fIperiod\fR] [\fB\-e\fR|\fB\-\-exclude\-emails\fR] [\fB\-u\fR|\fB\-\-only\-users\fR]
.SH "DESCRIPTION"
\fBsynapse_review_recent_signups\fR prints out recently registered users on a Synapse server, as well as some basic information about the user\.
.P
\fBsynapse_review_recent_signups\fR must be supplied with the config of the Synapse server, so that it can fetch the database config and connect to the database\.
.SH "OPTIONS"
.TP
\fB\-c\fR, \fB\-\-config\fR
The config file(s) used by the Synapse server\.
.TP
\fB\-s\fR, \fB\-\-since\fR
How far back to search for newly registered users\. Defaults to 7d, i\.e\. up to seven days in the past\. Valid units are \'s\', \'m\', \'h\', \'d\', \'w\', or \'y\'\.
.TP
\fB\-e\fR, \fB\-\-exclude\-emails\fR
Do not print out users that have validated emails associated with their account\.
.TP
\fB\-u\fR, \fB\-\-only\-users\fR
Only print out the user IDs of recently registered users, without any additional information
.SH "SEE ALSO"
synctl(1), synapse_port_db(1), register_new_matrix_user(1), hash_password(1)


@@ -0,0 +1,37 @@
synapse_review_recent_signups(1) -- Print users that have recently registered on Synapse
========================================================================================

## SYNOPSIS

`synapse_review_recent_signups` `-c`|`--config` <file> [`-s`|`--since` <period>] [`-e`|`--exclude-emails`] [`-u`|`--only-users`]

## DESCRIPTION

**synapse_review_recent_signups** prints out recently registered users on a
Synapse server, as well as some basic information about the user.

`synapse_review_recent_signups` must be supplied with the config of the Synapse
server, so that it can fetch the database config and connect to the database.

## OPTIONS

  * `-c`, `--config`:
    The config file(s) used by the Synapse server.

  * `-s`, `--since`:
    How far back to search for newly registered users. Defaults to 7d, i.e. up
    to seven days in the past. Valid units are 's', 'm', 'h', 'd', 'w', or 'y'.

  * `-e`, `--exclude-emails`:
    Do not print out users that have validated emails associated with their
    account.

  * `-u`, `--only-users`:
    Only print out the user IDs of recently registered users, without any
    additional information.

## SEE ALSO

synctl(1), synapse_port_db(1), register_new_matrix_user(1), hash_password(1)

debian/synctl.1

@@ -1,63 +1,41 @@
-.\" generated with Ronn/v0.7.3
-.\" http://github.com/rtomayko/ronn/tree/0.7.3
-.
-.TH "SYNCTL" "1" "February 2017" "" ""
-.
+.\" generated with Ronn-NG/v0.8.0
+.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
+.TH "SYNCTL" "1" "July 2021" "" ""
 .SH "NAME"
 \fBsynctl\fR \- Synapse server control interface
-.
 .SH "SYNOPSIS"
 Start, stop or restart synapse server\.
-.
 .P
 \fBsynctl\fR {start|stop|restart} [configfile] [\-w|\-\-worker=\fIWORKERCONFIG\fR] [\-a|\-\-all\-processes=\fIWORKERCONFIGDIR\fR]
-.
 .SH "DESCRIPTION"
 \fBsynctl\fR can be used to start, stop or restart Synapse server\. The control operation can be done on all processes or a single worker process\.
-.
 .SH "OPTIONS"
-.
 .TP
 \fBaction\fR
 The value of action should be one of \fBstart\fR, \fBstop\fR or \fBrestart\fR\.
-.
 .TP
 \fBconfigfile\fR
 Optional path of the configuration file to use\. Default value is \fBhomeserver\.yaml\fR\. The configuration file must exist for the operation to succeed\.
-.
 .TP
 \fB\-w\fR, \fB\-\-worker\fR:
-.
-.IP
 Perform start, stop or restart operations on a single worker\. Incompatible with \fB\-a\fR|\fB\-\-all\-processes\fR\. Value passed must be a valid worker\'s configuration file\.
-.
 .TP
 \fB\-a\fR, \fB\-\-all\-processes\fR:
-.
-.IP
 Perform start, stop or restart operations on all the workers in the given directory and the main synapse process\. Incompatible with \fB\-w\fR|\fB\-\-worker\fR\. Value passed must be a directory containing valid work configuration files\. All files ending with \fB\.yaml\fR extension shall be considered as configuration files and all other files in the directory are ignored\.
-.
 .SH "CONFIGURATION FILE"
 Configuration file may be generated as follows:
-.
 .IP "" 4
-.
 .nf
 $ python \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name>
-.
 .fi
-.
 .IP "" 0
-.
 .SH "ENVIRONMENT"
-.
 .TP
 \fBSYNAPSE_CACHE_FACTOR\fR
-Synapse\'s architecture is quite RAM hungry currently \- a lot of recent room data and metadata is deliberately cached in RAM in order to speed up common requests\. This will be improved in future, but for now the easiest way to either reduce the RAM usage (at the risk of slowing things down) is to set the SYNAPSE_CACHE_FACTOR environment variable\. Roughly speaking, a SYNAPSE_CACHE_FACTOR of 1\.0 will max out at around 3\-4GB of resident memory \- this is what we currently run the matrix\.org on\. The default setting is currently 0\.1, which is probably around a ~700MB footprint\. You can dial it down further to 0\.02 if desired, which targets roughly ~512MB\. Conversely you can dial it up if you need performance for lots of users and have a box with a lot of RAM\.
-.
+Synapse\'s architecture is quite RAM hungry currently \- we deliberately cache a lot of recent room data and metadata in RAM in order to speed up common requests\. We\'ll improve this in the future, but for now the easiest way to either reduce the RAM usage (at the risk of slowing things down) is to set the almost\-undocumented \fBSYNAPSE_CACHE_FACTOR\fR environment variable\. The default is 0\.5, which can be decreased to reduce RAM usage in memory constrained enviroments, or increased if performance starts to degrade\.
+.IP
+However, degraded performance due to a low cache factor, common on machines with slow disks, often leads to explosions in memory use due backlogged requests\. In this case, reducing the cache factor will make things worse\. Instead, try increasing it drastically\. 2\.0 is a good starting value\.
 .SH "COPYRIGHT"
-This man page was written by Sunil Mohan Adapa <\fIsunil@medhas\.org\fR> for Debian GNU/Linux distribution\.
-.
+This man page was written by Sunil Mohan Adapa <\fI\%mailto:sunil@medhas\.org\fR> for Debian GNU/Linux distribution\.
 .SH "SEE ALSO"
-synapse_port_db(1), hash_password(1), register_new_matrix_user(1)
+synapse_port_db(1), hash_password(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

debian/synctl.ronn

@@ -68,4 +68,4 @@ Debian GNU/Linux distribution.

 ## SEE ALSO

-synapse_port_db(1), hash_password(1), register_new_matrix_user(1)
+synapse_port_db(1), hash_password(1), register_new_matrix_user(1), synapse_review_recent_signups(1)


@@ -11,7 +11,7 @@
   - [Delegation](delegate.md)

 # Upgrading
-  - [Upgrading between Synapse Versions](upgrading/README.md)
+  - [Upgrading between Synapse Versions](upgrade.md)
   - [Upgrading from pre-Synapse 1.0](MSC1711_certificates_FAQ.md)

 # Usage


@@ -36,7 +36,17 @@ It returns a JSON body like the following:
     "creation_ts": 1560432506,
     "appservice_id": null,
     "consent_server_notice_sent": null,
-    "consent_version": null
+    "consent_version": null,
+    "external_ids": [
+        {
+            "auth_provider": "<provider1>",
+            "external_id": "<user_id_provider_1>"
+        },
+        {
+            "auth_provider": "<provider2>",
+            "external_id": "<user_id_provider_2>"
+        }
+    ]
 }
 ```
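A quick sketch of fetching that body (including the new `external_ids` field) from the Query User Account admin API; the homeserver URL, user ID and token are placeholders:

```python
import requests

resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v2/users/@user:example.com",
    headers={"Authorization": "Bearer <admin_access_token>"},
)
resp.raise_for_status()

# Each entry links the Matrix user to an identity at an SSO provider.
for entry in resp.json().get("external_ids", []):
    print(entry["auth_provider"], entry["external_id"])
```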


@@ -194,7 +194,7 @@ In order to port a module that uses Synapse's old module interface, its author n
 * ensure the module's callbacks are all asynchronous.
 * register their callbacks using one or more of the `register_[...]_callbacks` methods
-  from the `ModuleApi` class in the module's `__init__` method (see [this section](#registering-a-web-resource)
+  from the `ModuleApi` class in the module's `__init__` method (see [this section](#registering-a-callback)
   for more info).

 Additionally, if the module is packaged with an additional web resource, the module
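As a sketch of the porting pattern this documentation describes (the module and callback here are illustrative; `register_spam_checker_callbacks` is one of the `register_[...]_callbacks` methods on `ModuleApi`):

```python
class MyModule:
    def __init__(self, config: dict, api):
        # `api` is Synapse's ModuleApi. Callbacks are registered here,
        # in __init__, rather than via the old module interface.
        self._api = api
        api.register_spam_checker_callbacks(
            user_may_create_room=self.user_may_create_room,
        )

    async def user_may_create_room(self, user_id: str) -> bool:
        # All callbacks must be asynchronous.
        return True
```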


@@ -222,7 +222,9 @@ Synapse, amend your homeserver config file with the following.
 ```yaml
 presence:
-  routing_module:
+  enabled: true
+
+  presence_router:
     module: my_module.ExamplePresenceRouter
     config:
       # Any configuration options for your module. The below is an example.
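For reference, a bare-bones skeleton of the `my_module.ExamplePresenceRouter` named in the corrected config, following the presence router callbacks this documentation page describes (treat it as an illustrative stub, not a working router):

```python
class ExamplePresenceRouter:
    def __init__(self, config, module_api):
        self._module_api = module_api

    async def get_users_for_states(self, state_updates):
        # Map extra destination user IDs to the set of presence updates
        # they should receive. An empty dict routes nothing extra.
        return {}

    async def get_interested_users(self, user_id: str):
        # Return the set of user IDs whose presence the given user is
        # interested in, beyond the default routing.
        return set()
```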


@@ -703,6 +703,12 @@ caches:
   per_cache_factors:
     #get_users_who_share_room_with_user: 2.0

+  # Controls how long an entry can be in a cache without having been
+  # accessed before being evicted. Defaults to None, which means
+  # entries are never evicted based on time.
+  #
+  #expiry_time: 30m
+

 ## Database ##
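A toy sketch of the idea behind the new `expiry_time` option (this is not Synapse's implementation): each entry records its last access time, and anything untouched for longer than the window is evicted.

```python
import time
from collections import OrderedDict


class TimeEvictingCache:
    def __init__(self, expiry_seconds: float):
        self.expiry_seconds = expiry_seconds
        self._data: OrderedDict = OrderedDict()  # key -> (value, last_accessed)

    def set(self, key, value):
        self._data[key] = (value, time.monotonic())
        self._data.move_to_end(key)

    def get(self, key):
        value, _ = self._data[key]  # KeyError if absent, as with a plain dict
        self._data[key] = (value, time.monotonic())  # refresh last access time
        self._data.move_to_end(key)
        return value

    def evict_expired(self):
        now = time.monotonic()
        for key, (_, last_accessed) in list(self._data.items()):
            if now - last_accessed > self.expiry_seconds:
                del self._data[key]
```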

docs/upgrade.md (new file, 1391 lines)

File diff suppressed because it is too large


@@ -1,7 +0,0 @@
-<!--
-Include the contents of UPGRADE.rst from the project root without moving it, which may
-break links around the internet. Additionally, note that SUMMARY.md is unable to
-directly link to content outside of the docs/ directory. So we use this file as a
-redirection.
--->
-{{#include ../../UPGRADE.rst}}


@@ -75,6 +75,7 @@ files =
   synapse/util/daemonize.py,
   synapse/util/hash.py,
   synapse/util/iterutils.py,
+  synapse/util/linked_list.py,
   synapse/util/metrics.py,
   synapse/util/macaroons.py,
   synapse/util/module_loader.py,


@@ -65,4 +65,4 @@ if [[ -n "$1" ]]; then
 fi

 # Run the tests!
-go test -v -tags synapse_blacklist,msc2946,msc3083,msc2716 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests
+go test -v -tags synapse_blacklist,msc2946,msc3083,msc2716,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests


@@ -83,12 +83,6 @@ def run():
     if current_version.pre:
         # If the current version is an RC we don't need to bump any of the
        # version numbers (other than the RC number).
-        base_version = "{}.{}.{}".format(
-            current_version.major,
-            current_version.minor,
-            current_version.micro,
-        )
-
        if rc:
            new_version = "{}.{}.{}rc{}".format(
                current_version.major,
@@ -97,38 +91,43 @@ def run():
                current_version.pre[1] + 1,
            )
        else:
-            new_version = base_version
+            new_version = "{}.{}.{}".format(
+                current_version.major,
+                current_version.minor,
+                current_version.micro,
+            )
    else:
-        # If this is a new release cycle then we need to know if its a major
-        # version bump or a hotfix.
+        # If this is a new release cycle then we need to know if it's a minor
+        # or a patch version bump.
        release_type = click.prompt(
            "Release type",
-            type=click.Choice(("major", "hotfix")),
+            type=click.Choice(("minor", "patch")),
            show_choices=True,
-            default="major",
+            default="minor",
        )

-        if release_type == "major":
-            base_version = new_version = "{}.{}.{}".format(
-                current_version.major,
-                current_version.minor + 1,
-                0,
-            )
-            if rc:
-                new_version = "{}.{}.{}rc1".format(
-                    current_version.major,
-                    current_version.minor + 1,
-                    0,
-                )
+        if release_type == "minor":
+            if rc:
+                new_version = "{}.{}.{}rc1".format(
+                    current_version.major,
+                    current_version.minor + 1,
+                    0,
+                )
+            else:
+                new_version = "{}.{}.{}".format(
+                    current_version.major,
+                    current_version.minor + 1,
+                    0,
+                )
        else:
-            base_version = new_version = "{}.{}.{}".format(
-                current_version.major,
-                current_version.minor,
-                current_version.micro + 1,
-            )
-            if rc:
-                new_version = "{}.{}.{}rc1".format(
-                    current_version.major,
-                    current_version.minor,
-                    current_version.micro + 1,
-                )
+            if rc:
+                new_version = "{}.{}.{}rc1".format(
+                    current_version.major,
+                    current_version.minor,
+                    current_version.micro + 1,
+                )
+            else:
+                new_version = "{}.{}.{}".format(
+                    current_version.major,
+                    current_version.minor,
+                    current_version.micro + 1,
+                )
@@ -139,7 +138,10 @@ def run():
        click.get_current_context().abort()

    # Switch to the release branch.
-    release_branch_name = f"release-v{current_version.major}.{current_version.minor}"
+    parsed_new_version = version.parse(new_version)
+    release_branch_name = (
+        f"release-v{parsed_new_version.major}.{parsed_new_version.minor}"
+    )

    release_branch = find_ref(repo, release_branch_name)
    if release_branch:
        if release_branch.is_remote():
@@ -153,7 +155,7 @@ def run():
            # release type.
            if current_version.is_prerelease:
                default = release_branch_name
-            elif release_type == "major":
+            elif release_type == "minor":
                default = "develop"
            else:
                default = "master"


@@ -93,6 +93,7 @@ BOOLEAN_COLUMNS = {
     "local_media_repository": ["safe_from_quarantine"],
     "users": ["shadow_banned"],
     "e2e_fallback_keys_json": ["used"],
+    "access_tokens": ["used"],
 }

@@ -307,7 +308,8 @@ class Porter(object):
                 information_schema.table_constraints AS tc
                 INNER JOIN information_schema.constraint_column_usage AS ccu
                     USING (table_schema, constraint_name)
-                WHERE tc.constraint_type = 'FOREIGN KEY';
+                WHERE tc.constraint_type = 'FOREIGN KEY'
+                  AND tc.table_name != ccu.table_name;
             """
             txn.execute(sql)


@@ -0,0 +1,19 @@
#!/usr/bin/env python
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from synapse._scripts.review_recent_signups import main

if __name__ == "__main__":
    main()


@@ -47,7 +47,7 @@ try:
 except ImportError:
     pass

-__version__ = "1.37.1"
+__version__ = "1.38.0rc1"

 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when


@@ -0,0 +1,175 @@
#!/usr/bin/env python
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import sys
import time
from datetime import datetime
from typing import List

import attr

from synapse.config._base import RootConfig, find_config_files, read_config_files
from synapse.config.database import DatabaseConfig
from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
from synapse.storage.engines import create_engine


class ReviewConfig(RootConfig):
    "A config class that just pulls out the database config"
    config_classes = [DatabaseConfig]


@attr.s(auto_attribs=True)
class UserInfo:
    user_id: str
    creation_ts: int
    emails: List[str] = attr.Factory(list)
    private_rooms: List[str] = attr.Factory(list)
    public_rooms: List[str] = attr.Factory(list)
    ips: List[str] = attr.Factory(list)


def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
    """Fetches recently registered users and some info on them."""

    sql = """
        SELECT name, creation_ts FROM users
        WHERE
            ? <= creation_ts
            AND deactivated = 0
    """

    txn.execute(sql, (since_ms / 1000,))

    user_infos = [UserInfo(user_id, creation_ts) for user_id, creation_ts in txn]

    for user_info in user_infos:
        user_info.emails = DatabasePool.simple_select_onecol_txn(
            txn,
            table="user_threepids",
            keyvalues={"user_id": user_info.user_id, "medium": "email"},
            retcol="address",
        )

        sql = """
            SELECT room_id, canonical_alias, name, join_rules
            FROM local_current_membership
            INNER JOIN room_stats_state USING (room_id)
            WHERE user_id = ? AND membership = 'join'
        """

        txn.execute(sql, (user_info.user_id,))
        for room_id, canonical_alias, name, join_rules in txn:
            if join_rules == "public":
                user_info.public_rooms.append(canonical_alias or name or room_id)
            else:
                user_info.private_rooms.append(canonical_alias or name or room_id)

        user_info.ips = DatabasePool.simple_select_onecol_txn(
            txn,
            table="user_ips",
            keyvalues={"user_id": user_info.user_id},
            retcol="ip",
        )

    return user_infos


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-c",
        "--config-path",
        action="append",
        metavar="CONFIG_FILE",
        help="The config files for Synapse.",
        required=True,
    )
    parser.add_argument(
        "-s",
        "--since",
        metavar="duration",
        help="Specify how far back to review user registrations for, defaults to 7d (i.e. 7 days).",
        default="7d",
    )
    parser.add_argument(
        "-e",
        "--exclude-emails",
        action="store_true",
        help="Exclude users that have validated email addresses",
    )
    parser.add_argument(
        "-u",
        "--only-users",
        action="store_true",
        help="Only print user IDs that match.",
    )

    config = ReviewConfig()

    config_args = parser.parse_args(sys.argv[1:])
    config_files = find_config_files(search_paths=config_args.config_path)
    config_dict = read_config_files(config_files)
    config.parse_config_dict(
        config_dict,
    )

    since_ms = time.time() * 1000 - config.parse_duration(config_args.since)
    exclude_users_with_email = config_args.exclude_emails
    include_context = not config_args.only_users

    for database_config in config.database.databases:
        if "main" in database_config.databases:
            break

    engine = create_engine(database_config.config)

    with make_conn(database_config, engine, "review_recent_signups") as db_conn:
        user_infos = get_recent_users(db_conn.cursor(), since_ms)

    for user_info in user_infos:
        if exclude_users_with_email and user_info.emails:
            continue

        if include_context:
            print_public_rooms = ""
            if user_info.public_rooms:
                print_public_rooms = "(" + ", ".join(user_info.public_rooms[:3])

                if len(user_info.public_rooms) > 3:
                    print_public_rooms += ", ..."

                print_public_rooms += ")"

            print("# Created:", datetime.fromtimestamp(user_info.creation_ts))
            print("# Email:", ", ".join(user_info.emails) or "None")
            print("# IPs:", ", ".join(user_info.ips))
            print(
                "# Number joined public rooms:",
                len(user_info.public_rooms),
                print_public_rooms,
            )
            print("# Number joined private rooms:", len(user_info.private_rooms))
print("#")
print(user_info.user_id)
if include_context:
print()
if __name__ == "__main__":
main()
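For reference, the script above can be invoked along these lines (the config path and duration are example values):

    review_recent_signups -c /etc/synapse/homeserver.yaml -s 3d -e

This prints users registered over the last three days, skipping any with a validated email address; -u instead emits only the matching user IDs.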


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import logging import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple from typing import TYPE_CHECKING, Optional, Tuple
import pymacaroons import pymacaroons
from netaddr import IPAddress from netaddr import IPAddress
@@ -28,7 +28,6 @@ from synapse.api.errors import (
InvalidClientTokenError, InvalidClientTokenError,
MissingClientTokenError, MissingClientTokenError,
) )
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.appservice import ApplicationService from synapse.appservice import ApplicationService
from synapse.events import EventBase from synapse.events import EventBase
from synapse.http import get_request_user_agent from synapse.http import get_request_user_agent
@@ -38,7 +37,6 @@ from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import Requester, StateMap, UserID, create_requester from synapse.types import Requester, StateMap, UserID, create_requester
from synapse.util.caches.lrucache import LruCache from synapse.util.caches.lrucache import LruCache
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.metrics import Measure
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
@@ -46,15 +44,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
AuthEventTypes = (
EventTypes.Create,
EventTypes.Member,
EventTypes.PowerLevels,
EventTypes.JoinRules,
EventTypes.RoomHistoryVisibility,
EventTypes.ThirdPartyInvite,
)
# guests always get this device id. # guests always get this device id.
GUEST_DEVICE_ID = "guest_device" GUEST_DEVICE_ID = "guest_device"
@@ -65,9 +54,7 @@ class _InvalidMacaroonException(Exception):
class Auth: class Auth:
""" """
FIXME: This class contains a mix of functions for authenticating users This class contains functions for authenticating users of our client-server API.
of our client-server API and authenticating events added to room graphs.
The latter should be moved to synapse.handlers.event_auth.EventAuthHandler.
""" """
def __init__(self, hs: "HomeServer"): def __init__(self, hs: "HomeServer"):
@@ -89,18 +76,6 @@ class Auth:
self._macaroon_secret_key = hs.config.macaroon_secret_key self._macaroon_secret_key = hs.config.macaroon_secret_key
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
async def check_from_context(
self, room_version: str, event, context, do_sig_check=True
) -> None:
auth_event_ids = event.auth_event_ids()
auth_events_by_id = await self.store.get_events(auth_event_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_by_id.values()}
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
event_auth.check(
room_version_obj, event, auth_events=auth_events, do_sig_check=do_sig_check
)
async def check_user_in_room( async def check_user_in_room(
self, self,
room_id: str, room_id: str,
@@ -151,13 +126,6 @@ class Auth:
raise AuthError(403, "User %s not in room %s" % (user_id, room_id)) raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
async def check_host_in_room(self, room_id: str, host: str) -> bool:
with Measure(self.clock, "check_host_in_room"):
return await self.store.is_host_joined(room_id, host)
def get_public_keys(self, invite_event: EventBase) -> List[Dict[str, Any]]:
return event_auth.get_public_keys(invite_event)
async def get_user_by_req( async def get_user_by_req(
self, self,
request: SynapseRequest, request: SynapseRequest,
@@ -245,6 +213,11 @@ class Auth:
errcode=Codes.GUEST_ACCESS_FORBIDDEN, errcode=Codes.GUEST_ACCESS_FORBIDDEN,
) )
# Mark the token as used. This is used to invalidate old refresh
# tokens after some time.
if not user_info.token_used and token_id is not None:
await self.store.mark_access_token_as_used(token_id)
requester = create_requester( requester = create_requester(
user_info.user_id, user_info.user_id,
token_id, token_id,
@@ -483,44 +456,6 @@ class Auth:
""" """
return await self.store.is_server_admin(user) return await self.store.is_server_admin(user)
def compute_auth_events(
self,
event,
current_state_ids: StateMap[str],
for_verification: bool = False,
) -> List[str]:
"""Given an event and current state return the list of event IDs used
to auth an event.
If `for_verification` is False then only return auth events that
should be added to the event's `auth_events`.
Returns:
List of event IDs.
"""
if event.type == EventTypes.Create:
return []
# Currently we ignore the `for_verification` flag even though there are
# some situations where we can drop particular auth events when adding
# to the event's `auth_events` (e.g. joins pointing to previous joins
# when room is publicly joinable). Dropping event IDs has the
# advantage that the auth chain for the room grows slower, but we use
# the auth chain in state resolution v2 to order events, which means
# care must be taken if dropping events to ensure that it doesn't
# introduce undesirable "state reset" behaviour.
#
# All of which sounds a bit tricky so we don't bother for now.
auth_ids = []
for etype, state_key in event_auth.auth_types_for_event(event):
auth_ev_id = current_state_ids.get((etype, state_key))
if auth_ev_id:
auth_ids.append(auth_ev_id)
return auth_ids
async def check_can_change_room_list(self, room_id: str, user: UserID) -> bool: async def check_can_change_room_list(self, room_id: str, user: UserID) -> bool:
"""Determine whether the user is allowed to edit the room's entry in the """Determine whether the user is allowed to edit the room's entry in the
published room list. published room list.


@@ -201,6 +201,12 @@ class EventContentFields:
) )
class RoomTypes:
"""Understood values of the room_type field of m.room.create events."""
SPACE = "m.space"
class RoomEncryptionAlgorithms: class RoomEncryptionAlgorithms:
MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2" MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2"
DEFAULT = MEGOLM_V1_AES_SHA2 DEFAULT = MEGOLM_V1_AES_SHA2


@@ -21,7 +21,7 @@ import socket
import sys import sys
import traceback import traceback
import warnings import warnings
from typing import Awaitable, Callable, Iterable from typing import TYPE_CHECKING, Awaitable, Callable, Iterable
from cryptography.utils import CryptographyDeprecationWarning from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import NoReturn from typing_extensions import NoReturn
@@ -41,10 +41,14 @@ from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.logging.context import PreserveLoggingContext from synapse.logging.context import PreserveLoggingContext
from synapse.metrics.background_process_metrics import wrap_as_background_process from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
from synapse.util.daemonize import daemonize_process from synapse.util.daemonize import daemonize_process
from synapse.util.rlimit import change_resource_limit from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string from synapse.util.versionstring import get_version_string
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
# list of tuples of function, args list, kwargs dict # list of tuples of function, args list, kwargs dict
@@ -312,7 +316,7 @@ def refresh_certificate(hs):
logger.info("Context factories updated.") logger.info("Context factories updated.")
async def start(hs: "synapse.server.HomeServer"): async def start(hs: "HomeServer"):
""" """
Start a Synapse server or worker. Start a Synapse server or worker.
@@ -365,6 +369,9 @@ async def start(hs: "synapse.server.HomeServer"):
load_legacy_spam_checkers(hs) load_legacy_spam_checkers(hs)
# If we've configured an expiry time for caches, start the background job now.
setup_expire_lru_cache_entries(hs)
# It is now safe to start your Synapse. # It is now safe to start your Synapse.
hs.start_listening() hs.start_listening()
hs.get_datastore().db_pool.start_profiling() hs.get_datastore().db_pool.start_profiling()


@@ -5,6 +5,7 @@ from synapse.config import (
api, api,
appservice, appservice,
auth, auth,
cache,
captcha, captcha,
cas, cas,
consent, consent,
@@ -88,6 +89,7 @@ class RootConfig:
tracer: tracer.TracerConfig tracer: tracer.TracerConfig
redis: redis.RedisConfig redis: redis.RedisConfig
modules: modules.ModulesConfig modules: modules.ModulesConfig
caches: cache.CacheConfig
federation: federation.FederationConfig federation: federation.FederationConfig
config_classes: List = ... config_classes: List = ...


@@ -145,6 +145,12 @@ class CacheConfig(Config):
# #
per_cache_factors: per_cache_factors:
#get_users_who_share_room_with_user: 2.0 #get_users_who_share_room_with_user: 2.0
# Controls how long an entry can be in a cache without having been
# accessed before being evicted. Defaults to None, which means
# entries are never evicted based on time.
#
#expiry_time: 30m
""" """
def read_config(self, config, **kwargs): def read_config(self, config, **kwargs):
@@ -200,6 +206,12 @@ class CacheConfig(Config):
e.message # noqa: B306, DependencyException.message is a property e.message # noqa: B306, DependencyException.message is a property
) )
expiry_time = cache_config.get("expiry_time")
if expiry_time:
self.expiry_time_msec = self.parse_duration(expiry_time)
else:
self.expiry_time_msec = None
# Resize all caches (if necessary) with the new factors we've loaded # Resize all caches (if necessary) with the new factors we've loaded
self.resize_all_caches() self.resize_all_caches()
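As a rough sketch of how the `expiry_time` string becomes `expiry_time_msec` above; this is a stand-in for `Config.parse_duration`, not Synapse's implementation:

    # Illustrative only: turn a duration string such as "30m" into
    # milliseconds; bare numeric strings are assumed to be ms already.
    def parse_duration_ms(value: str) -> int:
        units = {"s": 1000, "m": 60 * 1000, "h": 60 * 60 * 1000,
                 "d": 24 * 60 * 60 * 1000, "w": 7 * 24 * 60 * 60 * 1000,
                 "y": 365 * 24 * 60 * 60 * 1000}
        if value[-1].isdigit():
            return int(value)
        return int(value[:-1]) * units[value[-1]]

    assert parse_duration_ms("30m") == 30 * 60 * 1000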


@@ -119,6 +119,27 @@ class RegistrationConfig(Config):
session_lifetime = self.parse_duration(session_lifetime) session_lifetime = self.parse_duration(session_lifetime)
self.session_lifetime = session_lifetime self.session_lifetime = session_lifetime
# The `access_token_lifetime` applies for tokens that can be renewed
# using a refresh token, as per MSC2918. If it is `None`, the refresh
# token mechanism is disabled.
#
# Since it is incompatible with the `session_lifetime` mechanism, it is set to
# `None` by default if a `session_lifetime` is set.
access_token_lifetime = config.get(
"access_token_lifetime", "5m" if session_lifetime is None else None
)
if access_token_lifetime is not None:
access_token_lifetime = self.parse_duration(access_token_lifetime)
self.access_token_lifetime = access_token_lifetime
if session_lifetime is not None and access_token_lifetime is not None:
raise ConfigError(
"The refresh token mechanism is incompatible with the "
"`session_lifetime` option. Consider disabling the "
"`session_lifetime` option or disabling the refresh token "
"mechanism by removing the `access_token_lifetime` option."
)
# The success template used during fallback auth. # The success template used during fallback auth.
self.fallback_success_template = self.read_template("auth_success.html") self.fallback_success_template = self.read_template("auth_success.html")
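A simplified model of the defaulting rules described in the comments above (not the actual `RegistrationConfig` code):

    # No session_lifetime: access_token_lifetime defaults to "5m" and
    # refresh tokens are enabled. With a session_lifetime it defaults
    # to None (disabled); configuring both explicitly is a ConfigError.
    def resolve_access_token_lifetime(config: dict):
        session_lifetime = config.get("session_lifetime")
        access_token_lifetime = config.get(
            "access_token_lifetime", "5m" if session_lifetime is None else None
        )
        if session_lifetime is not None and access_token_lifetime is not None:
            raise ValueError("incompatible options")
        return access_token_lifetime

    assert resolve_access_token_lifetime({}) == "5m"
    assert resolve_access_token_lifetime({"session_lifetime": "24h"}) is None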


@@ -14,7 +14,7 @@
# limitations under the License. # limitations under the License.
import logging import logging
from typing import Any, Dict, List, Optional, Set, Tuple from typing import Any, Dict, List, Optional, Set, Tuple, Union
from canonicaljson import encode_canonical_json from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes from signedjson.key import decode_verify_key_bytes
@@ -29,6 +29,7 @@ from synapse.api.room_versions import (
RoomVersion, RoomVersion,
) )
from synapse.events import EventBase from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.types import StateMap, UserID, get_domain_from_id from synapse.types import StateMap, UserID, get_domain_from_id
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -724,7 +725,7 @@ def get_public_keys(invite_event: EventBase) -> List[Dict[str, Any]]:
return public_keys return public_keys
def auth_types_for_event(event: EventBase) -> Set[Tuple[str, str]]: def auth_types_for_event(event: Union[EventBase, EventBuilder]) -> Set[Tuple[str, str]]:
"""Given an event, return a list of (EventType, StateKey) that may be """Given an event, return a list of (EventType, StateKey) that may be
needed to auth the event. The returned list may be a superset of what needed to auth the event. The returned list may be a superset of what
would actually be required depending on the full state of the room. would actually be required depending on the full state of the room.


@@ -118,7 +118,7 @@ class _EventInternalMetadata:
proactively_send = DictProperty("proactively_send") # type: bool proactively_send = DictProperty("proactively_send") # type: bool
redacted = DictProperty("redacted") # type: bool redacted = DictProperty("redacted") # type: bool
txn_id = DictProperty("txn_id") # type: str txn_id = DictProperty("txn_id") # type: str
token_id = DictProperty("token_id") # type: str token_id = DictProperty("token_id") # type: int
historical = DictProperty("historical") # type: bool historical = DictProperty("historical") # type: bool
# XXX: These are set by StreamWorkerStore._set_before_and_after. # XXX: These are set by StreamWorkerStore._set_before_and_after.


@@ -12,12 +12,11 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import logging import logging
from typing import Any, Dict, List, Optional, Tuple, Union from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
import attr import attr
from nacl.signing import SigningKey from nacl.signing import SigningKey
from synapse.api.auth import Auth
from synapse.api.constants import MAX_DEPTH from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import UnsupportedRoomVersionError from synapse.api.errors import UnsupportedRoomVersionError
from synapse.api.room_versions import ( from synapse.api.room_versions import (
@@ -34,10 +33,14 @@ from synapse.types import EventID, JsonDict
from synapse.util import Clock from synapse.util import Clock
from synapse.util.stringutils import random_string from synapse.util.stringutils import random_string
if TYPE_CHECKING:
from synapse.handlers.event_auth import EventAuthHandler
from synapse.server import HomeServer
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@attr.s(slots=True, cmp=False, frozen=True) @attr.s(slots=True, cmp=False, frozen=True, auto_attribs=True)
class EventBuilder: class EventBuilder:
"""A format independent event builder used to build up the event content """A format independent event builder used to build up the event content
before signing the event. before signing the event.
@@ -62,31 +65,30 @@ class EventBuilder:
_signing_key: The signing key to use to sign the event as the server _signing_key: The signing key to use to sign the event as the server
""" """
_state = attr.ib(type=StateHandler) _state: StateHandler
_auth = attr.ib(type=Auth) _event_auth_handler: "EventAuthHandler"
_store = attr.ib(type=DataStore) _store: DataStore
_clock = attr.ib(type=Clock) _clock: Clock
_hostname = attr.ib(type=str) _hostname: str
_signing_key = attr.ib(type=SigningKey) _signing_key: SigningKey
room_version = attr.ib(type=RoomVersion) room_version: RoomVersion
room_id = attr.ib(type=str) room_id: str
type = attr.ib(type=str) type: str
sender = attr.ib(type=str) sender: str
content = attr.ib(default=attr.Factory(dict), type=JsonDict) content: JsonDict = attr.Factory(dict)
unsigned = attr.ib(default=attr.Factory(dict), type=JsonDict) unsigned: JsonDict = attr.Factory(dict)
# These only exist on a subset of events, so they raise AttributeError if # These only exist on a subset of events, so they raise AttributeError if
# someone tries to get them when they don't exist. # someone tries to get them when they don't exist.
_state_key = attr.ib(default=None, type=Optional[str]) _state_key: Optional[str] = None
_redacts = attr.ib(default=None, type=Optional[str]) _redacts: Optional[str] = None
_origin_server_ts = attr.ib(default=None, type=Optional[int]) _origin_server_ts: Optional[int] = None
internal_metadata = attr.ib( internal_metadata: _EventInternalMetadata = attr.Factory(
default=attr.Factory(lambda: _EventInternalMetadata({})), lambda: _EventInternalMetadata({})
type=_EventInternalMetadata,
) )
@property @property
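In miniature, the attrs idiom this file moves to (an illustration, not Synapse code):

    import attr

    # With auto_attribs=True, annotated class attributes replace the
    # explicit attr.ib(type=...) declarations used before this change.
    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class Example:
        name: str
        tags: list = attr.Factory(list)

    assert Example("a").tags == []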
@@ -123,7 +125,9 @@ class EventBuilder:
state_ids = await self._state.get_current_state_ids( state_ids = await self._state.get_current_state_ids(
self.room_id, prev_event_ids self.room_id, prev_event_ids
) )
auth_event_ids = self._auth.compute_auth_events(self, state_ids) auth_event_ids = self._event_auth_handler.compute_auth_events(
self, state_ids
)
format_version = self.room_version.event_format format_version = self.room_version.event_format
if format_version == EventFormatVersions.V1: if format_version == EventFormatVersions.V1:
@@ -184,24 +188,23 @@
class EventBuilderFactory: class EventBuilderFactory:
def __init__(self, hs): def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock() self.clock = hs.get_clock()
self.hostname = hs.hostname self.hostname = hs.hostname
self.signing_key = hs.signing_key self.signing_key = hs.signing_key
self.store = hs.get_datastore() self.store = hs.get_datastore()
self.state = hs.get_state_handler() self.state = hs.get_state_handler()
self.auth = hs.get_auth() self._event_auth_handler = hs.get_event_auth_handler()
def new(self, room_version, key_values): def new(self, room_version: str, key_values: dict) -> EventBuilder:
"""Generate an event builder appropriate for the given room version """Generate an event builder appropriate for the given room version
Deprecated: use for_room_version with a RoomVersion object instead Deprecated: use for_room_version with a RoomVersion object instead
Args: Args:
room_version (str): Version of the room that we're creating an event builder room_version: Version of the room that we're creating an event builder for
for key_values: Fields used as the basis of the new event
key_values (dict): Fields used as the basis of the new event
Returns: Returns:
EventBuilder EventBuilder
@@ -212,13 +215,15 @@ class EventBuilderFactory:
raise UnsupportedRoomVersionError() raise UnsupportedRoomVersionError()
return self.for_room_version(v, key_values) return self.for_room_version(v, key_values)
def for_room_version(self, room_version, key_values): def for_room_version(
self, room_version: RoomVersion, key_values: dict
) -> EventBuilder:
"""Generate an event builder appropriate for the given room version """Generate an event builder appropriate for the given room version
Args: Args:
room_version (synapse.api.room_versions.RoomVersion): room_version:
Version of the room that we're creating an event builder for Version of the room that we're creating an event builder for
key_values (dict): Fields used as the basis of the new event key_values: Fields used as the basis of the new event
Returns: Returns:
EventBuilder EventBuilder
@@ -226,7 +231,7 @@ class EventBuilderFactory:
return EventBuilder( return EventBuilder(
store=self.store, store=self.store,
state=self.state, state=self.state,
auth=self.auth, event_auth_handler=self._event_auth_handler,
clock=self.clock, clock=self.clock,
hostname=self.hostname, hostname=self.hostname,
signing_key=self.signing_key, signing_key=self.signing_key,
@@ -286,15 +291,15 @@ def create_local_event_from_event_dict(
_event_id_counter = 0 _event_id_counter = 0
def _create_event_id(clock, hostname): def _create_event_id(clock: Clock, hostname: str) -> str:
"""Create a new event ID """Create a new event ID
Args: Args:
clock (Clock) clock
hostname (str): The server name for the event ID hostname: The server name for the event ID
Returns: Returns:
str The new event ID
""" """
global _event_id_counter global _event_id_counter


@@ -89,12 +89,12 @@ class FederationBase:
result = await self.spam_checker.check_event_for_spam(pdu) result = await self.spam_checker.check_event_for_spam(pdu)
if result: if result:
logger.warning( logger.warning("Event contains spam, soft-failing %s", pdu.event_id)
"Event contains spam, redacting %s: %s", # we redact (to save disk space) as well as soft-failing (to stop
pdu.event_id, # using the event in prev_events).
pdu.get_pdu_json(), redacted_event = prune_event(pdu)
) redacted_event.internal_metadata.soft_failed = True
return prune_event(pdu) return redacted_event
return pdu return pdu


@@ -34,7 +34,7 @@ from twisted.internet import defer
from twisted.internet.abstract import isIPAddress from twisted.internet.abstract import isIPAddress
from twisted.python import failure from twisted.python import failure
from synapse.api.constants import EduTypes, EventTypes from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.errors import ( from synapse.api.errors import (
AuthError, AuthError,
Codes, Codes,
@@ -46,6 +46,7 @@ from synapse.api.errors import (
) )
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.events import EventBase from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.federation.federation_base import FederationBase, event_from_pdu_json from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.persistence import TransactionActions from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction from synapse.federation.units import Edu, Transaction
@@ -107,9 +108,9 @@ class FederationServer(FederationBase):
def __init__(self, hs: "HomeServer"): def __init__(self, hs: "HomeServer"):
super().__init__(hs) super().__init__(hs)
self.auth = hs.get_auth()
self.handler = hs.get_federation_handler() self.handler = hs.get_federation_handler()
self.state = hs.get_state_handler() self.state = hs.get_state_handler()
self._event_auth_handler = hs.get_event_auth_handler()
self.device_handler = hs.get_device_handler() self.device_handler = hs.get_device_handler()
@@ -147,6 +148,41 @@ class FederationServer(FederationBase):
self._room_prejoin_state_types = hs.config.api.room_prejoin_state self._room_prejoin_state_types = hs.config.api.room_prejoin_state
# Whether we have started handling old events in the staging area.
self._started_handling_of_staged_events = False
@wrap_as_background_process("_handle_old_staged_events")
async def _handle_old_staged_events(self) -> None:
"""Handle old staged events by fetching all rooms that have staged
events and starting the processing of each of those rooms.
"""
# Get all the rooms IDs with staged events.
room_ids = await self.store.get_all_rooms_with_staged_incoming_events()
# We then shuffle them so that if there are multiple instances doing
# this work they're less likely to collide.
random.shuffle(room_ids)
for room_id in room_ids:
room_version = await self.store.get_room_version(room_id)
# Try and acquire the processing lock for the room, if we get it start a
# background process for handling the events in the room.
lock = await self.store.try_acquire_lock(
_INBOUND_EVENT_HANDLING_LOCK_NAME, room_id
)
if lock:
logger.info("Handling old staged inbound events in %s", room_id)
self._process_incoming_pdus_in_room_inner(
room_id,
room_version,
lock,
)
# We pause a bit so that we don't start handling all rooms at once.
await self._clock.sleep(random.uniform(0, 0.1))
async def on_backfill_request( async def on_backfill_request(
self, origin: str, room_id: str, versions: List[str], limit: int self, origin: str, room_id: str, versions: List[str], limit: int
) -> Tuple[int, Dict[str, Any]]: ) -> Tuple[int, Dict[str, Any]]:
@@ -165,6 +201,12 @@ class FederationServer(FederationBase):
async def on_incoming_transaction( async def on_incoming_transaction(
self, origin: str, transaction_data: JsonDict self, origin: str, transaction_data: JsonDict
) -> Tuple[int, Dict[str, Any]]: ) -> Tuple[int, Dict[str, Any]]:
# If we receive a transaction we should make sure that kick off handling
# any old events in the staging area.
if not self._started_handling_of_staged_events:
self._started_handling_of_staged_events = True
self._handle_old_staged_events()
# keep this as early as possible to make the calculated origin ts as # keep this as early as possible to make the calculated origin ts as
# accurate as possible. # accurate as possible.
request_time = self._clock.time_msec() request_time = self._clock.time_msec()
@@ -368,7 +410,6 @@ class FederationServer(FederationBase):
async def process_pdu(pdu: EventBase) -> JsonDict: async def process_pdu(pdu: EventBase) -> JsonDict:
event_id = pdu.event_id event_id = pdu.event_id
with pdu_process_time.time():
with nested_logging_context(event_id): with nested_logging_context(event_id):
try: try:
await self._handle_received_pdu(origin, pdu) await self._handle_received_pdu(origin, pdu)
@@ -420,7 +461,7 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin) origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id) await self.check_server_matches_acl(origin_host, room_id)
in_room = await self.auth.check_host_in_room(room_id, origin) in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room: if not in_room:
raise AuthError(403, "Host not in room.") raise AuthError(403, "Host not in room.")
@@ -453,7 +494,7 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin) origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id) await self.check_server_matches_acl(origin_host, room_id)
in_room = await self.auth.check_host_in_room(room_id, origin) in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room: if not in_room:
raise AuthError(403, "Host not in room.") raise AuthError(403, "Host not in room.")
@@ -544,26 +585,21 @@ class FederationServer(FederationBase):
return {"event": ret_pdu.get_pdu_json(time_now)} return {"event": ret_pdu.get_pdu_json(time_now)}
async def on_send_join_request( async def on_send_join_request(
self, origin: str, content: JsonDict self, origin: str, content: JsonDict, room_id: str
) -> Dict[str, Any]: ) -> Dict[str, Any]:
logger.debug("on_send_join_request: content: %s", content) context = await self._on_send_membership_event(
origin, content, Membership.JOIN, room_id
)
assert_params_in_dict(content, ["room_id"]) prev_state_ids = await context.get_prev_state_ids()
room_version = await self.store.get_room_version(content["room_id"]) state_ids = list(prev_state_ids.values())
pdu = event_from_pdu_json(content, room_version) auth_chain = await self.store.get_auth_chain(room_id, state_ids)
state = await self.store.get_events(state_ids)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
res_pdus = await self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
return { return {
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]], "state": [p.get_pdu_json(time_now) for p in state.values()],
"auth_chain": [p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]], "auth_chain": [p.get_pdu_json(time_now) for p in auth_chain],
} }
async def on_make_leave_request( async def on_make_leave_request(
@@ -578,21 +614,11 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version} return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
async def on_send_leave_request(self, origin: str, content: JsonDict) -> dict: async def on_send_leave_request(
self, origin: str, content: JsonDict, room_id: str
) -> dict:
logger.debug("on_send_leave_request: content: %s", content) logger.debug("on_send_leave_request: content: %s", content)
await self._on_send_membership_event(origin, content, Membership.LEAVE, room_id)
assert_params_in_dict(content, ["room_id"])
room_version = await self.store.get_room_version(content["room_id"])
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
await self.handler.on_send_leave_request(origin, pdu)
return {} return {}
async def on_make_knock_request( async def on_make_knock_request(
@@ -658,30 +684,10 @@ class FederationServer(FederationBase):
Returns: Returns:
The stripped room state. The stripped room state.
""" """
logger.debug("on_send_knock_request: content: %s", content) event_context = await self._on_send_membership_event(
origin, content, Membership.KNOCK, room_id
room_version = await self.store.get_room_version(room_id)
# Check that this room supports knocking as defined by its room version
if not room_version.msc2403_knocking:
raise SynapseError(
403,
"This room version does not support knocking",
errcode=Codes.FORBIDDEN,
) )
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_knock_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
# Handle the event, and retrieve the EventContext
event_context = await self.handler.on_send_knock_request(origin, pdu)
# Retrieve stripped state events from the room and send them back to the remote # Retrieve stripped state events from the room and send them back to the remote
# server. This will allow the remote server's clients to display information # server. This will allow the remote server's clients to display information
# related to the room while the knock request is pending. # related to the room while the knock request is pending.
@@ -692,6 +698,63 @@ class FederationServer(FederationBase):
) )
return {"knock_state_events": stripped_room_state} return {"knock_state_events": stripped_room_state}
async def _on_send_membership_event(
self, origin: str, content: JsonDict, membership_type: str, room_id: str
) -> EventContext:
"""Handle an on_send_{join,leave,knock} request
Does some preliminary validation before passing the request on to the
federation handler.
Args:
origin: The (authenticated) requesting server
content: The body of the send_* request - a complete membership event
membership_type: The expected membership type (join or leave, depending
on the endpoint)
room_id: The room_id from the request, to be validated against the room_id
in the event
Returns:
The context of the event after inserting it into the room graph.
Raises:
SynapseError if there is a problem with the request, including things like
the room_id not matching or the event not being authorized.
"""
assert_params_in_dict(content, ["room_id"])
if content["room_id"] != room_id:
raise SynapseError(
400,
"Room ID in body does not match that in request path",
Codes.BAD_JSON,
)
room_version = await self.store.get_room_version(room_id)
if membership_type == Membership.KNOCK and not room_version.msc2403_knocking:
raise SynapseError(
403,
"This room version does not support knocking",
errcode=Codes.FORBIDDEN,
)
event = event_from_pdu_json(content, room_version)
if event.type != EventTypes.Member or not event.is_state():
raise SynapseError(400, "Not an m.room.member event", Codes.BAD_JSON)
if event.content.get("membership") != membership_type:
raise SynapseError(400, "Not a %s event" % membership_type, Codes.BAD_JSON)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, event.room_id)
logger.debug("_on_send_membership_event: pdu sigs: %s", event.signatures)
event = await self._check_sigs_and_hash(room_version, event)
return await self.handler.on_send_membership_event(origin, event)
async def on_event_auth( async def on_event_auth(
self, origin: str, room_id: str, event_id: str self, origin: str, room_id: str, event_id: str
) -> Tuple[int, Dict[str, Any]]: ) -> Tuple[int, Dict[str, Any]]:
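To illustrate the new room_id cross-check (all identifiers invented), a send_join whose body names a different room is now rejected before any signature or auth work is done:

    # PUT /_matrix/federation/v1/send_join/!room:a.example/$event
    # {"room_id": "!other:a.example", "type": "m.room.member", ...}
    # -> 400 {"errcode": "M_BAD_JSON",
    #         "error": "Room ID in body does not match that in request path"}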
@@ -860,25 +923,28 @@ class FederationServer(FederationBase):
room_id: str, room_id: str,
room_version: RoomVersion, room_version: RoomVersion,
lock: Lock, lock: Lock,
latest_origin: str, latest_origin: Optional[str] = None,
latest_event: EventBase, latest_event: Optional[EventBase] = None,
) -> None: ) -> None:
"""Process events in the staging area for the given room. """Process events in the staging area for the given room.
The latest_origin and latest_event args are the latest origin and event The latest_origin and latest_event args are the latest origin and event
received. received (or None to simply pull the next event from the database).
""" """
# The common path is for the event we just received be the only event in # The common path is for the event we just received be the only event in
# the room, so instead of pulling the event out of the DB and parsing # the room, so instead of pulling the event out of the DB and parsing
# the event we just pull out the next event ID and check if that matches. # the event we just pull out the next event ID and check if that matches.
next_origin, next_event_id = await self.store.get_next_staged_event_id_for_room( if latest_event is not None and latest_origin is not None:
room_id (
) next_origin,
if next_origin == latest_origin and next_event_id == latest_event.event_id: next_event_id,
origin = latest_origin ) = await self.store.get_next_staged_event_id_for_room(room_id)
event = latest_event if next_origin != latest_origin or next_event_id != latest_event.event_id:
else: latest_origin = None
latest_event = None
if latest_origin is None or latest_event is None:
next = await self.store.get_next_staged_event_for_room( next = await self.store.get_next_staged_event_for_room(
room_id, room_version room_id, room_version
) )
@@ -886,6 +952,9 @@ class FederationServer(FederationBase):
return return
origin, event = next origin, event = next
else:
origin = latest_origin
event = latest_event
# We loop round until there are no more events in the room in the # We loop round until there are no more events in the room in the
# staging area, or we fail to get the lock (which means another process # staging area, or we fail to get the lock (which means another process
@@ -909,9 +978,13 @@ class FederationServer(FederationBase):
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
) )
await self.store.remove_received_event_from_staging( received_ts = await self.store.remove_received_event_from_staging(
origin, event.event_id origin, event.event_id
) )
if received_ts is not None:
pdu_process_time.observe(
(self._clock.time_msec() - received_ts) / 1000
)
# We need to do this check outside the lock to avoid a race between # We need to do this check outside the lock to avoid a race between
# a new event being inserted by another instance and it attempting # a new event being inserted by another instance and it attempting

File diff suppressed because it is too large

@@ -62,9 +62,16 @@ class AdminHandler(BaseHandler):
if ret: if ret:
profile = await self.store.get_profileinfo(user.localpart) profile = await self.store.get_profileinfo(user.localpart)
threepids = await self.store.user_get_threepids(user.to_string()) threepids = await self.store.user_get_threepids(user.to_string())
external_ids = [
({"auth_provider": auth_provider, "external_id": external_id})
for auth_provider, external_id in await self.store.get_external_ids_by_user(
user.to_string()
)
]
ret["displayname"] = profile.display_name ret["displayname"] = profile.display_name
ret["avatar_url"] = profile.avatar_url ret["avatar_url"] = profile.avatar_url
ret["threepids"] = threepids ret["threepids"] = threepids
ret["external_ids"] = external_ids
return ret return ret
async def export_user_data(self, user_id: str, writer: "ExfiltrationWriter") -> Any: async def export_user_data(self, user_id: str, writer: "ExfiltrationWriter") -> Any:
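The admin user query response therefore gains an `external_ids` field; a hypothetical value (provider and ID invented):

    # "external_ids": [
    #     {"auth_provider": "oidc", "external_id": "alice@idp.example"}
    # ]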


@@ -30,6 +30,7 @@ from typing import (
Optional, Optional,
Tuple, Tuple,
Union, Union,
cast,
) )
import attr import attr
@@ -72,6 +73,7 @@ from synapse.util.stringutils import base62_encode
from synapse.util.threepids import canonicalise_email from synapse.util.threepids import canonicalise_email
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.rest.client.v1.login import LoginResponse
from synapse.server import HomeServer from synapse.server import HomeServer
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -777,6 +779,108 @@ class AuthHandler(BaseHandler):
"params": params, "params": params,
} }
async def refresh_token(
self,
refresh_token: str,
valid_until_ms: Optional[int],
) -> Tuple[str, str]:
"""
Consumes a refresh token and generates both a new access token and a new refresh token from it.
The consumed refresh token is considered invalid after the first use of the new access token or the new refresh token.
Args:
refresh_token: The token to consume.
valid_until_ms: The expiration timestamp of the new access token.
Returns:
A tuple containing the new access token and refresh token
"""
# Verify the token signature first before looking up the token
if not self._verify_refresh_token(refresh_token):
raise SynapseError(401, "invalid refresh token", Codes.UNKNOWN_TOKEN)
existing_token = await self.store.lookup_refresh_token(refresh_token)
if existing_token is None:
raise SynapseError(401, "refresh token does not exist", Codes.UNKNOWN_TOKEN)
if (
existing_token.has_next_access_token_been_used
or existing_token.has_next_refresh_token_been_refreshed
):
raise SynapseError(
403, "refresh token isn't valid anymore", Codes.FORBIDDEN
)
(
new_refresh_token,
new_refresh_token_id,
) = await self.get_refresh_token_for_user_id(
user_id=existing_token.user_id, device_id=existing_token.device_id
)
access_token = await self.get_access_token_for_user_id(
user_id=existing_token.user_id,
device_id=existing_token.device_id,
valid_until_ms=valid_until_ms,
refresh_token_id=new_refresh_token_id,
)
await self.store.replace_refresh_token(
existing_token.token_id, new_refresh_token_id
)
return access_token, new_refresh_token
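Combined with the `has_next_*` checks above, each refresh token is effectively single-use. A hypothetical MSC2918 exchange (endpoint and field names follow the MSC, values invented):

    # POST /_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh
    # {"refresh_token": "syr_..."}
    # -> 200 {"access_token": "syt_...",
    #         "refresh_token": "syr_...",
    #         "expires_in_ms": 300000}
    #
    # Replaying the old refresh token after the new pair has been used
    # fails with a 403, per the checks above.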
def _verify_refresh_token(self, token: str) -> bool:
"""
Verifies the shape of a refresh token.
Args:
token: The refresh token to verify
Returns:
Whether the token has the right shape
"""
parts = token.split("_", maxsplit=4)
if len(parts) != 4:
return False
type, localpart, rand, crc = parts
# Refresh tokens are prefixed by "syr_", let's check that
if type != "syr":
return False
# Check the CRC
base = f"{type}_{localpart}_{rand}"
expected_crc = base62_encode(crc32(base.encode("ascii")), minwidth=6)
if crc != expected_crc:
return False
return True
async def get_refresh_token_for_user_id(
self,
user_id: str,
device_id: str,
) -> Tuple[str, int]:
"""
Creates a new refresh token for the user with the given user ID.
Args:
user_id: canonical user ID
device_id: the device ID to associate with the token.
Returns:
The newly created refresh token and its ID in the database
"""
refresh_token = self.generate_refresh_token(UserID.from_string(user_id))
refresh_token_id = await self.store.add_refresh_token_to_user(
user_id=user_id,
token=refresh_token,
device_id=device_id,
)
return refresh_token, refresh_token_id
async def get_access_token_for_user_id( async def get_access_token_for_user_id(
self, self,
user_id: str, user_id: str,
@@ -784,6 +888,7 @@ class AuthHandler(BaseHandler):
valid_until_ms: Optional[int], valid_until_ms: Optional[int],
puppets_user_id: Optional[str] = None, puppets_user_id: Optional[str] = None,
is_appservice_ghost: bool = False, is_appservice_ghost: bool = False,
refresh_token_id: Optional[int] = None,
) -> str: ) -> str:
""" """
Creates a new access token for the user with the given user ID. Creates a new access token for the user with the given user ID.
@@ -801,6 +906,8 @@ class AuthHandler(BaseHandler):
valid_until_ms: when the token is valid until. None for valid_until_ms: when the token is valid until. None for
no expiry. no expiry.
is_appservice_ghost: Whether the user is an application ghost user is_appservice_ghost: Whether the user is an application ghost user
refresh_token_id: the refresh token ID that will be associated with
this access token.
Returns: Returns:
The access token for the user's session. The access token for the user's session.
Raises: Raises:
@@ -836,6 +943,7 @@ class AuthHandler(BaseHandler):
device_id=device_id, device_id=device_id,
valid_until_ms=valid_until_ms, valid_until_ms=valid_until_ms,
puppets_user_id=puppets_user_id, puppets_user_id=puppets_user_id,
refresh_token_id=refresh_token_id,
) )
# the device *should* have been registered before we got here; however, # the device *should* have been registered before we got here; however,
@@ -928,7 +1036,7 @@ class AuthHandler(BaseHandler):
self, self,
login_submission: Dict[str, Any], login_submission: Dict[str, Any],
ratelimit: bool = False, ratelimit: bool = False,
) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]: ) -> Tuple[str, Optional[Callable[["LoginResponse"], Awaitable[None]]]]:
"""Authenticates the user for the /login API """Authenticates the user for the /login API
Also used by the user-interactive auth flow to validate auth types which don't Also used by the user-interactive auth flow to validate auth types which don't
@@ -1073,7 +1181,7 @@ class AuthHandler(BaseHandler):
self, self,
username: str, username: str,
login_submission: Dict[str, Any], login_submission: Dict[str, Any],
) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]: ) -> Tuple[str, Optional[Callable[["LoginResponse"], Awaitable[None]]]]:
"""Helper for validate_login """Helper for validate_login
Handles login, once we've mapped 3pids onto userids Handles login, once we've mapped 3pids onto userids
@@ -1151,7 +1259,7 @@ class AuthHandler(BaseHandler):
async def check_password_provider_3pid( async def check_password_provider_3pid(
self, medium: str, address: str, password: str self, medium: str, address: str, password: str
) -> Tuple[Optional[str], Optional[Callable[[Dict[str, str]], Awaitable[None]]]]: ) -> Tuple[Optional[str], Optional[Callable[["LoginResponse"], Awaitable[None]]]]:
"""Check if a password provider is able to validate a thirdparty login """Check if a password provider is able to validate a thirdparty login
Args: Args:
@@ -1215,6 +1323,19 @@ class AuthHandler(BaseHandler):
crc = base62_encode(crc32(base.encode("ascii")), minwidth=6) crc = base62_encode(crc32(base.encode("ascii")), minwidth=6)
return f"{base}_{crc}" return f"{base}_{crc}"
def generate_refresh_token(self, for_user: UserID) -> str:
"""Generates an opaque string, for use as a refresh token"""
# we use the following format for refresh tokens:
# syr_<base64 local part>_<random string>_<base62 crc check>
b64local = unpaddedbase64.encode_base64(for_user.localpart.encode("utf-8"))
random_string = stringutils.random_string(20)
base = f"syr_{b64local}_{random_string}"
crc = base62_encode(crc32(base.encode("ascii")), minwidth=6)
return f"{base}_{crc}"
async def validate_short_term_login_token( async def validate_short_term_login_token(
self, login_token: str self, login_token: str
) -> LoginTokenAttributes: ) -> LoginTokenAttributes:
@@ -1563,7 +1684,7 @@ class AuthHandler(BaseHandler):
) )
respond_with_html(request, 200, html) respond_with_html(request, 200, html)
async def _sso_login_callback(self, login_result: JsonDict) -> None: async def _sso_login_callback(self, login_result: "LoginResponse") -> None:
""" """
A login callback which might add additional attributes to the login response. A login callback which might add additional attributes to the login response.
@@ -1577,7 +1698,8 @@ class AuthHandler(BaseHandler):
extra_attributes = self._extra_attributes.get(login_result["user_id"]) extra_attributes = self._extra_attributes.get(login_result["user_id"])
if extra_attributes: if extra_attributes:
login_result.update(extra_attributes.extra_attributes) login_result_dict = cast(Dict[str, Any], login_result)
login_result_dict.update(extra_attributes.extra_attributes)
def _expire_sso_extra_attributes(self) -> None: def _expire_sso_extra_attributes(self) -> None:
""" """


@@ -11,8 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
from typing import TYPE_CHECKING, Collection, Optional from typing import TYPE_CHECKING, Collection, List, Optional, Union
from synapse import event_auth
from synapse.api.constants import ( from synapse.api.constants import (
EventTypes, EventTypes,
JoinRules, JoinRules,
@@ -20,9 +21,11 @@ from synapse.api.constants import (
RestrictedJoinRuleTypes, RestrictedJoinRuleTypes,
) )
from synapse.api.errors import AuthError from synapse.api.errors import AuthError
from synapse.api.room_versions import RoomVersion from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.events import EventBase from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.types import StateMap from synapse.types import StateMap
from synapse.util.metrics import Measure
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
@@ -34,8 +37,63 @@ class EventAuthHandler:
""" """
def __init__(self, hs: "HomeServer"): def __init__(self, hs: "HomeServer"):
self._clock = hs.get_clock()
self._store = hs.get_datastore() self._store = hs.get_datastore()
async def check_from_context(
self, room_version: str, event, context, do_sig_check=True
) -> None:
auth_event_ids = event.auth_event_ids()
auth_events_by_id = await self._store.get_events(auth_event_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_by_id.values()}
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
event_auth.check(
room_version_obj, event, auth_events=auth_events, do_sig_check=do_sig_check
)
def compute_auth_events(
self,
event: Union[EventBase, EventBuilder],
current_state_ids: StateMap[str],
for_verification: bool = False,
) -> List[str]:
"""Given an event and current state return the list of event IDs used
to auth an event.
If `for_verification` is False then only return auth events that
should be added to the event's `auth_events`.
Returns:
List of event IDs.
"""
if event.type == EventTypes.Create:
return []
# Currently we ignore the `for_verification` flag even though there are
# some situations where we can drop particular auth events when adding
# to the event's `auth_events` (e.g. joins pointing to previous joins
# when room is publicly joinable). Dropping event IDs has the
# advantage that the auth chain for the room grows slower, but we use
# the auth chain in state resolution v2 to order events, which means
# care must be taken if dropping events to ensure that it doesn't
# introduce undesirable "state reset" behaviour.
#
# All of which sounds a bit tricky so we don't bother for now.
auth_ids = []
for etype, state_key in event_auth.auth_types_for_event(event):
auth_ev_id = current_state_ids.get((etype, state_key))
if auth_ev_id:
auth_ids.append(auth_ev_id)
return auth_ids
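For instance (event IDs invented), for an m.room.member event the auth events fall straight out of the current state map:

    # auth_types_for_event yields keys like those below for a member
    # event; compute_auth_events keeps whichever are present in the
    # current state.
    current_state_ids = {
        ("m.room.create", ""): "$create",
        ("m.room.power_levels", ""): "$power",
        ("m.room.join_rules", ""): "$join_rules",
        ("m.room.member", "@alice:example.org"): "$alice",
    }
    # -> ["$create", "$power", "$join_rules", "$alice"]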
async def check_host_in_room(self, room_id: str, host: str) -> bool:
with Measure(self._clock, "check_host_in_room"):
return await self._store.is_host_joined(room_id, host)
async def check_restricted_join_rules( async def check_restricted_join_rules(
self, self,
state_ids: StateMap[str], state_ids: StateMap[str],


@@ -250,7 +250,9 @@ class FederationHandler(BaseHandler):
# #
# Note that if we were never in the room then we would have already # Note that if we were never in the room then we would have already
# dropped the event, since we wouldn't know the room version. # dropped the event, since we wouldn't know the room version.
is_in_room = await self.auth.check_host_in_room(room_id, self.server_name) is_in_room = await self._event_auth_handler.check_host_in_room(
room_id, self.server_name
)
if not is_in_room: if not is_in_room:
logger.info( logger.info(
"Ignoring PDU from %s as we're not in the room", "Ignoring PDU from %s as we're not in the room",
@@ -1674,7 +1676,9 @@ class FederationHandler(BaseHandler):
room_version = await self.store.get_room_version_id(room_id) room_version = await self.store.get_room_version_id(room_id)
# now check that we are *still* in the room # now check that we are *still* in the room
is_in_room = await self.auth.check_host_in_room(room_id, self.server_name) is_in_room = await self._event_auth_handler.check_host_in_room(
room_id, self.server_name
)
if not is_in_room: if not is_in_room:
logger.info( logger.info(
"Got /make_join request for room %s we are no longer in", "Got /make_join request for room %s we are no longer in",
@@ -1705,86 +1709,12 @@ class FederationHandler(BaseHandler):
# The remote hasn't signed it yet, obviously. We'll do the full checks # The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_join_request` # when we get the event back in `on_send_join_request`
await self.auth.check_from_context( await self._event_auth_handler.check_from_context(
room_version, event, context, do_sig_check=False room_version, event, context, do_sig_check=False
) )
return event return event
async def on_send_join_request(self, origin: str, pdu: EventBase) -> JsonDict:
"""We have received a join event for a room. Fully process it and
respond with the current state and auth chains.
"""
event = pdu
logger.debug(
"on_send_join_request from %s: Got event: %s, signatures: %s",
origin,
event.event_id,
event.signatures,
)
if get_domain_from_id(event.sender) != origin:
logger.info(
"Got /send_join request for user %r from different origin %s",
event.sender,
origin,
)
raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
event.internal_metadata.outlier = False
# Send this event on behalf of the origin server.
#
# The reasons we have the destination server rather than the origin
# server send it are slightly mysterious: the origin server should have
# all the necessary state once it gets the response to the send_join,
# so it could send the event itself if it wanted to. It may be that
# doing it this way reduces failure modes, or avoids certain attacks
# where a new server selectively tells a subset of the federation that
# it has joined.
#
# The fact is that, as of the current writing, Synapse doesn't send out
# the join event over federation after joining, and changing it now
# would introduce the danger of backwards-compatibility problems.
event.internal_metadata.send_on_behalf_of = origin
# Calculate the event context.
context = await self.state_handler.compute_event_context(event)
# Get the state before the new event.
prev_state_ids = await context.get_prev_state_ids()
# Check if the user is already in the room or invited to the room.
user_id = event.state_key
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
prev_member_event = None
if prev_member_event_id:
prev_member_event = await self.store.get_event(prev_member_event_id)
# Check if the member should be allowed access via membership in a space.
await self._event_auth_handler.check_restricted_join_rules(
prev_state_ids,
event.room_version,
user_id,
prev_member_event,
)
# Persist the event.
await self._auth_and_persist_event(origin, event, context)
logger.debug(
"on_send_join_request: After _auth_and_persist_event: %s, sigs: %s",
event.event_id,
event.signatures,
)
state_ids = list(prev_state_ids.values())
auth_chain = await self.store.get_auth_chain(event.room_id, state_ids)
state = await self.store.get_events(list(prev_state_ids.values()))
return {"state": list(state.values()), "auth_chain": auth_chain}
async def on_invite_request( async def on_invite_request(
self, origin: str, event: EventBase, room_version: RoomVersion self, origin: str, event: EventBase, room_version: RoomVersion
) -> EventBase: ) -> EventBase:
@ -1951,7 +1881,7 @@ class FederationHandler(BaseHandler):
try: try:
# The remote hasn't signed it yet, obviously. We'll do the full checks # The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_leave_request` # when we get the event back in `on_send_leave_request`
await self.auth.check_from_context( await self._event_auth_handler.check_from_context(
room_version, event, context, do_sig_check=False room_version, event, context, do_sig_check=False
) )
except AuthError as e: except AuthError as e:
@ -1960,37 +1890,6 @@ class FederationHandler(BaseHandler):
return event return event
async def on_send_leave_request(self, origin: str, pdu: EventBase) -> None:
"""We have received a leave event for a room. Fully process it."""
event = pdu
logger.debug(
"on_send_leave_request: Got event: %s, signatures: %s",
event.event_id,
event.signatures,
)
if get_domain_from_id(event.sender) != origin:
logger.info(
"Got /send_leave request for user %r from different origin %s",
event.sender,
origin,
)
raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
event.internal_metadata.outlier = False
context = await self.state_handler.compute_event_context(event)
await self._auth_and_persist_event(origin, event, context)
logger.debug(
"on_send_leave_request: After _auth_and_persist_event: %s, sigs: %s",
event.event_id,
event.signatures,
)
return None
@log_function @log_function
async def on_make_knock_request( async def on_make_knock_request(
self, origin: str, room_id: str, user_id: str self, origin: str, room_id: str, user_id: str
@ -2044,7 +1943,7 @@ class FederationHandler(BaseHandler):
try: try:
# The remote hasn't signed it yet, obviously. We'll do the full checks # The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_knock_request` # when we get the event back in `on_send_knock_request`
await self.auth.check_from_context( await self._event_auth_handler.check_from_context(
room_version, event, context, do_sig_check=False room_version, event, context, do_sig_check=False
) )
except AuthError as e: except AuthError as e:
@ -2054,38 +1953,76 @@ class FederationHandler(BaseHandler):
return event return event
@log_function @log_function
async def on_send_knock_request( async def on_send_membership_event(
self, origin: str, event: EventBase self, origin: str, event: EventBase
) -> EventContext: ) -> EventContext:
""" """
We have received a knock event for a room. Verify that event and send it into the room We have received a join/leave/knock event for a room via send_join/leave/knock.
on the knocking homeserver's behalf.
Verify that event and send it into the room on the remote homeserver's behalf.
This is quite similar to on_receive_pdu, with the following principal
differences:
* only membership events are permitted (and only events with
sender==state_key -- ie, no kicks or bans)
* *We* send out the event on behalf of the remote server.
* We enforce the membership restrictions of restricted rooms.
* Rejected events result in an exception rather than being stored.
There are also other differences, however it is not clear if these are by
design or omission. In particular, we do not attempt to backfill any missing
prev_events.
Args: Args:
origin: The remote homeserver of the knocking user. origin: The homeserver of the remote (joining/invited/knocking) user.
event: The knocking member event that has been signed by the remote homeserver. event: The member event that has been signed by the remote homeserver.
Returns: Returns:
The context of the event after inserting it into the room graph. The context of the event after inserting it into the room graph.
Raises:
SynapseError if the event is not accepted into the room
""" """
logger.debug( logger.debug(
"on_send_knock_request: Got event: %s, signatures: %s", "on_send_membership_event: Got event: %s, signatures: %s",
event.event_id, event.event_id,
event.signatures, event.signatures,
) )
if get_domain_from_id(event.sender) != origin: if get_domain_from_id(event.sender) != origin:
logger.info( logger.info(
"Got /send_knock request for user %r from different origin %s", "Got send_membership request for user %r from different origin %s",
event.sender, event.sender,
origin, origin,
) )
raise SynapseError(403, "User not from origin", Codes.FORBIDDEN) raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
event.internal_metadata.outlier = False if event.sender != event.state_key:
raise SynapseError(400, "state_key and sender must match", Codes.BAD_JSON)
assert not event.internal_metadata.outlier
# Send this event on behalf of the other server.
#
# The remote server isn't a full participant in the room at this point, so
# may not have an up-to-date list of the other homeservers participating in
# the room, so we send it on their behalf.
event.internal_metadata.send_on_behalf_of = origin
context = await self.state_handler.compute_event_context(event) context = await self.state_handler.compute_event_context(event)
context = await self._check_event_auth(origin, event, context)
if context.rejected:
raise SynapseError(
403, f"{event.membership} event was rejected", Codes.FORBIDDEN
)
# for joins, we need to check the restrictions of restricted rooms
if event.membership == Membership.JOIN:
await self._check_join_restrictions(context, event)
# for knock events, we run the third-party event rules. It's not entirely clear
# why we don't do this for other sorts of membership events.
if event.membership == Membership.KNOCK:
event_allowed = await self.third_party_event_rules.check_event_allowed( event_allowed = await self.third_party_event_rules.check_event_allowed(
event, context event, context
) )
@ -2095,10 +2032,36 @@ class FederationHandler(BaseHandler):
403, "This event is not allowed in this context", Codes.FORBIDDEN 403, "This event is not allowed in this context", Codes.FORBIDDEN
) )
await self._auth_and_persist_event(origin, event, context) # all looks good, we can persist the event.
await self._run_push_actions_and_persist_event(event, context)
return context return context
async def _check_join_restrictions(
self, context: EventContext, event: EventBase
) -> None:
"""Check that restrictions in restricted join rules are matched
Called when we receive a join event via send_join.
Raises an auth error if the restrictions are not matched.
"""
prev_state_ids = await context.get_prev_state_ids()
# Check if the user is already in the room or invited to the room.
user_id = event.state_key
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
prev_member_event = None
if prev_member_event_id:
prev_member_event = await self.store.get_event(prev_member_event_id)
# Check if the member should be allowed access via membership in a space.
await self._event_auth_handler.check_restricted_join_rules(
prev_state_ids,
event.room_version,
user_id,
prev_member_event,
)
async def get_state_for_pdu(self, room_id: str, event_id: str) -> List[EventBase]: async def get_state_for_pdu(self, room_id: str, event_id: str) -> List[EventBase]:
"""Returns the state at the event. i.e. not including said event.""" """Returns the state at the event. i.e. not including said event."""
@@ -2152,7 +2115,7 @@ class FederationHandler(BaseHandler):
     async def on_backfill_request(
         self, origin: str, room_id: str, pdu_list: List[str], limit: int
     ) -> List[EventBase]:
-        in_room = await self.auth.check_host_in_room(room_id, origin)
+        in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
         if not in_room:
             raise AuthError(403, "Host not in room.")
@@ -2187,7 +2150,9 @@ class FederationHandler(BaseHandler):
         )

         if event:
-            in_room = await self.auth.check_host_in_room(event.room_id, origin)
+            in_room = await self._event_auth_handler.check_host_in_room(
+                event.room_id, origin
+            )
             if not in_room:
                 raise AuthError(403, "Host not in room.")
@@ -2240,6 +2205,18 @@ class FederationHandler(BaseHandler):
             backfilled=backfilled,
         )

+        await self._run_push_actions_and_persist_event(event, context, backfilled)
+
+    async def _run_push_actions_and_persist_event(
+        self, event: EventBase, context: EventContext, backfilled: bool = False
+    ):
+        """Run the push actions for a received event, and persist it.
+
+        Args:
+            event: The event itself.
+            context: The event context.
+            backfilled: True if the event was backfilled.
+        """
         try:
             if (
                 not event.internal_metadata.is_outlier()
@@ -2528,7 +2505,7 @@ class FederationHandler(BaseHandler):
         latest_events: List[str],
         limit: int,
     ) -> List[EventBase]:
-        in_room = await self.auth.check_host_in_room(room_id, origin)
+        in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
         if not in_room:
             raise AuthError(403, "Host not in room.")
@@ -2553,9 +2530,9 @@ class FederationHandler(BaseHandler):
         origin: str,
         event: EventBase,
         context: EventContext,
-        state: Optional[Iterable[EventBase]],
-        auth_events: Optional[MutableStateMap[EventBase]],
-        backfilled: bool,
+        state: Optional[Iterable[EventBase]] = None,
+        auth_events: Optional[MutableStateMap[EventBase]] = None,
+        backfilled: bool = False,
     ) -> EventContext:
         """
         Checks whether an event should be rejected (for failing auth checks).
@@ -2591,7 +2568,7 @@ class FederationHandler(BaseHandler):
         if not auth_events:
             prev_state_ids = await context.get_prev_state_ids()
-            auth_events_ids = self.auth.compute_auth_events(
+            auth_events_ids = self._event_auth_handler.compute_auth_events(
                 event, prev_state_ids, for_verification=True
             )
             auth_events_x = await self.store.get_events(auth_events_ids)
@@ -3020,7 +2997,7 @@ class FederationHandler(BaseHandler):
             "state_key": target_user_id,
         }

-        if await self.auth.check_host_in_room(room_id, self.hs.hostname):
+        if await self._event_auth_handler.check_host_in_room(room_id, self.hs.hostname):
             room_version = await self.store.get_room_version_id(room_id)
             builder = self.event_builder_factory.new(room_version, event_dict)
@@ -3040,7 +3017,9 @@ class FederationHandler(BaseHandler):
             event.internal_metadata.send_on_behalf_of = self.hs.hostname

             try:
-                await self.auth.check_from_context(room_version, event, context)
+                await self._event_auth_handler.check_from_context(
+                    room_version, event, context
+                )
             except AuthError as e:
                 logger.warning("Denying new third party invite %r because %s", event, e)
                 raise e
@@ -3083,7 +3062,9 @@ class FederationHandler(BaseHandler):
         )

         try:
-            await self.auth.check_from_context(room_version, event, context)
+            await self._event_auth_handler.check_from_context(
+                room_version, event, context
+            )
         except AuthError as e:
             logger.warning("Denying third party invite %r because %s", event, e)
             raise e
@@ -3171,7 +3152,7 @@ class FederationHandler(BaseHandler):
         last_exception = None  # type: Optional[Exception]

         # for each public key in the 3pid invite event
-        for public_key_object in self.hs.get_auth().get_public_keys(invite_event):
+        for public_key_object in event_auth.get_public_keys(invite_event):
             try:
                 # for each sig on the third_party_invite block of the actual invite
                 for server, signature_block in signed["signatures"].items():

View file

@@ -385,6 +385,7 @@ class EventCreationHandler:
     def __init__(self, hs: "HomeServer"):
         self.hs = hs
         self.auth = hs.get_auth()
+        self._event_auth_handler = hs.get_event_auth_handler()
         self.store = hs.get_datastore()
         self.storage = hs.get_storage()
         self.state = hs.get_state_handler()
@@ -511,6 +512,8 @@ class EventCreationHandler:
                 Should normally be left as None, which will cause them to be calculated
                 based on the room state at the prev_events.

+                If non-None, prev_event_ids must also be provided.
+
             require_consent: Whether to check if the requester has
                 consented to the privacy policy.
@@ -583,6 +586,9 @@ class EventCreationHandler:
         # Strip down the auth_event_ids to only what we need to auth the event.
         # For example, we don't need extra m.room.member that don't match event.sender
         if auth_event_ids is not None:
+            # If auth events are provided, prev events must be also.
+            assert prev_event_ids is not None
+
             temp_event = await builder.build(
                 prev_event_ids=prev_event_ids,
                 auth_event_ids=auth_event_ids,
@@ -594,7 +600,7 @@ class EventCreationHandler:
                 (e.type, e.state_key): e.event_id for e in auth_events
             }
             # Actually strip down and use the necessary auth events
-            auth_event_ids = self.auth.compute_auth_events(
+            auth_event_ids = self._event_auth_handler.compute_auth_events(
                 event=temp_event,
                 current_state_ids=auth_event_state_map,
                 for_verification=False,
@@ -786,6 +792,8 @@ class EventCreationHandler:
                 The event ids to use as the auth_events for the new event.
                 Should normally be left as None, which will cause them to be calculated
                 based on the room state at the prev_events.
+
+                If non-None, prev_event_ids must also be provided.
             ratelimit: Whether to rate limit this send.
             txn_id: The transaction ID.
             ignore_shadow_ban: True if shadow-banned users should be allowed to
@@ -1051,7 +1059,9 @@ class EventCreationHandler:
             assert event.content["membership"] == Membership.LEAVE
         else:
             try:
-                await self.auth.check_from_context(room_version, event, context)
+                await self._event_auth_handler.check_from_context(
+                    room_version, event, context
+                )
             except AuthError as err:
                 logger.warning("Denying new event %r because %s", event, err)
                 raise err
@@ -1377,7 +1387,7 @@ class EventCreationHandler:
             raise AuthError(403, "Redacting server ACL events is not permitted")

         prev_state_ids = await context.get_prev_state_ids()
-        auth_events_ids = self.auth.compute_auth_events(
+        auth_events_ids = self._event_auth_handler.compute_auth_events(
             event, prev_state_ids, for_verification=True
         )
         auth_events_map = await self.store.get_events(auth_events_ids)

View file

@@ -15,9 +15,10 @@

 """Contains functions for registering clients."""
 import logging
-from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple
+from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple

 from prometheus_client import Counter
+from typing_extensions import TypedDict

 from synapse import types
 from synapse.api.constants import MAX_USERID_LENGTH, EventTypes, JoinRules, LoginType
@@ -54,6 +55,16 @@ login_counter = Counter(
     ["guest", "auth_provider"],
 )

+LoginDict = TypedDict(
+    "LoginDict",
+    {
+        "device_id": str,
+        "access_token": str,
+        "valid_until_ms": Optional[int],
+        "refresh_token": Optional[str],
+    },
+)
+
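For readers unfamiliar with the functional TypedDict form used above, a minimal self-contained sketch of the same pattern (toy names, not Synapse code):

    from typing import Optional
    from typing_extensions import TypedDict

    # Like LoginDict above: a plain dict whose keys and value types are
    # checked statically.
    TokenBundle = TypedDict(
        "TokenBundle",
        {"access_token": str, "refresh_token": Optional[str]},
    )

    bundle: TokenBundle = {"access_token": "abc", "refresh_token": None}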

 class RegistrationHandler(BaseHandler):
     def __init__(self, hs: "HomeServer"):
@@ -85,6 +96,7 @@ class RegistrationHandler(BaseHandler):
         self.pusher_pool = hs.get_pusherpool()

         self.session_lifetime = hs.config.session_lifetime
+        self.access_token_lifetime = hs.config.access_token_lifetime

     async def check_username(
         self,
@@ -393,11 +405,32 @@ class RegistrationHandler(BaseHandler):
                 room_alias = RoomAlias.from_string(r)

                 if self.hs.hostname != room_alias.domain:
-                    logger.warning(
-                        "Cannot create room alias %s, "
-                        "it does not match server domain",
+                    # If the alias is remote, try to join the room. This might fail
+                    # because the room might be invite only, but we don't have any local
+                    # user in the room to invite this one with, so at this point that's
+                    # the best we can do.
+                    logger.info(
+                        "Cannot automatically create room with alias %s as it isn't"
+                        " local, trying to join the room instead",
                         r,
                     )
+
+                    (
+                        room,
+                        remote_room_hosts,
+                    ) = await room_member_handler.lookup_room_alias(room_alias)
+                    room_id = room.to_string()
+
+                    await room_member_handler.update_membership(
+                        requester=create_requester(
+                            user_id, authenticated_entity=self._server_name
+                        ),
+                        target=UserID.from_string(user_id),
+                        room_id=room_id,
+                        remote_room_hosts=remote_room_hosts,
+                        action="join",
+                        ratelimit=False,
+                    )
                 else:
                     # A shallow copy is OK here since the only key that is
                     # modified is room_alias_name.
@@ -455,10 +488,18 @@ class RegistrationHandler(BaseHandler):
                 )

                 # Calculate whether the room requires an invite or can be
-                # joined directly. Note that unless a join rule of public exists,
-                # it is treated as requiring an invite.
-                requires_invite = True
-
+                # joined directly. By default, we consider the room as requiring an
+                # invite if the homeserver is in the room (unless told otherwise by the
+                # join rules). Otherwise we consider it as being joinable, at the risk of
+                # failing to join, but in this case there's little more we can do since
+                # we don't have a local user in the room to craft up an invite with.
+                requires_invite = await self.store.is_host_joined(
+                    room_id,
+                    self.server_name,
+                )
+
+                if requires_invite:
+                    # If the server is in the room, check if the room is public.
                     state = await self.store.get_filtered_current_state_ids(
                         room_id, StateFilter.from_types([(EventTypes.JoinRules, "")])
                     )
@@ -470,7 +511,9 @@ class RegistrationHandler(BaseHandler):
                     )

                     if join_rules_event:
                         join_rule = join_rules_event.content.get("join_rule", None)
-                        requires_invite = join_rule and join_rule != JoinRules.PUBLIC
+                        requires_invite = (
+                            join_rule and join_rule != JoinRules.PUBLIC
+                        )

                 # Send the invite, if necessary.
                 if requires_invite:
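A condensed sketch of the decision logic above (toy function; the real code consults is_host_joined and the m.room.join_rules state event): the join rules are only authoritative when the homeserver is actually in the room, otherwise a direct join is attempted because there is no local user to send an invite from.

    from typing import Optional

    def requires_invite(host_in_room: bool, join_rule: Optional[str]) -> bool:
        if not host_in_room:
            # No local user to craft an invite with; try a direct join.
            return False
        # In the room: anything other than a public join rule needs an invite.
        return bool(join_rule) and join_rule != "public"

    assert requires_invite(False, "invite") is False
    assert requires_invite(True, "public") is False
    assert requires_invite(True, "invite") is True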
@@ -672,7 +715,8 @@ class RegistrationHandler(BaseHandler):
         is_guest: bool = False,
         is_appservice_ghost: bool = False,
         auth_provider_id: Optional[str] = None,
-    ) -> Tuple[str, str]:
+        should_issue_refresh_token: bool = False,
+    ) -> Tuple[str, str, Optional[int], Optional[str]]:
         """Register a device for a user and generate an access token.

         The access token will be limited by the homeserver's session_lifetime config.
@@ -684,8 +728,9 @@ class RegistrationHandler(BaseHandler):
             is_guest: Whether this is a guest account
             auth_provider_id: The SSO IdP the user used, if any (just used for the
                 prometheus metrics).
+            should_issue_refresh_token: Whether it should also issue a refresh token

         Returns:
-            Tuple of device ID and access token
+            Tuple of device ID, access token, access token expiration time and refresh token
         """
         res = await self._register_device_client(
             user_id=user_id,
@@ -693,6 +738,7 @@ class RegistrationHandler(BaseHandler):
             initial_display_name=initial_display_name,
             is_guest=is_guest,
             is_appservice_ghost=is_appservice_ghost,
+            should_issue_refresh_token=should_issue_refresh_token,
         )

         login_counter.labels(
@@ -700,7 +746,12 @@ class RegistrationHandler(BaseHandler):
             auth_provider=(auth_provider_id or ""),
         ).inc()

-        return res["device_id"], res["access_token"]
+        return (
+            res["device_id"],
+            res["access_token"],
+            res["valid_until_ms"],
+            res["refresh_token"],
+        )

     async def register_device_inner(
         self,
@@ -709,7 +760,8 @@ class RegistrationHandler(BaseHandler):
         initial_display_name: Optional[str],
         is_guest: bool = False,
         is_appservice_ghost: bool = False,
-    ) -> Dict[str, str]:
+        should_issue_refresh_token: bool = False,
+    ) -> LoginDict:
         """Helper for register_device

         Does the bits that need doing on the main process. Not for use outside this
@@ -724,6 +776,9 @@ class RegistrationHandler(BaseHandler):
             )
             valid_until_ms = self.clock.time_msec() + self.session_lifetime

+        refresh_token = None
+        refresh_token_id = None
+
         registered_device_id = await self.device_handler.check_device_registered(
             user_id, device_id, initial_display_name
         )
@@ -731,14 +786,30 @@ class RegistrationHandler(BaseHandler):
             assert valid_until_ms is None
             access_token = self.macaroon_gen.generate_guest_access_token(user_id)
         else:
+            if should_issue_refresh_token:
+                (
+                    refresh_token,
+                    refresh_token_id,
+                ) = await self._auth_handler.get_refresh_token_for_user_id(
+                    user_id,
+                    device_id=registered_device_id,
+                )
+                valid_until_ms = self.clock.time_msec() + self.access_token_lifetime
+
             access_token = await self._auth_handler.get_access_token_for_user_id(
                 user_id,
                 device_id=registered_device_id,
                 valid_until_ms=valid_until_ms,
                 is_appservice_ghost=is_appservice_ghost,
+                refresh_token_id=refresh_token_id,
             )

-        return {"device_id": registered_device_id, "access_token": access_token}
+        return {
+            "device_id": registered_device_id,
+            "access_token": access_token,
+            "valid_until_ms": valid_until_ms,
+            "refresh_token": refresh_token,
+        }

     async def post_registration_actions(
         self, user_id: str, auth_result: dict, access_token: Optional[str]
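A usage sketch for callers of the new signature (hypothetical wrapper; `handler` stands in for a RegistrationHandler): register_device now returns a 4-tuple, whose last two members are None unless token expiry and refresh tokens are in play.

    from typing import Optional, Tuple

    async def login_device(
        handler, user_id: str
    ) -> Tuple[str, str, Optional[int], Optional[str]]:
        device_id, access_token, valid_until_ms, refresh_token = (
            await handler.register_device(
                user_id,
                device_id=None,
                initial_display_name="My client",
                should_issue_refresh_token=True,
            )
        )
        return device_id, access_token, valid_until_ms, refresh_token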

View file

@@ -83,6 +83,7 @@ class RoomCreationHandler(BaseHandler):
         self.spam_checker = hs.get_spam_checker()
         self.event_creation_handler = hs.get_event_creation_handler()
         self.room_member_handler = hs.get_room_member_handler()
+        self._event_auth_handler = hs.get_event_auth_handler()
         self.config = hs.config

         # Room state based off defined presets
@@ -226,7 +227,7 @@ class RoomCreationHandler(BaseHandler):
             },
         )
         old_room_version = await self.store.get_room_version_id(old_room_id)
-        await self.auth.check_from_context(
+        await self._event_auth_handler.check_from_context(
             old_room_version, tombstone_event, tombstone_context
         )

View file

@@ -25,6 +25,7 @@ from synapse.api.constants import (
     EventTypes,
     HistoryVisibility,
     Membership,
+    RoomTypes,
 )
 from synapse.events import EventBase
 from synapse.events.utils import format_event_for_client_v2
@@ -318,7 +319,8 @@ class SpaceSummaryHandler:
         Returns:
             A tuple of:
-                An iterable of a single value of the room.
+                The room information, if the room should be returned to the
+                user. None, otherwise.

                 An iterable of the sorted children events. This may be limited
                 to a maximum size or may include all children.
@@ -328,7 +330,11 @@ class SpaceSummaryHandler:

         room_entry = await self._build_room_entry(room_id)

-        # look for child rooms/spaces.
+        # If the room is not a space, return just the room information.
+        if room_entry.get("room_type") != RoomTypes.SPACE:
+            return room_entry, ()
+
+        # Otherwise, look for child rooms/spaces.
         child_events = await self._get_child_events(room_id)

         if suggested_only:
@@ -348,6 +354,7 @@ class SpaceSummaryHandler:
                     event_format=format_event_for_client_v2,
                 )
             )
+
         return room_entry, events_result

     async def _summarize_remote_room(
@@ -465,7 +472,7 @@ class SpaceSummaryHandler:
         # If this is a request over federation, check if the host is in the room or
         # is in one of the spaces specified via the join rules.
         elif origin:
-            if await self._auth.check_host_in_room(room_id, origin):
+            if await self._event_auth_handler.check_host_in_room(room_id, origin):
                 return True

             # Alternately, if the host has a user in any of the spaces specified
@@ -478,7 +485,9 @@ class SpaceSummaryHandler:
                     await self._event_auth_handler.get_rooms_that_allow_join(state_ids)
                 )
                 for space_id in allowed_rooms:
-                    if await self._auth.check_host_in_room(space_id, origin):
+                    if await self._event_auth_handler.check_host_in_room(
+                        space_id, origin
+                    ):
                         return True

         # otherwise, check if the room is peekable

View file

@@ -728,7 +728,7 @@ def set_cors_headers(request: Request):
     )
     request.setHeader(
         b"Access-Control-Allow-Headers",
-        b"Origin, X-Requested-With, Content-Type, Accept, Authorization, Date",
+        b"X-Requested-With, Content-Type, Authorization, Date",
     )

View file

@@ -109,12 +109,22 @@ def parse_boolean_from_args(args, name, default=None, required=False):
         return default


+@overload
+def parse_bytes_from_args(
+    args: Dict[bytes, List[bytes]],
+    name: str,
+    default: Optional[bytes] = None,
+) -> Optional[bytes]:
+    ...
+
+
 @overload
 def parse_bytes_from_args(
     args: Dict[bytes, List[bytes]],
     name: str,
     default: Literal[None] = None,
-    required: Literal[True] = True,
+    *,
+    required: Literal[True],
 ) -> bytes:
     ...
@@ -197,7 +207,12 @@ def parse_string(
     """
     args = request.args  # type: Dict[bytes, List[bytes]]  # type: ignore
     return parse_string_from_args(
-        args, name, default, required, allowed_values, encoding
+        args,
+        name,
+        default,
+        required=required,
+        allowed_values=allowed_values,
+        encoding=encoding,
     )
@@ -227,7 +242,20 @@ def parse_strings_from_args(
     args: Dict[bytes, List[bytes]],
     name: str,
     default: Optional[List[str]] = None,
-    required: Literal[True] = True,
+    *,
+    allowed_values: Optional[Iterable[str]] = None,
+    encoding: str = "ascii",
+) -> Optional[List[str]]:
+    ...
+
+
+@overload
+def parse_strings_from_args(
+    args: Dict[bytes, List[bytes]],
+    name: str,
+    default: Optional[List[str]] = None,
+    *,
+    required: Literal[True],
     allowed_values: Optional[Iterable[str]] = None,
     encoding: str = "ascii",
 ) -> List[str]:
@@ -239,6 +267,7 @@ def parse_strings_from_args(
     args: Dict[bytes, List[bytes]],
     name: str,
     default: Optional[List[str]] = None,
+    *,
     required: bool = False,
     allowed_values: Optional[Iterable[str]] = None,
     encoding: str = "ascii",
@@ -299,7 +328,20 @@ def parse_string_from_args(
     args: Dict[bytes, List[bytes]],
     name: str,
     default: Optional[str] = None,
-    required: Literal[True] = True,
+    *,
+    allowed_values: Optional[Iterable[str]] = None,
+    encoding: str = "ascii",
+) -> Optional[str]:
+    ...
+
+
+@overload
+def parse_string_from_args(
+    args: Dict[bytes, List[bytes]],
+    name: str,
+    default: Optional[str] = None,
+    *,
+    required: Literal[True],
     allowed_values: Optional[Iterable[str]] = None,
     encoding: str = "ascii",
 ) -> str:
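A self-contained sketch of the typing pattern applied above (toy function, not Synapse's API): making `required` keyword-only and dropping its `= True` default lets a type checker pick the Optional-returning overload for plain calls and the non-Optional one when `required=True` is passed explicitly.

    from typing import Dict, List, Optional, overload
    from typing_extensions import Literal

    @overload
    def get_arg(args: Dict[str, List[str]], name: str) -> Optional[str]:
        ...

    @overload
    def get_arg(
        args: Dict[str, List[str]], name: str, *, required: Literal[True]
    ) -> str:
        ...

    def get_arg(args, name, *, required=False):
        values = args.get(name)
        if not values:
            if required:
                raise KeyError(name)
            return None
        return values[0]

    limit = get_arg({"limit": ["10"]}, "limit", required=True)  # inferred as str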

View file

@@ -168,7 +168,7 @@ class ModuleApi:
             "Using deprecated ModuleApi.register which creates a dummy user device."
         )
         user_id = yield self.register_user(localpart, displayname, emails or [])
-        _, access_token = yield self.register_device(user_id)
+        _, access_token, _, _ = yield self.register_device(user_id)
         return user_id, access_token

     def register_user(

View file

@@ -104,7 +104,7 @@ class BulkPushRuleEvaluator:
     def __init__(self, hs: "HomeServer"):
         self.hs = hs
         self.store = hs.get_datastore()
-        self.auth = hs.get_auth()
+        self._event_auth_handler = hs.get_event_auth_handler()

         # Used by `RulesForRoom` to ensure only one thing mutates the cache at a
         # time. Keyed off room_id.
@@ -172,7 +172,7 @@ class BulkPushRuleEvaluator:
             # not having a power level event is an extreme edge case
             auth_events = {POWER_KEY: await self.store.get_event(pl_event_id)}
         else:
-            auth_events_ids = self.auth.compute_auth_events(
+            auth_events_ids = self._event_auth_handler.compute_auth_events(
                 event, prev_state_ids, for_verification=False
             )
             auth_events_dict = await self.store.get_events(auth_events_ids)

View file

@@ -36,20 +36,29 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):

     @staticmethod
     async def _serialize_payload(
-        user_id, device_id, initial_display_name, is_guest, is_appservice_ghost
+        user_id,
+        device_id,
+        initial_display_name,
+        is_guest,
+        is_appservice_ghost,
+        should_issue_refresh_token,
     ):
         """
         Args:
-            user_id (int)
             device_id (str|None): Device ID to use, if None a new one is
                 generated.
             initial_display_name (str|None)
             is_guest (bool)
+            is_appservice_ghost (bool)
+            should_issue_refresh_token (bool)
         """
         return {
             "device_id": device_id,
             "initial_display_name": initial_display_name,
             "is_guest": is_guest,
             "is_appservice_ghost": is_appservice_ghost,
+            "should_issue_refresh_token": should_issue_refresh_token,
         }

     async def _handle_request(self, request, user_id):
@@ -59,6 +68,7 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
         initial_display_name = content["initial_display_name"]
         is_guest = content["is_guest"]
         is_appservice_ghost = content["is_appservice_ghost"]
+        should_issue_refresh_token = content["should_issue_refresh_token"]

         res = await self.registration_handler.register_device_inner(
             user_id,
@@ -66,6 +76,7 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
             initial_display_name,
             is_guest,
             is_appservice_ghost=is_appservice_ghost,
+            should_issue_refresh_token=should_issue_refresh_token,
         )

         return 200, res

View file

@@ -14,7 +14,9 @@

 import logging
 import re
-from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional
+from typing import TYPE_CHECKING, Any, Awaitable, Callable, Dict, List, Optional
+
+from typing_extensions import TypedDict

 from synapse.api.errors import Codes, LoginError, SynapseError
 from synapse.api.ratelimiting import Ratelimiter
@@ -25,6 +27,8 @@ from synapse.http import get_request_uri
 from synapse.http.server import HttpServer, finish_request
 from synapse.http.servlet import (
     RestServlet,
+    assert_params_in_dict,
+    parse_boolean,
     parse_bytes_from_args,
     parse_json_object_from_request,
     parse_string,
@@ -40,6 +44,21 @@ if TYPE_CHECKING:

 logger = logging.getLogger(__name__)

+LoginResponse = TypedDict(
+    "LoginResponse",
+    {
+        "user_id": str,
+        "access_token": str,
+        "home_server": str,
+        "expires_in_ms": Optional[int],
+        "refresh_token": Optional[str],
+        "device_id": str,
+        "well_known": Optional[Dict[str, Any]],
+    },
+    total=False,
+)
+

 class LoginRestServlet(RestServlet):
     PATTERNS = client_patterns("/login$", v1=True)
     CAS_TYPE = "m.login.cas"
@@ -48,6 +67,7 @@ class LoginRestServlet(RestServlet):
     JWT_TYPE = "org.matrix.login.jwt"
     JWT_TYPE_DEPRECATED = "m.login.jwt"
     APPSERVICE_TYPE = "uk.half-shot.msc2778.login.application_service"
+    REFRESH_TOKEN_PARAM = "org.matrix.msc2918.refresh_token"

     def __init__(self, hs: "HomeServer"):
         super().__init__()
@@ -65,9 +85,12 @@ class LoginRestServlet(RestServlet):
         self.cas_enabled = hs.config.cas_enabled
         self.oidc_enabled = hs.config.oidc_enabled
         self._msc2858_enabled = hs.config.experimental.msc2858_enabled
+        self._msc2918_enabled = hs.config.access_token_lifetime is not None

         self.auth = hs.get_auth()

+        self.clock = hs.get_clock()
+
         self.auth_handler = self.hs.get_auth_handler()
         self.registration_handler = hs.get_registration_handler()
         self._sso_handler = hs.get_sso_handler()
@@ -138,6 +161,15 @@ class LoginRestServlet(RestServlet):
     async def on_POST(self, request: SynapseRequest):
         login_submission = parse_json_object_from_request(request)

+        if self._msc2918_enabled:
+            # Check if this login should also issue a refresh token, as per
+            # MSC2918
+            should_issue_refresh_token = parse_boolean(
+                request, name=LoginRestServlet.REFRESH_TOKEN_PARAM, default=False
+            )
+        else:
+            should_issue_refresh_token = False
+
         try:
             if login_submission["type"] == LoginRestServlet.APPSERVICE_TYPE:
                 appservice = self.auth.get_appservice_by_req(request)
@@ -147,19 +179,32 @@ class LoginRestServlet(RestServlet):
                         None, request.getClientIP()
                     )

-                result = await self._do_appservice_login(login_submission, appservice)
+                result = await self._do_appservice_login(
+                    login_submission,
+                    appservice,
+                    should_issue_refresh_token=should_issue_refresh_token,
+                )
             elif self.jwt_enabled and (
                 login_submission["type"] == LoginRestServlet.JWT_TYPE
                 or login_submission["type"] == LoginRestServlet.JWT_TYPE_DEPRECATED
             ):
                 await self._address_ratelimiter.ratelimit(None, request.getClientIP())
-                result = await self._do_jwt_login(login_submission)
+                result = await self._do_jwt_login(
+                    login_submission,
+                    should_issue_refresh_token=should_issue_refresh_token,
+                )
             elif login_submission["type"] == LoginRestServlet.TOKEN_TYPE:
                 await self._address_ratelimiter.ratelimit(None, request.getClientIP())
-                result = await self._do_token_login(login_submission)
+                result = await self._do_token_login(
+                    login_submission,
+                    should_issue_refresh_token=should_issue_refresh_token,
+                )
             else:
                 await self._address_ratelimiter.ratelimit(None, request.getClientIP())
-                result = await self._do_other_login(login_submission)
+                result = await self._do_other_login(
+                    login_submission,
+                    should_issue_refresh_token=should_issue_refresh_token,
+                )
         except KeyError:
             raise SynapseError(400, "Missing JSON keys.")

@@ -169,7 +214,10 @@ class LoginRestServlet(RestServlet):
         return 200, result

     async def _do_appservice_login(
-        self, login_submission: JsonDict, appservice: ApplicationService
+        self,
+        login_submission: JsonDict,
+        appservice: ApplicationService,
+        should_issue_refresh_token: bool = False,
     ):
         identifier = login_submission.get("identifier")
         logger.info("Got appservice login request with identifier: %r", identifier)
@@ -198,14 +246,21 @@ class LoginRestServlet(RestServlet):
             raise LoginError(403, "Invalid access_token", errcode=Codes.FORBIDDEN)

         return await self._complete_login(
-            qualified_user_id, login_submission, ratelimit=appservice.is_rate_limited()
+            qualified_user_id,
+            login_submission,
+            ratelimit=appservice.is_rate_limited(),
+            should_issue_refresh_token=should_issue_refresh_token,
         )

-    async def _do_other_login(self, login_submission: JsonDict) -> Dict[str, str]:
+    async def _do_other_login(
+        self, login_submission: JsonDict, should_issue_refresh_token: bool = False
+    ) -> LoginResponse:
         """Handle non-token/saml/jwt logins

         Args:
             login_submission:
+            should_issue_refresh_token: True if this login should issue
+                a refresh token alongside the access token.

         Returns:
             HTTP response
@@ -224,7 +279,10 @@ class LoginRestServlet(RestServlet):
             login_submission, ratelimit=True
         )
         result = await self._complete_login(
-            canonical_user_id, login_submission, callback
+            canonical_user_id,
+            login_submission,
+            callback,
+            should_issue_refresh_token=should_issue_refresh_token,
         )
         return result

@@ -232,11 +290,12 @@ class LoginRestServlet(RestServlet):
         self,
         user_id: str,
         login_submission: JsonDict,
-        callback: Optional[Callable[[Dict[str, str]], Awaitable[None]]] = None,
+        callback: Optional[Callable[[LoginResponse], Awaitable[None]]] = None,
         create_non_existent_users: bool = False,
         ratelimit: bool = True,
         auth_provider_id: Optional[str] = None,
-    ) -> Dict[str, str]:
+        should_issue_refresh_token: bool = False,
+    ) -> LoginResponse:
         """Called when we've successfully authed the user and now need to
         actually login them in (e.g. create devices). This gets called on
         all successful logins.
@@ -253,6 +312,8 @@ class LoginRestServlet(RestServlet):
             ratelimit: Whether to ratelimit the login request.
             auth_provider_id: The SSO IdP the user used, if any (just used for the
                 prometheus metrics).
+            should_issue_refresh_token: True if this login should issue
+                a refresh token alongside the access token.

         Returns:
             result: Dictionary of account information after successful login.
@@ -274,28 +335,48 @@ class LoginRestServlet(RestServlet):
         device_id = login_submission.get("device_id")
         initial_display_name = login_submission.get("initial_device_display_name")
-        device_id, access_token = await self.registration_handler.register_device(
-            user_id, device_id, initial_display_name, auth_provider_id=auth_provider_id
+        (
+            device_id,
+            access_token,
+            valid_until_ms,
+            refresh_token,
+        ) = await self.registration_handler.register_device(
+            user_id,
+            device_id,
+            initial_display_name,
+            auth_provider_id=auth_provider_id,
+            should_issue_refresh_token=should_issue_refresh_token,
         )

-        result = {
-            "user_id": user_id,
-            "access_token": access_token,
-            "home_server": self.hs.hostname,
-            "device_id": device_id,
-        }
+        result = LoginResponse(
+            user_id=user_id,
+            access_token=access_token,
+            home_server=self.hs.hostname,
+            device_id=device_id,
+        )
+
+        if valid_until_ms is not None:
+            expires_in_ms = valid_until_ms - self.clock.time_msec()
+            result["expires_in_ms"] = expires_in_ms
+
+        if refresh_token is not None:
+            result["refresh_token"] = refresh_token

         if callback is not None:
             await callback(result)

         return result

-    async def _do_token_login(self, login_submission: JsonDict) -> Dict[str, str]:
+    async def _do_token_login(
+        self, login_submission: JsonDict, should_issue_refresh_token: bool = False
+    ) -> LoginResponse:
         """
         Handle the final stage of SSO login.

         Args:
             login_submission: The JSON request body.
+            should_issue_refresh_token: True if this login should issue
+                a refresh token alongside the access token.

         Returns:
             The body of the JSON response.
@@ -309,9 +390,12 @@ class LoginRestServlet(RestServlet):
             login_submission,
             self.auth_handler._sso_login_callback,
             auth_provider_id=res.auth_provider_id,
+            should_issue_refresh_token=should_issue_refresh_token,
         )

-    async def _do_jwt_login(self, login_submission: JsonDict) -> Dict[str, str]:
+    async def _do_jwt_login(
+        self, login_submission: JsonDict, should_issue_refresh_token: bool = False
+    ) -> LoginResponse:
         token = login_submission.get("token", None)
         if token is None:
             raise LoginError(
@@ -342,7 +426,10 @@ class LoginRestServlet(RestServlet):

         user_id = UserID(user, self.hs.hostname).to_string()
         result = await self._complete_login(
-            user_id, login_submission, create_non_existent_users=True
+            user_id,
+            login_submission,
+            create_non_existent_users=True,
+            should_issue_refresh_token=should_issue_refresh_token,
         )
         return result

@@ -371,6 +458,42 @@ def _get_auth_flow_dict_for_idp(
     return e


+class RefreshTokenServlet(RestServlet):
+    PATTERNS = client_patterns(
+        "/org.matrix.msc2918.refresh_token/refresh$", releases=(), unstable=True
+    )
+
+    def __init__(self, hs: "HomeServer"):
+        self._auth_handler = hs.get_auth_handler()
+        self._clock = hs.get_clock()
+        self.access_token_lifetime = hs.config.access_token_lifetime
+
+    async def on_POST(
+        self,
+        request: SynapseRequest,
+    ):
+        refresh_submission = parse_json_object_from_request(request)
+
+        assert_params_in_dict(refresh_submission, ["refresh_token"])
+        token = refresh_submission["refresh_token"]
+        if not isinstance(token, str):
+            raise SynapseError(400, "Invalid param: refresh_token", Codes.INVALID_PARAM)
+
+        valid_until_ms = self._clock.time_msec() + self.access_token_lifetime
+        access_token, refresh_token = await self._auth_handler.refresh_token(
+            token, valid_until_ms
+        )
+        expires_in_ms = valid_until_ms - self._clock.time_msec()
+        return (
+            200,
+            {
+                "access_token": access_token,
+                "refresh_token": refresh_token,
+                "expires_in_ms": expires_in_ms,
+            },
+        )
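For illustration, the request and response shapes of the unstable endpoint above (token values invented; the field names and expiry calculation come from the servlet, and the path assumes the standard unstable prefix):

    import json

    # POST /_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh
    request_body = {"refresh_token": "<opaque refresh token>"}

    # A successful response rotates both tokens and reports the new expiry:
    response_body = {
        "access_token": "<new access token>",
        "refresh_token": "<new refresh token>",
        "expires_in_ms": 60000,  # valid_until_ms - now, per access_token_lifetime
    }
    print(json.dumps(response_body, indent=2))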

 class SsoRedirectServlet(RestServlet):
     PATTERNS = list(client_patterns("/login/(cas|sso)/redirect$", v1=True)) + [
         re.compile(
@@ -477,6 +600,8 @@ class CasTicketServlet(RestServlet):

 def register_servlets(hs, http_server):
     LoginRestServlet(hs).register(http_server)
+    if hs.config.access_token_lifetime is not None:
+        RefreshTokenServlet(hs).register(http_server)
     SsoRedirectServlet(hs).register(http_server)
     if hs.config.cas_enabled:
         CasTicketServlet(hs).register(http_server)

View file

@@ -41,11 +41,13 @@ from synapse.http.server import finish_request, respond_with_html
 from synapse.http.servlet import (
     RestServlet,
     assert_params_in_dict,
+    parse_boolean,
     parse_json_object_from_request,
     parse_string,
 )
 from synapse.metrics import threepid_send_requests
 from synapse.push.mailer import Mailer
+from synapse.types import JsonDict
 from synapse.util.msisdn import phone_number_to_msisdn
 from synapse.util.ratelimitutils import FederationRateLimiter
 from synapse.util.stringutils import assert_valid_client_secret, random_string
@@ -399,6 +401,7 @@ class RegisterRestServlet(RestServlet):
         self.password_policy_handler = hs.get_password_policy_handler()
         self.clock = hs.get_clock()
         self._registration_enabled = self.hs.config.enable_registration
+        self._msc2918_enabled = hs.config.access_token_lifetime is not None

         self._registration_flows = _calculate_registration_flows(
             hs.config, self.auth_handler
@@ -424,6 +427,15 @@ class RegisterRestServlet(RestServlet):
                 "Do not understand membership kind: %s" % (kind.decode("utf8"),)
             )

+        if self._msc2918_enabled:
+            # Check if this registration should also issue a refresh token, as
+            # per MSC2918
+            should_issue_refresh_token = parse_boolean(
+                request, name="org.matrix.msc2918.refresh_token", default=False
+            )
+        else:
+            should_issue_refresh_token = False
+
         # Pull out the provided username and do basic sanity checks early since
         # the auth layer will store these in sessions.
         desired_username = None
@@ -462,7 +474,10 @@ class RegisterRestServlet(RestServlet):
                 raise SynapseError(400, "Desired Username is missing or not a string")

             result = await self._do_appservice_registration(
-                desired_username, access_token, body
+                desired_username,
+                access_token,
+                body,
+                should_issue_refresh_token=should_issue_refresh_token,
             )
             return 200, result
@@ -665,7 +680,9 @@ class RegisterRestServlet(RestServlet):
             registered = True

         return_dict = await self._create_registration_details(
-            registered_user_id, params
+            registered_user_id,
+            params,
+            should_issue_refresh_token=should_issue_refresh_token,
         )

         if registered:
@@ -677,7 +694,9 @@ class RegisterRestServlet(RestServlet):

         return 200, return_dict

-    async def _do_appservice_registration(self, username, as_token, body):
+    async def _do_appservice_registration(
+        self, username, as_token, body, should_issue_refresh_token: bool = False
+    ):
         user_id = await self.registration_handler.appservice_register(
             username, as_token
         )
@@ -685,19 +704,27 @@ class RegisterRestServlet(RestServlet):
             user_id,
             body,
             is_appservice_ghost=True,
+            should_issue_refresh_token=should_issue_refresh_token,
         )

     async def _create_registration_details(
-        self, user_id, params, is_appservice_ghost=False
+        self,
+        user_id: str,
+        params: JsonDict,
+        is_appservice_ghost: bool = False,
+        should_issue_refresh_token: bool = False,
     ):
         """Complete registration of newly-registered user

         Allocates device_id if one was not given; also creates access_token.

         Args:
-            (str) user_id: full canonical @user:id
-            (object) params: registration parameters, from which we pull
-                device_id, initial_device_name and inhibit_login
+            user_id: full canonical @user:id
+            params: registration parameters, from which we pull device_id,
+                initial_device_name and inhibit_login
+            is_appservice_ghost
+            should_issue_refresh_token: True if this registration should issue
+                a refresh token alongside the access token.
         Returns:
             dictionary for response from /register
         """
@@ -705,15 +732,29 @@ class RegisterRestServlet(RestServlet):
         if not params.get("inhibit_login", False):
             device_id = params.get("device_id")
             initial_display_name = params.get("initial_device_display_name")
-            device_id, access_token = await self.registration_handler.register_device(
+            (
+                device_id,
+                access_token,
+                valid_until_ms,
+                refresh_token,
+            ) = await self.registration_handler.register_device(
                 user_id,
                 device_id,
                 initial_display_name,
                 is_guest=False,
                 is_appservice_ghost=is_appservice_ghost,
+                should_issue_refresh_token=should_issue_refresh_token,
             )

             result.update({"access_token": access_token, "device_id": device_id})

+            if valid_until_ms is not None:
+                expires_in_ms = valid_until_ms - self.clock.time_msec()
+                result["expires_in_ms"] = expires_in_ms
+
+            if refresh_token is not None:
+                result["refresh_token"] = refresh_token
+
         return result

     async def _do_guest_registration(self, params, address=None):
@@ -727,19 +768,30 @@ class RegisterRestServlet(RestServlet):
         # we have nowhere to store it.
         device_id = synapse.api.auth.GUEST_DEVICE_ID
         initial_display_name = params.get("initial_device_display_name")
-        device_id, access_token = await self.registration_handler.register_device(
+        (
+            device_id,
+            access_token,
+            valid_until_ms,
+            refresh_token,
+        ) = await self.registration_handler.register_device(
             user_id, device_id, initial_display_name, is_guest=True
         )

-        return (
-            200,
-            {
-                "user_id": user_id,
-                "device_id": device_id,
-                "access_token": access_token,
-                "home_server": self.hs.hostname,
-            },
-        )
+        result = {
+            "user_id": user_id,
+            "device_id": device_id,
+            "access_token": access_token,
+            "home_server": self.hs.hostname,
+        }
+
+        if valid_until_ms is not None:
+            expires_in_ms = valid_until_ms - self.clock.time_msec()
+            result["expires_in_ms"] = expires_in_ms
+
+        if refresh_token is not None:
+            result["refresh_token"] = refresh_token
+
+        return 200, result


 def _calculate_registration_flows(

View file

@@ -13,6 +13,7 @@
 # limitations under the License.
 import itertools
 import logging
+from collections import defaultdict
 from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple

 from synapse.api.constants import Membership, PresenceState
@@ -232,29 +233,51 @@ class SyncRestServlet(RestServlet):
         )

         logger.debug("building sync response dict")
-        return {
-            "account_data": {"events": sync_result.account_data},
-            "to_device": {"events": sync_result.to_device},
-            "device_lists": {
-                "changed": list(sync_result.device_lists.changed),
-                "left": list(sync_result.device_lists.left),
-            },
-            "presence": SyncRestServlet.encode_presence(sync_result.presence, time_now),
-            "rooms": {
-                Membership.JOIN: joined,
-                Membership.INVITE: invited,
-                Membership.KNOCK: knocked,
-                Membership.LEAVE: archived,
-            },
-            "groups": {
-                Membership.JOIN: sync_result.groups.join,
-                Membership.INVITE: sync_result.groups.invite,
-                Membership.LEAVE: sync_result.groups.leave,
-            },
-            "device_one_time_keys_count": sync_result.device_one_time_keys_count,
-            "org.matrix.msc2732.device_unused_fallback_key_types": sync_result.device_unused_fallback_key_types,
-            "next_batch": await sync_result.next_batch.to_string(self.store),
-        }
+
+        response: dict = defaultdict(dict)
+        response["next_batch"] = await sync_result.next_batch.to_string(self.store)
+
+        if sync_result.account_data:
+            response["account_data"] = {"events": sync_result.account_data}
+        if sync_result.presence:
+            response["presence"] = SyncRestServlet.encode_presence(
+                sync_result.presence, time_now
+            )
+
+        if sync_result.to_device:
+            response["to_device"] = {"events": sync_result.to_device}
+
+        if sync_result.device_lists.changed:
+            response["device_lists"]["changed"] = list(sync_result.device_lists.changed)
+        if sync_result.device_lists.left:
+            response["device_lists"]["left"] = list(sync_result.device_lists.left)
+
+        if sync_result.device_one_time_keys_count:
+            response[
+                "device_one_time_keys_count"
+            ] = sync_result.device_one_time_keys_count
+        if sync_result.device_unused_fallback_key_types:
+            response[
+                "org.matrix.msc2732.device_unused_fallback_key_types"
+            ] = sync_result.device_unused_fallback_key_types
+
+        if joined:
+            response["rooms"][Membership.JOIN] = joined
+        if invited:
+            response["rooms"][Membership.INVITE] = invited
+        if knocked:
+            response["rooms"][Membership.KNOCK] = knocked
+        if archived:
+            response["rooms"][Membership.LEAVE] = archived
+
+        if sync_result.groups.join:
+            response["groups"][Membership.JOIN] = sync_result.groups.join
+        if sync_result.groups.invite:
+            response["groups"][Membership.INVITE] = sync_result.groups.invite
+        if sync_result.groups.leave:
+            response["groups"][Membership.LEAVE] = sync_result.groups.leave
+
+        return response
@staticmethod @staticmethod
def encode_presence(events, time_now): def encode_presence(events, time_now):
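The rewrite above leans on defaultdict(dict): nested sections such as response["rooms"] spring into existence on first assignment, and sections that are never written simply do not appear in the sync body. A standalone illustration of the pattern (toy values, not Synapse code):

from collections import defaultdict

response: dict = defaultdict(dict)
response["next_batch"] = "s72594_4483_1934"

joined, invited = {"!a:example.org": {}}, {}
if joined:
    response["rooms"]["join"] = joined  # creates the "rooms" dict on first write
if invited:
    response["rooms"]["invite"] = invited  # skipped, so no empty "invite" key

assert "invite" not in response["rooms"]
assert "account_data" not in response  # membership tests do not create keys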
View file
@@ -111,7 +111,7 @@ def make_conn(
    db_config: DatabaseConnectionConfig,
    engine: BaseDatabaseEngine,
    default_txn_name: str,
-) -> Connection:
+) -> "LoggingDatabaseConnection":
    """Make a new connection to the database and return it.

    Returns:
View file
@@ -16,6 +16,8 @@ import logging
from queue import Empty, PriorityQueue
from typing import Collection, Dict, Iterable, List, Optional, Set, Tuple

+from prometheus_client import Gauge
+
from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import StoreError
from synapse.api.room_versions import RoomVersion
@@ -32,6 +34,16 @@ from synapse.util.caches.descriptors import cached
from synapse.util.caches.lrucache import LruCache
from synapse.util.iterutils import batch_iter

+oldest_pdu_in_federation_staging = Gauge(
+    "synapse_federation_server_oldest_inbound_pdu_in_staging",
+    "The age in seconds since we received the oldest pdu in the federation staging area",
+)
+
+number_pdus_in_federation_queue = Gauge(
+    "synapse_federation_server_number_inbound_pdu_in_staging",
+    "The total number of events in the inbound federation staging",
+)
+
logger = logging.getLogger(__name__)

@@ -54,6 +66,8 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
            500000, "_event_auth_cache", size_callback=len
        )  # type: LruCache[str, List[Tuple[str, int]]]

+       self._clock.looping_call(self._get_stats_for_federation_staging, 30 * 1000)
+
    async def get_auth_chain(
        self, room_id: str, event_ids: Collection[str], include_given: bool = False
    ) -> List[EventBase]:
@@ -1075,15 +1089,61 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
        self,
        origin: str,
        event_id: str,
-   ) -> None:
-       """Remove the given event from the staging area"""
-       await self.db_pool.simple_delete(
-           table="federation_inbound_events_staging",
-           keyvalues={
-               "origin": origin,
-               "event_id": event_id,
-           },
-           desc="remove_received_event_from_staging",
-       )
+   ) -> Optional[int]:
+       """Remove the given event from the staging area.
+
+       Returns:
+           The received_ts of the row that was deleted, if any.
+       """
+       if self.db_pool.engine.supports_returning:
+
+           def _remove_received_event_from_staging_txn(txn):
+               sql = """
+                   DELETE FROM federation_inbound_events_staging
+                   WHERE origin = ? AND event_id = ?
+                   RETURNING received_ts
+               """
+
+               txn.execute(sql, (origin, event_id))
+               return txn.fetchone()
+
+           row = await self.db_pool.runInteraction(
+               "remove_received_event_from_staging",
+               _remove_received_event_from_staging_txn,
+               db_autocommit=True,
+           )
+           if row is None:
+               return None
+
+           return row[0]
+
+       else:
+
+           def _remove_received_event_from_staging_txn(txn):
+               received_ts = self.db_pool.simple_select_one_onecol_txn(
+                   txn,
+                   table="federation_inbound_events_staging",
+                   keyvalues={
+                       "origin": origin,
+                       "event_id": event_id,
+                   },
+                   retcol="received_ts",
+                   allow_none=True,
+               )
+               self.db_pool.simple_delete_txn(
+                   txn,
+                   table="federation_inbound_events_staging",
+                   keyvalues={
+                       "origin": origin,
+                       "event_id": event_id,
+                   },
+               )
+
+               return received_ts
+
+           return await self.db_pool.runInteraction(
+               "remove_received_event_from_staging",
+               _remove_received_event_from_staging_txn,
+           )
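The two branches above are equivalent: with RETURNING, the delete and the read of received_ts collapse into one statement, otherwise a select-then-delete runs inside a single transaction. A standalone sketch of the one-statement form, gated on the same SQLite 3.35.0 threshold the engines use later in this diff (illustrative, not part of this commit):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE federation_inbound_events_staging "
    "(origin TEXT, event_id TEXT, received_ts BIGINT)"
)
conn.execute(
    "INSERT INTO federation_inbound_events_staging VALUES (?, ?, ?)",
    ("example.org", "$abc", 1625000000000),
)

if sqlite3.sqlite_version_info >= (3, 35, 0):
    row = conn.execute(
        "DELETE FROM federation_inbound_events_staging "
        "WHERE origin = ? AND event_id = ? RETURNING received_ts",
        ("example.org", "$abc"),
    ).fetchone()
    print(row)  # (1625000000000,)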
    async def get_next_staged_event_id_for_room(
@@ -1147,6 +1207,40 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
        return origin, event
async def get_all_rooms_with_staged_incoming_events(self) -> List[str]:
"""Get the room IDs of all events currently staged."""
return await self.db_pool.simple_select_onecol(
table="federation_inbound_events_staging",
keyvalues={},
retcol="DISTINCT room_id",
desc="get_all_rooms_with_staged_incoming_events",
)
@wrap_as_background_process("_get_stats_for_federation_staging")
async def _get_stats_for_federation_staging(self):
"""Update the prometheus metrics for the inbound federation staging area."""
def _get_stats_for_federation_staging_txn(txn):
txn.execute(
"SELECT coalesce(count(*), 0) FROM federation_inbound_events_staging"
)
(count,) = txn.fetchone()
txn.execute(
"SELECT coalesce(min(received_ts), 0) FROM federation_inbound_events_staging"
)
(age,) = txn.fetchone()
return count, age
count, age = await self.db_pool.runInteraction(
"_get_stats_for_federation_staging", _get_stats_for_federation_staging_txn
)
number_pdus_in_federation_queue.set(count)
oldest_pdu_in_federation_staging.set(age)
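For context, the staging-area metrics above follow the standard prometheus_client shape: a periodic job computes the values and calls Gauge.set(), and the scrape endpoint reports whatever was set last. A minimal standalone sketch of that pattern (metric name invented for illustration):

from prometheus_client import Gauge, generate_latest

queue_depth = Gauge("example_queue_depth", "Items currently queued")

def report(items):
    # Synapse drives this from a 30s looping_call; gauges keep only the
    # most recent value, so frequent updates are cheap and idempotent.
    queue_depth.set(len(items))

report(["a", "b", "c"])
print(generate_latest().decode())  # ... example_queue_depth 3.0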
class EventFederationStore(EventFederationWorkerStore):
    """Responsible for storing and serving up the various graphs associated
View file
@@ -29,6 +29,34 @@ from synapse.types import JsonDict

logger = logging.getLogger(__name__)
_REPLACE_STREAM_ORDERING_SQL_COMMANDS = (
# there should be no leftover rows without a stream_ordering2, but just in case...
"UPDATE events SET stream_ordering2 = stream_ordering WHERE stream_ordering2 IS NULL",
# now we can drop the rule and switch the columns
"DROP RULE populate_stream_ordering2 ON events",
"ALTER TABLE events DROP COLUMN stream_ordering",
"ALTER TABLE events RENAME COLUMN stream_ordering2 TO stream_ordering",
# ... and finally, rename the indexes into place for consistency with sqlite
"ALTER INDEX event_contains_url_index2 RENAME TO event_contains_url_index",
"ALTER INDEX events_order_room2 RENAME TO events_order_room",
"ALTER INDEX events_room_stream2 RENAME TO events_room_stream",
"ALTER INDEX events_ts2 RENAME TO events_ts",
)
class _BackgroundUpdates:
EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
DELETE_SOFT_FAILED_EXTREMITIES = "delete_soft_failed_extremities"
POPULATE_STREAM_ORDERING2 = "populate_stream_ordering2"
INDEX_STREAM_ORDERING2 = "index_stream_ordering2"
INDEX_STREAM_ORDERING2_CONTAINS_URL = "index_stream_ordering2_contains_url"
INDEX_STREAM_ORDERING2_ROOM_ORDER = "index_stream_ordering2_room_order"
INDEX_STREAM_ORDERING2_ROOM_STREAM = "index_stream_ordering2_room_stream"
INDEX_STREAM_ORDERING2_TS = "index_stream_ordering2_ts"
REPLACE_STREAM_ORDERING_COLUMN = "replace_stream_ordering_column"
@attr.s(slots=True, frozen=True)
class _CalculateChainCover:
    """Return value for _calculate_chain_cover_txn."""
@@ -48,19 +76,15 @@ class _CalculateChainCover:

class EventsBackgroundUpdatesStore(SQLBaseStore):
-   EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
-   EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
-   DELETE_SOFT_FAILED_EXTREMITIES = "delete_soft_failed_extremities"
-
    def __init__(self, database: DatabasePool, db_conn, hs):
        super().__init__(database, db_conn, hs)

        self.db_pool.updates.register_background_update_handler(
-           self.EVENT_ORIGIN_SERVER_TS_NAME, self._background_reindex_origin_server_ts
+           _BackgroundUpdates.EVENT_ORIGIN_SERVER_TS_NAME,
+           self._background_reindex_origin_server_ts,
        )
        self.db_pool.updates.register_background_update_handler(
-           self.EVENT_FIELDS_SENDER_URL_UPDATE_NAME,
+           _BackgroundUpdates.EVENT_FIELDS_SENDER_URL_UPDATE_NAME,
            self._background_reindex_fields_sender,
        )
@@ -85,7 +109,8 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
        )

        self.db_pool.updates.register_background_update_handler(
-           self.DELETE_SOFT_FAILED_EXTREMITIES, self._cleanup_extremities_bg_update
+           _BackgroundUpdates.DELETE_SOFT_FAILED_EXTREMITIES,
+           self._cleanup_extremities_bg_update,
        )

        self.db_pool.updates.register_background_update_handler(
@@ -139,6 +164,59 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
            self._purged_chain_cover_index,
        )
################################################################################
# bg updates for replacing stream_ordering with a BIGINT
# (these only run on postgres.)
self.db_pool.updates.register_background_update_handler(
_BackgroundUpdates.POPULATE_STREAM_ORDERING2,
self._background_populate_stream_ordering2,
)
# CREATE UNIQUE INDEX events_stream_ordering ON events(stream_ordering2);
self.db_pool.updates.register_background_index_update(
_BackgroundUpdates.INDEX_STREAM_ORDERING2,
index_name="events_stream_ordering",
table="events",
columns=["stream_ordering2"],
unique=True,
)
# CREATE INDEX event_contains_url_index ON events(room_id, topological_ordering, stream_ordering) WHERE contains_url = true AND outlier = false;
self.db_pool.updates.register_background_index_update(
_BackgroundUpdates.INDEX_STREAM_ORDERING2_CONTAINS_URL,
index_name="event_contains_url_index2",
table="events",
columns=["room_id", "topological_ordering", "stream_ordering2"],
where_clause="contains_url = true AND outlier = false",
)
# CREATE INDEX events_order_room ON events(room_id, topological_ordering, stream_ordering);
self.db_pool.updates.register_background_index_update(
_BackgroundUpdates.INDEX_STREAM_ORDERING2_ROOM_ORDER,
index_name="events_order_room2",
table="events",
columns=["room_id", "topological_ordering", "stream_ordering2"],
)
# CREATE INDEX events_room_stream ON events(room_id, stream_ordering);
self.db_pool.updates.register_background_index_update(
_BackgroundUpdates.INDEX_STREAM_ORDERING2_ROOM_STREAM,
index_name="events_room_stream2",
table="events",
columns=["room_id", "stream_ordering2"],
)
# CREATE INDEX events_ts ON events(origin_server_ts, stream_ordering);
self.db_pool.updates.register_background_index_update(
_BackgroundUpdates.INDEX_STREAM_ORDERING2_TS,
index_name="events_ts2",
table="events",
columns=["origin_server_ts", "stream_ordering2"],
)
self.db_pool.updates.register_background_update_handler(
_BackgroundUpdates.REPLACE_STREAM_ORDERING_COLUMN,
self._background_replace_stream_ordering_column,
)
################################################################################
    async def _background_reindex_fields_sender(self, progress, batch_size):
        target_min_stream_id = progress["target_min_stream_id_inclusive"]
        max_stream_id = progress["max_stream_id_exclusive"]
@@ -190,18 +268,18 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
            }

            self.db_pool.updates._background_update_progress_txn(
-               txn, self.EVENT_FIELDS_SENDER_URL_UPDATE_NAME, progress
+               txn, _BackgroundUpdates.EVENT_FIELDS_SENDER_URL_UPDATE_NAME, progress
            )

            return len(rows)

        result = await self.db_pool.runInteraction(
-           self.EVENT_FIELDS_SENDER_URL_UPDATE_NAME, reindex_txn
+           _BackgroundUpdates.EVENT_FIELDS_SENDER_URL_UPDATE_NAME, reindex_txn
        )

        if not result:
            await self.db_pool.updates._end_background_update(
-               self.EVENT_FIELDS_SENDER_URL_UPDATE_NAME
+               _BackgroundUpdates.EVENT_FIELDS_SENDER_URL_UPDATE_NAME
            )

        return result
@@ -264,18 +342,18 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
            }

            self.db_pool.updates._background_update_progress_txn(
-               txn, self.EVENT_ORIGIN_SERVER_TS_NAME, progress
+               txn, _BackgroundUpdates.EVENT_ORIGIN_SERVER_TS_NAME, progress
            )

            return len(rows_to_update)

        result = await self.db_pool.runInteraction(
-           self.EVENT_ORIGIN_SERVER_TS_NAME, reindex_search_txn
+           _BackgroundUpdates.EVENT_ORIGIN_SERVER_TS_NAME, reindex_search_txn
        )

        if not result:
            await self.db_pool.updates._end_background_update(
-               self.EVENT_ORIGIN_SERVER_TS_NAME
+               _BackgroundUpdates.EVENT_ORIGIN_SERVER_TS_NAME
            )

        return result
@@ -454,7 +532,7 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
        if not num_handled:
            await self.db_pool.updates._end_background_update(
-               self.DELETE_SOFT_FAILED_EXTREMITIES
+               _BackgroundUpdates.DELETE_SOFT_FAILED_EXTREMITIES
            )

            def _drop_table_txn(txn):
@@ -1009,3 +1087,81 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
        await self.db_pool.updates._end_background_update("purged_chain_cover")

        return result
async def _background_populate_stream_ordering2(
self, progress: JsonDict, batch_size: int
) -> int:
"""Populate events.stream_ordering2, then replace stream_ordering
This is to deal with the fact that stream_ordering was initially created as a
32-bit integer field.
"""
batch_size = max(batch_size, 1)
def process(txn: Cursor) -> int:
last_stream = progress.get("last_stream", -(1 << 31))
txn.execute(
"""
UPDATE events SET stream_ordering2=stream_ordering
WHERE stream_ordering IN (
SELECT stream_ordering FROM events WHERE stream_ordering > ?
ORDER BY stream_ordering LIMIT ?
)
RETURNING stream_ordering;
""",
(last_stream, batch_size),
)
row_count = txn.rowcount
if row_count == 0:
return 0
last_stream = max(row[0] for row in txn)
logger.info("populated stream_ordering2 up to %i", last_stream)
self.db_pool.updates._background_update_progress_txn(
txn,
_BackgroundUpdates.POPULATE_STREAM_ORDERING2,
{"last_stream": last_stream},
)
return row_count
result = await self.db_pool.runInteraction(
"_background_populate_stream_ordering2", process
)
if result != 0:
return result
await self.db_pool.updates._end_background_update(
_BackgroundUpdates.POPULATE_STREAM_ORDERING2
)
return 0
async def _background_replace_stream_ordering_column(
self, progress: JsonDict, batch_size: int
) -> int:
"""Drop the old 'stream_ordering' column and rename 'stream_ordering2' into its place."""
def process(txn: Cursor) -> None:
for sql in _REPLACE_STREAM_ORDERING_SQL_COMMANDS:
logger.info("completing stream_ordering migration: %s", sql)
txn.execute(sql)
# ANALYZE the new column to build stats on it, to encourage PostgreSQL to use the
# indexes on it.
# We need to pass execute a dummy function to handle the txn's result otherwise
# it tries to call fetchall() on it and fails because there's no result to fetch.
await self.db_pool.execute(
"background_analyze_new_stream_ordering_column",
lambda txn: None,
"ANALYZE events(stream_ordering2)",
)
await self.db_pool.runInteraction(
"_background_replace_stream_ordering_column", process
)
await self.db_pool.updates._end_background_update(
_BackgroundUpdates.REPLACE_STREAM_ORDERING_COLUMN
)
return 0
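The populate/replace pair above is the usual online-migration shape: backfill in bounded batches while persisting a high-water mark, then swap the column once a batch comes back empty. A toy version of the batch loop against plain sqlite3 (the real update runs through Synapse's DatabasePool and, on postgres, uses the RETURNING form shown above):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (stream_ordering INTEGER, stream_ordering2 BIGINT)")
conn.executemany("INSERT INTO events VALUES (?, NULL)", [(i,) for i in range(10)])

def populate_batch(last: int, batch_size: int):
    """Copy one batch; returns (rows_handled, new_high_water_mark)."""
    rows = conn.execute(
        "SELECT stream_ordering FROM events WHERE stream_ordering > ? "
        "ORDER BY stream_ordering LIMIT ?",
        (last, batch_size),
    ).fetchall()
    for (so,) in rows:
        conn.execute(
            "UPDATE events SET stream_ordering2 = ? WHERE stream_ordering = ?",
            (so, so),
        )
    return len(rows), (rows[-1][0] if rows else last)

last = -(1 << 31)  # same sentinel starting point as the real update
while True:
    handled, last = populate_batch(last, 3)
    if handled == 0:
        break  # backfill done; the column swap would run next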
View file
@@ -73,20 +73,20 @@ class ProfileWorkerStore(SQLBaseStore):
    async def set_profile_displayname(
        self, user_localpart: str, new_displayname: Optional[str]
    ) -> None:
-       await self.db_pool.simple_update_one(
+       await self.db_pool.simple_upsert(
            table="profiles",
            keyvalues={"user_id": user_localpart},
-           updatevalues={"displayname": new_displayname},
+           values={"displayname": new_displayname},
            desc="set_profile_displayname",
        )

    async def set_profile_avatar_url(
        self, user_localpart: str, new_avatar_url: Optional[str]
    ) -> None:
-       await self.db_pool.simple_update_one(
+       await self.db_pool.simple_upsert(
            table="profiles",
            keyvalues={"user_id": user_localpart},
-           updatevalues={"avatar_url": new_avatar_url},
+           values={"avatar_url": new_avatar_url},
            desc="set_profile_avatar_url",
        )
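Switching from simple_update_one to simple_upsert means setting a displayname or avatar now also works when no profile row exists yet: the row is created instead of the update failing. The equivalent SQL shape, sketched with sqlite3 (the exact SQL simple_upsert emits depends on the engine):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (user_id TEXT PRIMARY KEY, displayname TEXT)")

# No row for "alice" yet: a plain UPDATE would match nothing, but the
# upsert inserts the row and would update it on a later conflict.
conn.execute(
    "INSERT INTO profiles (user_id, displayname) VALUES (?, ?) "
    "ON CONFLICT (user_id) DO UPDATE SET displayname = excluded.displayname",
    ("alice", "Alice"),
)
print(conn.execute("SELECT * FROM profiles").fetchall())  # [('alice', 'Alice')]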
View file
@@ -53,6 +53,9 @@ class TokenLookupResult:
        valid_until_ms: The timestamp the token expires, if any.
        token_owner: The "owner" of the token. This is either the same as the
            user, or a server admin who is logged in as the user.
+       token_used: True if this token was used at least once in a request.
+           This field can be out of date since `get_user_by_access_token` is
+           cached.
    """

    user_id = attr.ib(type=str)
@@ -62,6 +65,7 @@ class TokenLookupResult:
    device_id = attr.ib(type=Optional[str], default=None)
    valid_until_ms = attr.ib(type=Optional[int], default=None)
    token_owner = attr.ib(type=str)
+   token_used = attr.ib(type=bool, default=False)

    # Make the token owner default to the user ID, which is the common case.
    @token_owner.default
@@ -69,6 +73,29 @@ class TokenLookupResult:
        return self.user_id
@attr.s(frozen=True, slots=True)
class RefreshTokenLookupResult:
"""Result of looking up a refresh token."""
user_id = attr.ib(type=str)
"""The user this token belongs to."""
device_id = attr.ib(type=str)
"""The device associated with this refresh token."""
token_id = attr.ib(type=int)
"""The ID of this refresh token."""
next_token_id = attr.ib(type=Optional[int])
"""The ID of the refresh token which replaced this one."""
has_next_refresh_token_been_refreshed = attr.ib(type=bool)
"""True if the next refresh token was used for another refresh."""
has_next_access_token_been_used = attr.ib(type=bool)
"""True if the next access token was already used at least once."""
class RegistrationWorkerStore(CacheInvalidationWorkerStore):
    def __init__(
        self,
@@ -441,7 +468,8 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
                access_tokens.id as token_id,
                access_tokens.device_id,
                access_tokens.valid_until_ms,
-               access_tokens.user_id as token_owner
+               access_tokens.user_id as token_owner,
+               access_tokens.used as token_used
            FROM users
            INNER JOIN access_tokens on users.name = COALESCE(puppets_user_id, access_tokens.user_id)
            WHERE token = ?
@@ -449,8 +477,15 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
            txn.execute(sql, (token,))
            rows = self.db_pool.cursor_to_dict(txn)

            if rows:
-               return TokenLookupResult(**rows[0])
+               row = rows[0]
+
+               # This field is nullable, ensure it comes out as a boolean
+               if row["token_used"] is None:
+                   row["token_used"] = False
+
+               return TokenLookupResult(**row)

            return None
@@ -1072,6 +1107,111 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
            desc="update_access_token_last_validated",
        )
@cached()
async def mark_access_token_as_used(self, token_id: int) -> None:
"""
Mark the access token as used, which invalidates the refresh token used
to obtain it.
Because get_user_by_access_token is cached, this function might be
called multiple times for the same token, effectively doing unnecessary
SQL updates. Because updating the `used` field only goes one way (from
False to True) it is safe to cache this function as well to avoid this
issue.
Args:
token_id: The ID of the access token to update.
Raises:
StoreError if there was a problem updating this.
"""
await self.db_pool.simple_update_one(
"access_tokens",
{"id": token_id},
{"used": True},
desc="mark_access_token_as_used",
)
async def lookup_refresh_token(
self, token: str
) -> Optional[RefreshTokenLookupResult]:
"""Lookup a refresh token with hints about its validity."""
def _lookup_refresh_token_txn(txn) -> Optional[RefreshTokenLookupResult]:
txn.execute(
"""
SELECT
rt.id token_id,
rt.user_id,
rt.device_id,
rt.next_token_id,
(nrt.next_token_id IS NOT NULL) has_next_refresh_token_been_refreshed,
at.used has_next_access_token_been_used
FROM refresh_tokens rt
LEFT JOIN refresh_tokens nrt ON rt.next_token_id = nrt.id
LEFT JOIN access_tokens at ON at.refresh_token_id = nrt.id
WHERE rt.token = ?
""",
(token,),
)
row = txn.fetchone()
if row is None:
return None
return RefreshTokenLookupResult(
token_id=row[0],
user_id=row[1],
device_id=row[2],
next_token_id=row[3],
has_next_refresh_token_been_refreshed=row[4],
# This column is nullable, ensure it's a boolean
has_next_access_token_been_used=(row[5] or False),
)
return await self.db_pool.runInteraction(
"lookup_refresh_token", _lookup_refresh_token_txn
)
async def replace_refresh_token(self, token_id: int, next_token_id: int) -> None:
"""
Set the successor of a refresh token, removing the existing successor
if any.
Args:
token_id: ID of the refresh token to update.
next_token_id: ID of its successor.
"""
def _replace_refresh_token_txn(txn) -> None:
# First check if there was an existing refresh token
old_next_token_id = self.db_pool.simple_select_one_onecol_txn(
txn,
"refresh_tokens",
{"id": token_id},
"next_token_id",
allow_none=True,
)
self.db_pool.simple_update_one_txn(
txn,
"refresh_tokens",
{"id": token_id},
{"next_token_id": next_token_id},
)
# Delete the old "next" token if it exists. This should cascade and
# delete the associated access_token
if old_next_token_id is not None:
self.db_pool.simple_delete_one_txn(
txn,
"refresh_tokens",
{"id": old_next_token_id},
)
await self.db_pool.runInteraction(
"replace_refresh_token", _replace_refresh_token_txn
)
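The next_token_id chain plus the two has_next_* hints fetched above are what let the handler decide whether a presented refresh token is still usable. A toy re-statement of that rule (illustrative only; the authoritative checks live in the auth handler, and the Lookup tuple here is an invented stand-in for RefreshTokenLookupResult):

from collections import namedtuple

Lookup = namedtuple(
    "Lookup",
    [
        "next_token_id",
        "has_next_refresh_token_been_refreshed",
        "has_next_access_token_been_used",
    ],
)

def can_refresh(lookup: Lookup) -> bool:
    if lookup.next_token_id is None:
        return True  # token has never been exchanged yet
    # Replaying an already-exchanged refresh token is tolerated only while
    # its successor is still completely unused.
    return not (
        lookup.has_next_refresh_token_been_refreshed
        or lookup.has_next_access_token_been_used
    )

assert can_refresh(Lookup(None, False, False))
assert not can_refresh(Lookup(2, False, True))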
class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
    def __init__(
@@ -1263,6 +1403,7 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
        self._ignore_unknown_session_error = hs.config.request_token_inhibit_3pid_errors

        self._access_tokens_id_gen = IdGenerator(db_conn, "access_tokens", "id")
+       self._refresh_tokens_id_gen = IdGenerator(db_conn, "refresh_tokens", "id")

    async def add_access_token_to_user(
        self,
@@ -1271,14 +1412,18 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
        device_id: Optional[str],
        valid_until_ms: Optional[int],
        puppets_user_id: Optional[str] = None,
+       refresh_token_id: Optional[int] = None,
    ) -> int:
        """Adds an access token for the given user.

        Args:
            user_id: The user ID.
            token: The new access token to add.
-           device_id: ID of the device to associate with the access token
+           device_id: ID of the device to associate with the access token.
            valid_until_ms: when the token is valid until. None for no expiry.
+           puppets_user_id
+           refresh_token_id: ID of the refresh token generated alongside this
+               access token.
        Raises:
            StoreError if there was a problem adding this.
        Returns:
@@ -1297,12 +1442,47 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
                "valid_until_ms": valid_until_ms,
                "puppets_user_id": puppets_user_id,
                "last_validated": now,
+               "refresh_token_id": refresh_token_id,
+               "used": False,
            },
            desc="add_access_token_to_user",
        )

        return next_id
async def add_refresh_token_to_user(
self,
user_id: str,
token: str,
device_id: Optional[str],
) -> int:
"""Adds a refresh token for the given user.
Args:
user_id: The user ID.
token: The new refresh token to add.
device_id: ID of the device to associate with the refresh token.
Raises:
StoreError if there was a problem adding this.
Returns:
The token ID
"""
next_id = self._refresh_tokens_id_gen.get_next()
await self.db_pool.simple_insert(
"refresh_tokens",
{
"id": next_id,
"user_id": user_id,
"device_id": device_id,
"token": token,
"next_token_id": None,
},
desc="add_refresh_token_to_user",
)
return next_id
    def _set_device_for_access_token_txn(self, txn, token: str, device_id: str) -> str:
        old_device_id = self.db_pool.simple_select_one_onecol_txn(
            txn, "access_tokens", {"token": token}, "device_id"
@@ -1545,7 +1725,7 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
        device_id: Optional[str] = None,
    ) -> List[Tuple[str, int, Optional[str]]]:
        """
-       Invalidate access tokens belonging to a user
+       Invalidate access and refresh tokens belonging to a user

        Args:
            user_id: ID of user the tokens belong to
@@ -1565,7 +1745,13 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
            items = keyvalues.items()
            where_clause = " AND ".join(k + " = ?" for k, _ in items)
            values = [v for _, v in items]  # type: List[Union[str, int]]
+           # Conveniently, refresh_tokens and access_tokens both use the user_id and device_id fields. Only caveat
+           # is the `except_token_id` param that is tricky to get right, so for now we're just using the same where
+           # clause and values before we handle that. This seems to be only used in the "set password" handler.
+           refresh_where_clause = where_clause
+           refresh_values = values.copy()
+
            if except_token_id:
+               # TODO: support that for refresh tokens
                where_clause += " AND id != ?"
                values.append(except_token_id)
@@ -1583,6 +1769,11 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):

            txn.execute("DELETE FROM access_tokens WHERE %s" % where_clause, values)

+           txn.execute(
+               "DELETE FROM refresh_tokens WHERE %s" % refresh_where_clause,
+               refresh_values,
+           )
+
            return tokens_and_devices

        return await self.db_pool.runInteraction("user_delete_access_tokens", f)
@@ -1599,6 +1790,14 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
        await self.db_pool.runInteraction("delete_access_token", f)
async def delete_refresh_token(self, refresh_token: str) -> None:
def f(txn):
self.db_pool.simple_delete_one_txn(
txn, table="refresh_tokens", keyvalues={"token": refresh_token}
)
await self.db_pool.runInteraction("delete_refresh_token", f)
    async def add_user_pending_deactivation(self, user_id: str) -> None:
        """
        Adds a user to the table of users who need to be parted from all the rooms they're
View file
@@ -49,6 +49,12 @@ class BaseDatabaseEngine(Generic[ConnectionType], metaclass=abc.ABCMeta):
        """
        ...

+   @property
+   @abc.abstractmethod
+   def supports_returning(self) -> bool:
+       """Do we support the `RETURNING` clause in insert/update/delete?"""
+       ...
+
    @abc.abstractmethod
    def check_database(
        self, db_conn: ConnectionType, allow_outdated_version: bool = False
View file
@@ -133,6 +133,11 @@ class PostgresEngine(BaseDatabaseEngine):
        """Do we support using `a = ANY(?)` and passing a list"""
        return True

+   @property
+   def supports_returning(self) -> bool:
+       """Do we support the `RETURNING` clause in insert/update/delete?"""
+       return True
+
    def is_deadlock(self, error):
        if isinstance(error, self.module.DatabaseError):
            # https://www.postgresql.org/docs/current/static/errcodes-appendix.html
View file
@@ -60,6 +60,11 @@ class Sqlite3Engine(BaseDatabaseEngine["sqlite3.Connection"]):
        """Do we support using `a = ANY(?)` and passing a list"""
        return False

+   @property
+   def supports_returning(self) -> bool:
+       """Do we support the `RETURNING` clause in insert/update/delete?"""
+       return self.module.sqlite_version_info >= (3, 35, 0)
+
    def check_database(self, db_conn, allow_outdated_version: bool = False):
        if not allow_outdated_version:
            version = self.module.sqlite_version_info
View file
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-SCHEMA_VERSION = 59
+SCHEMA_VERSION = 60
"""Represents the expectations made by the codebase about the database schema

This should be incremented whenever the codebase changes its requirements on the
View file
@@ -0,0 +1,34 @@
/* Copyright 2021 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- Holds MSC2918 refresh tokens
CREATE TABLE refresh_tokens (
id BIGINT PRIMARY KEY,
user_id TEXT NOT NULL,
device_id TEXT NOT NULL,
token TEXT NOT NULL,
-- When consumed, a new refresh token is generated, which is tracked by
-- this foreign key
next_token_id BIGINT REFERENCES refresh_tokens (id) ON DELETE CASCADE,
UNIQUE(token)
);
-- Add a reference to the refresh token generated alongside each access token
ALTER TABLE "access_tokens"
ADD COLUMN refresh_token_id BIGINT REFERENCES refresh_tokens (id) ON DELETE CASCADE;
-- Add a flag whether the token was already used or not
ALTER TABLE "access_tokens"
ADD COLUMN used BOOLEAN;
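The ON DELETE CASCADE references above mean that deleting a refresh token tears down its successor chain and any access tokens hanging off it in one statement. A quick sqlite3 sketch of that behaviour (SQLite needs foreign keys switched on explicitly; tables abbreviated for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE refresh_tokens (id INTEGER PRIMARY KEY, token TEXT)")
conn.execute(
    "CREATE TABLE access_tokens (id INTEGER PRIMARY KEY, refresh_token_id INTEGER "
    "REFERENCES refresh_tokens (id) ON DELETE CASCADE)"
)
conn.execute("INSERT INTO refresh_tokens VALUES (1, 'rt1')")
conn.execute("INSERT INTO access_tokens VALUES (10, 1)")

conn.execute("DELETE FROM refresh_tokens WHERE id = 1")
print(conn.execute("SELECT count(*) FROM access_tokens").fetchone())  # (0,)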
View file
@@ -0,0 +1,45 @@
/* Copyright 2021 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- This migration handles the process of changing the type of `stream_ordering` to
-- a BIGINT.
--
-- Note that this is only a problem on postgres as sqlite only has one "integer" type
-- which can cope with values up to 2^63.
-- First add a new column to contain the bigger stream_ordering
ALTER TABLE events ADD COLUMN stream_ordering2 BIGINT;
-- Create a rule which will populate it for new rows.
CREATE OR REPLACE RULE "populate_stream_ordering2" AS
ON INSERT TO events
DO UPDATE events SET stream_ordering2=NEW.stream_ordering WHERE stream_ordering=NEW.stream_ordering;
-- Start a bg process to populate it for old events
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(6001, 'populate_stream_ordering2', '{}');
-- ... and some more to build indexes on it. These aren't really interdependent
-- but the background_updates manager can only handle a single dependency per update.
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
(6001, 'index_stream_ordering2', '{}', 'populate_stream_ordering2'),
(6001, 'index_stream_ordering2_room_order', '{}', 'index_stream_ordering2'),
(6001, 'index_stream_ordering2_contains_url', '{}', 'index_stream_ordering2_room_order'),
(6001, 'index_stream_ordering2_room_stream', '{}', 'index_stream_ordering2_contains_url'),
(6001, 'index_stream_ordering2_ts', '{}', 'index_stream_ordering2_room_stream');
-- ... and another to do the switcheroo
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
(6001, 'replace_stream_ordering_column', '{}', 'index_stream_ordering2_ts');
View file
@@ -0,0 +1,30 @@
/* Copyright 2021 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- This migration is closely related to '01recreate_stream_ordering.sql.postgres'.
--
-- It updates the other tables which use an INTEGER to refer to a stream ordering.
-- These tables are all small enough that a re-create is tractable.
ALTER TABLE pushers ALTER COLUMN last_stream_ordering SET DATA TYPE BIGINT;
ALTER TABLE federation_stream_position ALTER COLUMN stream_id SET DATA TYPE BIGINT;
-- these aren't actually event stream orderings, but they are numbers where 2 billion
-- is a bit limiting, application_services_state is tiny, and I don't want to ever have
-- to do this again.
ALTER TABLE application_services_state ALTER COLUMN last_txn SET DATA TYPE BIGINT;
ALTER TABLE application_services_state ALTER COLUMN read_receipt_stream_id SET DATA TYPE BIGINT;
ALTER TABLE application_services_state ALTER COLUMN presence_stream_id SET DATA TYPE BIGINT;
View file
@@ -12,9 +12,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.

+import logging
import threading
+import weakref
from functools import wraps
from typing import (
+   TYPE_CHECKING,
    Any,
    Callable,
    Collection,
@@ -31,10 +34,19 @@ from typing import (

from typing_extensions import Literal

+from twisted.internet import reactor
+
from synapse.config import cache as cache_config
-from synapse.util import caches
+from synapse.metrics.background_process_metrics import wrap_as_background_process
+from synapse.util import Clock, caches
from synapse.util.caches import CacheMetric, register_cache
from synapse.util.caches.treecache import TreeCache, iterate_tree_cache_entry
+from synapse.util.linked_list import ListNode
+
+if TYPE_CHECKING:
+   from synapse.server import HomeServer
+
+logger = logging.getLogger(__name__)

try:
    from pympler.asizeof import Asizer
@@ -82,19 +94,126 @@ def enumerate_leaves(node, depth):
            yield m
P = TypeVar("P")
class _TimedListNode(ListNode[P]):
"""A `ListNode` that tracks last access time."""
__slots__ = ["last_access_ts_secs"]
def update_last_access(self, clock: Clock):
self.last_access_ts_secs = int(clock.time())
# Whether to insert new cache entries to the global list. We only add to it if
# time based eviction is enabled.
USE_GLOBAL_LIST = False
# A linked list of all cache entries, allowing efficient time based eviction.
GLOBAL_ROOT = ListNode["_Node"].create_root_node()
@wrap_as_background_process("LruCache._expire_old_entries")
async def _expire_old_entries(clock: Clock, expiry_seconds: int):
"""Walks the global cache list to find cache entries that haven't been
accessed in the given number of seconds.
"""
now = int(clock.time())
node = GLOBAL_ROOT.prev_node
assert node is not None
i = 0
logger.debug("Searching for stale caches")
while node is not GLOBAL_ROOT:
# Only the root node isn't a `_TimedListNode`.
assert isinstance(node, _TimedListNode)
if node.last_access_ts_secs > now - expiry_seconds:
break
cache_entry = node.get_cache_entry()
next_node = node.prev_node
# The node should always have a reference to a cache entry and a valid
# `prev_node`, as we only drop them when we remove the node from the
# list.
assert next_node is not None
assert cache_entry is not None
cache_entry.drop_from_cache()
# If we do lots of work at once we yield to allow other stuff to happen.
if (i + 1) % 10000 == 0:
logger.debug("Waiting during drop")
await clock.sleep(0)
logger.debug("Waking during drop")
node = next_node
# If we've yielded then our current node may have been evicted, so we
# need to check that its still valid.
if node.prev_node is None:
break
i += 1
logger.info("Dropped %d items from caches", i)
def setup_expire_lru_cache_entries(hs: "HomeServer"):
"""Start a background job that expires all cache entries if they have not
been accessed for the given number of seconds.
"""
if not hs.config.caches.expiry_time_msec:
return
logger.info(
"Expiring LRU caches after %d seconds", hs.config.caches.expiry_time_msec / 1000
)
global USE_GLOBAL_LIST
USE_GLOBAL_LIST = True
clock = hs.get_clock()
clock.looping_call(
_expire_old_entries, 30 * 1000, clock, hs.config.caches.expiry_time_msec / 1000
)
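The expiry job above walks the global list from the oldest entry and stops at the first sufficiently recent one; since nodes move to the front on every access, last-access times only get newer as the walk proceeds, so fresh entries are never scanned. The same idea in miniature, with an OrderedDict standing in for the linked list (single-threaded toy, not the real data structure):

import time
from collections import OrderedDict

cache = OrderedDict()  # key -> (last_access_ts, value); newest entries at the end

def touch(key, value):
    cache[key] = (time.time(), value)
    cache.move_to_end(key, last=True)

def expire(max_age_secs):
    cutoff = time.time() - max_age_secs
    while cache:
        key, (last_access, _) = next(iter(cache.items()))  # oldest entry first
        if last_access > cutoff:
            break  # everything further on is newer; stop scanning
        cache.popitem(last=False)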
class _Node:
-   __slots__ = ["prev_node", "next_node", "key", "value", "callbacks", "memory"]
+   __slots__ = [
+       "_list_node",
+       "_global_list_node",
+       "_cache",
+       "key",
+       "value",
+       "callbacks",
+       "memory",
+   ]

    def __init__(
        self,
-       prev_node,
-       next_node,
+       root: "ListNode[_Node]",
        key,
        value,
+       cache: "weakref.ReferenceType[LruCache]",
+       clock: Clock,
        callbacks: Collection[Callable[[], None]] = (),
    ):
-       self.prev_node = prev_node
-       self.next_node = next_node
+       self._list_node = ListNode.insert_after(self, root)
+       self._global_list_node = None
+       if USE_GLOBAL_LIST:
+           self._global_list_node = _TimedListNode.insert_after(self, GLOBAL_ROOT)
+           self._global_list_node.update_last_access(clock)
+
+       # We store a weak reference to the cache object so that this _Node can
+       # remove itself from the cache. If the cache is dropped we ensure we
+       # remove our entries in the lists.
+       self._cache = cache
+
        self.key = key
        self.value = value
@@ -116,11 +235,16 @@ class _Node:
        self.memory = (
            _get_size_of(key)
            + _get_size_of(value)
+           + _get_size_of(self._list_node, recurse=False)
            + _get_size_of(self.callbacks, recurse=False)
            + _get_size_of(self, recurse=False)
        )
        self.memory += _get_size_of(self.memory, recurse=False)

+       if self._global_list_node:
+           self.memory += _get_size_of(self._global_list_node, recurse=False)
+           self.memory += _get_size_of(self._global_list_node.last_access_ts_secs)
+
    def add_callbacks(self, callbacks: Collection[Callable[[], None]]) -> None:
        """Add to stored list of callbacks, removing duplicates."""
@@ -147,6 +271,32 @@ class _Node:
        self.callbacks = None
def drop_from_cache(self) -> None:
"""Drop this node from the cache.
Ensures that the entry gets removed from the cache and that we get
removed from all lists.
"""
cache = self._cache()
if not cache or not cache.pop(self.key, None):
# `cache.pop` should call `drop_from_lists()`, unless this Node had
# already been removed from the cache.
self.drop_from_lists()
def drop_from_lists(self) -> None:
"""Remove this node from the cache lists."""
self._list_node.remove_from_list()
if self._global_list_node:
self._global_list_node.remove_from_list()
def move_to_front(self, clock: Clock, cache_list_root: ListNode) -> None:
"""Moves this node to the front of all the lists its in."""
self._list_node.move_after(cache_list_root)
if self._global_list_node:
self._global_list_node.move_after(GLOBAL_ROOT)
self._global_list_node.update_last_access(clock)
class LruCache(Generic[KT, VT]):
    """
@@ -163,6 +313,7 @@ class LruCache(Generic[KT, VT]):
        size_callback: Optional[Callable] = None,
        metrics_collection_callback: Optional[Callable[[], None]] = None,
        apply_cache_factor_from_config: bool = True,
+       clock: Optional[Clock] = None,
    ):
        """
        Args:
@@ -188,6 +339,13 @@ class LruCache(Generic[KT, VT]):
            apply_cache_factor_from_config (bool): If true, `max_size` will be
                multiplied by a cache factor derived from the homeserver config
        """
# Default `clock` to something sensible. Note that we rename it to
# `real_clock` so that mypy doesn't think its still `Optional`.
if clock is None:
real_clock = Clock(reactor)
else:
real_clock = clock
        cache = cache_type()
        self.cache = cache  # Used for introspection.
        self.apply_cache_factor_from_config = apply_cache_factor_from_config
@@ -219,17 +377,31 @@ class LruCache(Generic[KT, VT]):
        # this is exposed for access from outside this class
        self.metrics = metrics

-       list_root = _Node(None, None, None, None)
-       list_root.next_node = list_root
-       list_root.prev_node = list_root
+       # We create a single weakref to self here so that we don't need to keep
+       # creating more each time we create a `_Node`.
+       weak_ref_to_self = weakref.ref(self)
+
+       list_root = ListNode[_Node].create_root_node()

        lock = threading.Lock()

        def evict():
            while cache_len() > self.max_size:
+               # Get the last node in the list (i.e. the oldest node).
                todelete = list_root.prev_node
-               evicted_len = delete_node(todelete)
-               cache.pop(todelete.key, None)
+
+               # The list root should always have a valid `prev_node` if the
+               # cache is not empty.
+               assert todelete is not None
+
+               # The node should always have a reference to a cache entry, as
+               # we only drop the cache entry when we remove the node from the
+               # list.
+               node = todelete.get_cache_entry()
+               assert node is not None
+
+               evicted_len = delete_node(node)
+               cache.pop(node.key, None)
                if metrics:
                    metrics.inc_evictions(evicted_len)
@@ -255,11 +427,7 @@ class LruCache(Generic[KT, VT]):
        self.len = synchronized(cache_len)

        def add_node(key, value, callbacks: Collection[Callable[[], None]] = ()):
-           prev_node = list_root
-           next_node = prev_node.next_node
-           node = _Node(prev_node, next_node, key, value, callbacks)
-           prev_node.next_node = node
-           next_node.prev_node = node
+           node = _Node(list_root, key, value, weak_ref_to_self, real_clock, callbacks)
            cache[key] = node

            if size_callback:
@@ -268,23 +436,11 @@ class LruCache(Generic[KT, VT]):
            if caches.TRACK_MEMORY_USAGE and metrics:
                metrics.inc_memory_usage(node.memory)

-       def move_node_to_front(node):
-           prev_node = node.prev_node
-           next_node = node.next_node
-           prev_node.next_node = next_node
-           next_node.prev_node = prev_node
-           prev_node = list_root
-           next_node = prev_node.next_node
-           node.prev_node = prev_node
-           node.next_node = next_node
-           prev_node.next_node = node
-           next_node.prev_node = node
+       def move_node_to_front(node: _Node):
+           node.move_to_front(real_clock, list_root)

-       def delete_node(node):
-           prev_node = node.prev_node
-           next_node = node.next_node
-           prev_node.next_node = next_node
-           next_node.prev_node = prev_node
+       def delete_node(node: _Node) -> int:
+           node.drop_from_lists()

            deleted_len = 1
            if size_callback:
@@ -411,10 +567,13 @@ class LruCache(Generic[KT, VT]):
        @synchronized
        def cache_clear() -> None:
-           list_root.next_node = list_root
-           list_root.prev_node = list_root
            for node in cache.values():
                node.run_and_clear_callbacks()
+               node.drop_from_lists()
+
+           assert list_root.next_node == list_root
+           assert list_root.prev_node == list_root
+
            cache.clear()
            if size_callback:
                cached_cache_len[0] = 0
@@ -484,3 +643,11 @@ class LruCache(Generic[KT, VT]):
            self._on_resize()
            return True
        return False
def __del__(self) -> None:
# We're about to be deleted, so we make sure to clear up all the nodes
# and run callbacks, etc.
#
# This happens e.g. in the sync code where we have an expiring cache of
# lru caches.
self.clear()
synapse/util/linked_list.py Normal file (150 lines)
View file
@@ -0,0 +1,150 @@
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A circular doubly linked list implementation.
"""
import threading
from typing import Generic, Optional, Type, TypeVar
P = TypeVar("P")
LN = TypeVar("LN", bound="ListNode")
class ListNode(Generic[P]):
"""A node in a circular doubly linked list, with an (optional) reference to
a cache entry.
The reference should only be `None` for the root node or if the node has
been removed from the list.
"""
# A lock to protect mutating the list prev/next pointers.
_LOCK = threading.Lock()
# We don't use attrs here as in py3.6 you can't have `attr.s(slots=True)`
# and inherit from `Generic` for some reason
__slots__ = [
"cache_entry",
"prev_node",
"next_node",
]
def __init__(self, cache_entry: Optional[P] = None) -> None:
self.cache_entry = cache_entry
self.prev_node: Optional[ListNode[P]] = None
self.next_node: Optional[ListNode[P]] = None
@classmethod
def create_root_node(cls: Type["ListNode[P]"]) -> "ListNode[P]":
"""Create a new linked list by creating a "root" node, which is a node
that has prev_node/next_node pointing to itself and no associated cache
entry.
"""
root = cls()
root.prev_node = root
root.next_node = root
return root
@classmethod
def insert_after(
cls: Type[LN],
cache_entry: P,
node: "ListNode[P]",
) -> LN:
"""Create a new list node that is placed after the given node.
Args:
cache_entry: The associated cache entry.
node: The existing node in the list to insert the new entry after.
"""
new_node = cls(cache_entry)
with cls._LOCK:
new_node._refs_insert_after(node)
return new_node
def remove_from_list(self):
"""Remove this node from the list."""
with self._LOCK:
self._refs_remove_node_from_list()
# We drop the reference to the cache entry to break the reference cycle
# between the list node and cache entry, allowing the two to be dropped
# immediately rather than at the next GC.
self.cache_entry = None
def move_after(self, node: "ListNode"):
"""Move this node from its current location in the list to after the
given node.
"""
with self._LOCK:
# We assert that both this node and the target node are still "alive".
assert self.prev_node
assert self.next_node
assert node.prev_node
assert node.next_node
assert self is not node
# Remove self from the list
self._refs_remove_node_from_list()
# Insert self back into the list, after target node
self._refs_insert_after(node)
def _refs_remove_node_from_list(self):
"""Internal method to *just* remove the node from the list, without
e.g. clearing out the cache entry.
"""
if self.prev_node is None or self.next_node is None:
# We've already been removed from the list.
return
prev_node = self.prev_node
next_node = self.next_node
prev_node.next_node = next_node
next_node.prev_node = prev_node
# We set these to None so that we don't get circular references,
# allowing us to be dropped without having to go via the GC.
self.prev_node = None
self.next_node = None
def _refs_insert_after(self, node: "ListNode"):
"""Internal method to insert the node after the given node."""
# This method should only be called when we're not already in the list.
assert self.prev_node is None
assert self.next_node is None
# We expect the given node to be in the list and thus have valid
# prev/next refs.
assert node.next_node
assert node.prev_node
prev_node = node
next_node = node.next_node
self.prev_node = prev_node
self.next_node = next_node
prev_node.next_node = self
next_node.prev_node = self
def get_cache_entry(self) -> Optional[P]:
"""Get the cache entry, returns None if this is the root node (i.e.
cache_entry is None) or if the entry has been dropped.
"""
return self.cache_entry
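A short usage sketch of the ListNode API defined above, in the root-plus-insert shape the LruCache relies on (hypothetical standalone use, not part of this commit):

from synapse.util.linked_list import ListNode

root = ListNode[str].create_root_node()
first = ListNode.insert_after("first entry", root)
second = ListNode.insert_after("second entry", root)  # newest sits just after root
assert root.next_node is second and second.next_node is first

second.move_after(first)   # demote it to sit behind `first`
first.remove_from_list()   # unlink; also drops the cache entry reference
assert first.get_cache_entry() is None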
View file
@@ -45,5 +45,4 @@ Peeked rooms only turn up in the sync for the device who peeked them

# Blacklisted due to changes made in #10272
Outbound federation will ignore a missing event with bad JSON for room version 6
-Backfilled events whose prev_events are in a different room do not allow cross-room back-pagination
Federation rejects inbound events where the prev_events cannot be found
View file
@@ -58,6 +58,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            user_id=self.test_user, token_id=5, device_id="device"
        )
        self.store.get_user_by_access_token = simple_async_mock(user_info)
+       self.store.mark_access_token_as_used = simple_async_mock(None)

        request = Mock(args={})
        request.args[b"access_token"] = [self.test_token]
View file
@@ -205,9 +205,7 @@ class FederationKnockingTestCase(
        # Have this homeserver skip event auth checks. This is necessary due to
        # event auth checks ensuring that events were signed by the sender's homeserver.

-       async def _check_event_auth(
-           origin, event, context, state, auth_events, backfilled
-       ):
+       async def _check_event_auth(origin, event, context, *args, **kwargs):
            return context

        homeserver.get_federation_handler()._check_event_auth = _check_event_auth
View file
@@ -257,7 +257,7 @@ class DehydrationTestCase(unittest.HomeserverTestCase):
        self.assertEqual(device_data, {"device_data": {"foo": "bar"}})

        # Create a new login for the user and dehydrated the device
-       device_id, access_token = self.get_success(
+       device_id, access_token, _expiration_time, _refresh_token = self.get_success(
            self.registration.register_device(
                user_id=user_id,
                device_id=None,
View file
@@ -251,7 +251,7 @@ class FederationTestCase(unittest.HomeserverTestCase):
        join_event.signatures[other_server] = {"x": "y"}
        with LoggingContext("send_join"):
            d = run_in_background(
-               self.handler.on_send_join_request, other_server, join_event
+               self.handler.on_send_membership_event, other_server, join_event
            )
        self.get_success(d)

@@ -734,7 +734,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
        self.store = hs.get_datastore()
        self.state = hs.get_state_handler()
-        self.auth = hs.get_auth()
+        self._event_auth_handler = hs.get_event_auth_handler()

        # We don't actually check signatures in tests, so let's just create a
        # random key to use.
@@ -846,7 +846,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
        builder = EventBuilder(
            state=self.state,
-            auth=self.auth,
+            event_auth_handler=self._event_auth_handler,
            store=self.store,
            clock=self.clock,
            hostname=hostname,

@@ -19,7 +19,7 @@ from synapse.api.constants import UserTypes
from synapse.api.errors import Codes, ResourceLimitError, SynapseError
from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.spam_checker_api import RegistrationBehaviour
-from synapse.types import RoomAlias, UserID, create_requester
+from synapse.types import RoomAlias, RoomID, UserID, create_requester

from tests.test_utils import make_awaitable
from tests.unittest import override_config
@@ -719,3 +719,50 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
        )
        return user_id, token
+
+
+class RemoteAutoJoinTestCase(unittest.HomeserverTestCase):
+    """Tests auto-join on remote rooms."""
+
+    def make_homeserver(self, reactor, clock):
+        self.room_id = "!roomid:remotetest"
+
+        async def update_membership(*args, **kwargs):
+            pass
+
+        async def lookup_room_alias(*args, **kwargs):
+            return RoomID.from_string(self.room_id), ["remotetest"]
+
+        self.room_member_handler = Mock(spec=["update_membership", "lookup_room_alias"])
+        self.room_member_handler.update_membership.side_effect = update_membership
+        self.room_member_handler.lookup_room_alias.side_effect = lookup_room_alias
+
+        hs = self.setup_test_homeserver(room_member_handler=self.room_member_handler)
+        return hs
+
+    def prepare(self, reactor, clock, hs):
+        self.handler = self.hs.get_registration_handler()
+        self.store = self.hs.get_datastore()
+
+    @override_config({"auto_join_rooms": ["#room:remotetest"]})
+    def test_auto_create_auto_join_remote_room(self):
+        """Tests that we don't attempt to create remote rooms, and that we don't attempt
+        to invite ourselves to rooms we're not in."""
+
+        # Register a first user; this should call _create_and_join_rooms
+        self.get_success(self.handler.register_user(localpart="jeff"))
+
+        _, kwargs = self.room_member_handler.update_membership.call_args
+
+        self.assertEqual(kwargs["room_id"], self.room_id)
+        self.assertEqual(kwargs["action"], "join")
+        self.assertEqual(kwargs["remote_room_hosts"], ["remotetest"])
+
+        # Register a second user; this should call _join_rooms
+        self.get_success(self.handler.register_user(localpart="jeff2"))
+
+        _, kwargs = self.room_member_handler.update_membership.call_args
+
+        self.assertEqual(kwargs["room_id"], self.room_id)
+        self.assertEqual(kwargs["action"], "join")
+        self.assertEqual(kwargs["remote_room_hosts"], ["remotetest"])

@@ -14,6 +14,7 @@
from typing import Any, Iterable, Optional, Tuple
from unittest import mock

+from synapse.api.constants import EventContentFields, RoomTypes
from synapse.api.errors import AuthError
from synapse.handlers.space_summary import _child_events_comparison_key
from synapse.rest import admin
@@ -97,9 +98,21 @@ class SpaceSummaryTestCase(unittest.HomeserverTestCase):
        self.hs = hs
        self.handler = self.hs.get_space_summary_handler()

+        # Create a user.
        self.user = self.register_user("user", "pass")
        self.token = self.login("user", "pass")

+        # Create a space and a child room.
+        self.space = self.helper.create_room_as(
+            self.user,
+            tok=self.token,
+            extra_content={
+                "creation_content": {EventContentFields.ROOM_TYPE: RoomTypes.SPACE}
+            },
+        )
+        self.room = self.helper.create_room_as(self.user, tok=self.token)
+        self._add_child(self.space, self.room, self.token)
+
    def _add_child(self, space_id: str, room_id: str, token: str) -> None:
        """Add a child room to a space."""
        self.helper.send_state(
@@ -128,43 +141,32 @@ class SpaceSummaryTestCase(unittest.HomeserverTestCase):
    def test_simple_space(self):
        """Test a simple space with a single room."""
-        space = self.helper.create_room_as(self.user, tok=self.token)
-        room = self.helper.create_room_as(self.user, tok=self.token)
-        self._add_child(space, room, self.token)
-
-        result = self.get_success(self.handler.get_space_summary(self.user, space))
+        result = self.get_success(self.handler.get_space_summary(self.user, self.space))
        # The result should have the space and the room in it, along with a link
        # from space -> room.
-        self._assert_rooms(result, [space, room])
-        self._assert_events(result, [(space, room)])
+        self._assert_rooms(result, [self.space, self.room])
+        self._assert_events(result, [(self.space, self.room)])

    def test_visibility(self):
        """A user not in a space cannot inspect it."""
-        space = self.helper.create_room_as(self.user, tok=self.token)
-        room = self.helper.create_room_as(self.user, tok=self.token)
-        self._add_child(space, room, self.token)
-
        user2 = self.register_user("user2", "pass")
        token2 = self.login("user2", "pass")

        # The user cannot see the space.
-        self.get_failure(self.handler.get_space_summary(user2, space), AuthError)
+        self.get_failure(self.handler.get_space_summary(user2, self.space), AuthError)

        # Joining the room causes it to be visible.
-        self.helper.join(space, user2, tok=token2)
-        result = self.get_success(self.handler.get_space_summary(user2, space))
+        self.helper.join(self.space, user2, tok=token2)
+        result = self.get_success(self.handler.get_space_summary(user2, self.space))

        # The result should only have the space, but includes the link to the room.
-        self._assert_rooms(result, [space])
-        self._assert_events(result, [(space, room)])
+        self._assert_rooms(result, [self.space])
+        self._assert_events(result, [(self.space, self.room)])

    def test_world_readable(self):
        """A world-readable room is visible to everyone."""
-        space = self.helper.create_room_as(self.user, tok=self.token)
-        room = self.helper.create_room_as(self.user, tok=self.token)
-        self._add_child(space, room, self.token)
-
        self.helper.send_state(
-            space,
+            self.space,
            event_type="m.room.history_visibility",
            body={"history_visibility": "world_readable"},
            tok=self.token,
@@ -173,6 +175,6 @@ class SpaceSummaryTestCase(unittest.HomeserverTestCase):
        user2 = self.register_user("user2", "pass")

        # The space should be visible, as well as the link to the room.
-        result = self.get_success(self.handler.get_space_summary(user2, space))
-        self._assert_rooms(result, [space])
-        self._assert_events(result, [(space, room)])
+        result = self.get_success(self.handler.get_space_summary(user2, self.space))
+        self._assert_rooms(result, [self.space])
+        self._assert_events(result, [(self.space, self.room)])
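The setup above marks a room as a space using the room-type mechanism from MSC1772: the m.room.create event's content carries a type field. As a rough sketch (assuming `EventContentFields.ROOM_TYPE` resolves to "type" and `RoomTypes.SPACE` to "m.space", which is what those constants are defined as), the createRoom body the test helper ends up sending looks like:

# Illustrative request body for POST /_matrix/client/r0/createRoom;
# the test helper merges `extra_content` into its default body.
create_space_body = {
    "visibility": "private",
    "creation_content": {
        "type": "m.space",  # RoomTypes.SPACE, keyed by EventContentFields.ROOM_TYPE
    },
}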

@@ -228,7 +228,7 @@ class FederationSenderTestCase(BaseMultiWorkerStreamTestCase):
            builder.build(prev_event_ids=prev_event_ids, auth_event_ids=None)
        )

-        self.get_success(federation.on_send_join_request(remote_server, join_event))
+        self.get_success(federation.on_send_membership_event(remote_server, join_event))
        self.replicate()

        return room

@@ -939,7 +939,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
        """
        channel = self.make_request("POST", self.url, b"{}")

-        self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(401, channel.code, msg=channel.json_body)
        self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])

    def test_requester_is_not_admin(self):
@@ -950,7 +950,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
        channel = self.make_request("POST", url, access_token=self.other_user_token)

-        self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(403, channel.code, msg=channel.json_body)
        self.assertEqual("You are not a server admin", channel.json_body["error"])

        channel = self.make_request(
@@ -960,7 +960,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
            content=b"{}",
        )

-        self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(403, channel.code, msg=channel.json_body)
        self.assertEqual("You are not a server admin", channel.json_body["error"])

    def test_user_does_not_exist(self):
@@ -990,7 +990,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(400, channel.code, msg=channel.json_body)
        self.assertEqual(Codes.BAD_JSON, channel.json_body["errcode"])

    def test_user_is_not_local(self):
def test_user_is_not_local(self): def test_user_is_not_local(self):
@ -1006,7 +1006,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
def test_deactivate_user_erase_true(self): def test_deactivate_user_erase_true(self):
""" """
Test deactivating an user and set `erase` to `true` Test deactivating a user and set `erase` to `true`
""" """
# Get user # Get user
@ -1016,24 +1016,22 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok, access_token=self.admin_user_tok,
) )
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"]) self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(False, channel.json_body["deactivated"]) self.assertEqual(False, channel.json_body["deactivated"])
self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"]) self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"])
self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"]) self.assertEqual("mxc://servername/mediaid", channel.json_body["avatar_url"])
self.assertEqual("User1", channel.json_body["displayname"]) self.assertEqual("User1", channel.json_body["displayname"])
# Deactivate user # Deactivate and erase user
body = json.dumps({"erase": True})
channel = self.make_request( channel = self.make_request(
"POST", "POST",
self.url, self.url,
access_token=self.admin_user_tok, access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"), content={"erase": True},
) )
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) self.assertEqual(200, channel.code, msg=channel.json_body)
# Get user # Get user
channel = self.make_request( channel = self.make_request(
@ -1042,7 +1040,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok, access_token=self.admin_user_tok,
) )
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"]) self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(True, channel.json_body["deactivated"]) self.assertEqual(True, channel.json_body["deactivated"])
self.assertEqual(0, len(channel.json_body["threepids"])) self.assertEqual(0, len(channel.json_body["threepids"]))
@ -1053,7 +1051,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
def test_deactivate_user_erase_false(self): def test_deactivate_user_erase_false(self):
""" """
Test deactivating an user and set `erase` to `false` Test deactivating a user and set `erase` to `false`
""" """
# Get user # Get user
@ -1063,7 +1061,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok, access_token=self.admin_user_tok,
) )
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"]) self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(False, channel.json_body["deactivated"]) self.assertEqual(False, channel.json_body["deactivated"])
self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"]) self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"])
@ -1071,13 +1069,11 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
self.assertEqual("User1", channel.json_body["displayname"]) self.assertEqual("User1", channel.json_body["displayname"])
# Deactivate user # Deactivate user
body = json.dumps({"erase": False})
channel = self.make_request( channel = self.make_request(
"POST", "POST",
self.url, self.url,
access_token=self.admin_user_tok, access_token=self.admin_user_tok,
content=body.encode(encoding="utf_8"), content={"erase": False},
) )
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
@ -1089,7 +1085,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
access_token=self.admin_user_tok, access_token=self.admin_user_tok,
) )
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"]) self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual("@user:test", channel.json_body["name"]) self.assertEqual("@user:test", channel.json_body["name"])
self.assertEqual(True, channel.json_body["deactivated"]) self.assertEqual(True, channel.json_body["deactivated"])
self.assertEqual(0, len(channel.json_body["threepids"])) self.assertEqual(0, len(channel.json_body["threepids"]))
@ -1098,6 +1094,60 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
self._is_erased("@user:test", False) self._is_erased("@user:test", False)
+    def test_deactivate_user_erase_true_no_profile(self):
+        """
+        Test deactivating a user and set `erase` to `true`
+        if user has no profile information (stored in the database table `profiles`).
+        """
+
+        # Users normally have an entry in `profiles`, but occasionally they are created without one.
+        # To test deactivation for users without a profile, we delete the profile information for our user.
+        self.get_success(
+            self.store.db_pool.simple_delete_one(
+                table="profiles", keyvalues={"user_id": "user"}
+            )
+        )
+
+        # Get user
+        channel = self.make_request(
+            "GET",
+            self.url_other_user,
+            access_token=self.admin_user_tok,
+        )
+
+        self.assertEqual(200, channel.code, msg=channel.json_body)
+        self.assertEqual("@user:test", channel.json_body["name"])
+        self.assertEqual(False, channel.json_body["deactivated"])
+        self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"])
+        self.assertIsNone(channel.json_body["avatar_url"])
+        self.assertIsNone(channel.json_body["displayname"])
+
+        # Deactivate and erase user
+        channel = self.make_request(
+            "POST",
+            self.url,
+            access_token=self.admin_user_tok,
+            content={"erase": True},
+        )
+
+        self.assertEqual(200, channel.code, msg=channel.json_body)
+
+        # Get user
+        channel = self.make_request(
+            "GET",
+            self.url_other_user,
+            access_token=self.admin_user_tok,
+        )
+
+        self.assertEqual(200, channel.code, msg=channel.json_body)
+        self.assertEqual("@user:test", channel.json_body["name"])
+        self.assertEqual(True, channel.json_body["deactivated"])
+        self.assertEqual(0, len(channel.json_body["threepids"]))
+        self.assertIsNone(channel.json_body["avatar_url"])
+        self.assertIsNone(channel.json_body["displayname"])
+
+        self._is_erased("@user:test", True)
    def _is_erased(self, user_id: str, expect: bool) -> None:
        """Assert that the user is erased or not"""
        d = self.store.is_user_erased(user_id)
@@ -1150,7 +1200,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.other_user_token,
        )

-        self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(403, channel.code, msg=channel.json_body)
        self.assertEqual("You are not a server admin", channel.json_body["error"])

        channel = self.make_request(
@@ -1160,7 +1210,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            content=b"{}",
        )

-        self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(403, channel.code, msg=channel.json_body)
        self.assertEqual("You are not a server admin", channel.json_body["error"])

    def test_user_does_not_exist(self):
@@ -1177,6 +1227,58 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        self.assertEqual(404, channel.code, msg=channel.json_body)
        self.assertEqual("M_NOT_FOUND", channel.json_body["errcode"])
+    def test_get_user(self):
+        """
+        Test a simple get of a user.
+        """
+        channel = self.make_request(
+            "GET",
+            self.url_other_user,
+            access_token=self.admin_user_tok,
+        )
+
+        self.assertEqual(200, channel.code, msg=channel.json_body)
+        self.assertEqual("@user:test", channel.json_body["name"])
+        self.assertEqual("User", channel.json_body["displayname"])
+        self._check_fields(channel.json_body)
+
+    def test_get_user_with_sso(self):
+        """
+        Test getting a user with SSO details.
+        """
+        self.get_success(
+            self.store.record_user_external_id(
+                "auth_provider1", "external_id1", self.other_user
+            )
+        )
+        self.get_success(
+            self.store.record_user_external_id(
+                "auth_provider2", "external_id2", self.other_user
+            )
+        )
+
+        channel = self.make_request(
+            "GET",
+            self.url_other_user,
+            access_token=self.admin_user_tok,
+        )
+
+        self.assertEqual(200, channel.code, msg=channel.json_body)
+        self.assertEqual("@user:test", channel.json_body["name"])
+        self.assertEqual(
+            "external_id1", channel.json_body["external_ids"][0]["external_id"]
+        )
+        self.assertEqual(
+            "auth_provider1", channel.json_body["external_ids"][0]["auth_provider"]
+        )
+        self.assertEqual(
+            "external_id2", channel.json_body["external_ids"][1]["external_id"]
+        )
+        self.assertEqual(
+            "auth_provider2", channel.json_body["external_ids"][1]["auth_provider"]
+        )
+        self._check_fields(channel.json_body)
    def test_create_server_admin(self):
        """
        Check that a new admin user is created successfully.
@@ -1184,30 +1286,29 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        url = "/_synapse/admin/v2/users/@bob:test"

        # Create user (server admin)
-        body = json.dumps(
-            {
-                "password": "abc123",
-                "admin": True,
-                "displayname": "Bob's name",
-                "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
-                "avatar_url": "mxc://fibble/wibble",
-            }
-        )
+        body = {
+            "password": "abc123",
+            "admin": True,
+            "displayname": "Bob's name",
+            "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
+            "avatar_url": "mxc://fibble/wibble",
+        }

        channel = self.make_request(
            "PUT",
            url,
            access_token=self.admin_user_tok,
-            content=body.encode(encoding="utf_8"),
+            content=body,
        )

-        self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(201, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertEqual("Bob's name", channel.json_body["displayname"])
        self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
        self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
        self.assertTrue(channel.json_body["admin"])
        self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
+        self._check_fields(channel.json_body)

        # Get user
        channel = self.make_request(
@@ -1216,7 +1317,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertEqual("Bob's name", channel.json_body["displayname"])
        self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
@@ -1225,6 +1326,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        self.assertFalse(channel.json_body["is_guest"])
        self.assertFalse(channel.json_body["deactivated"])
        self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
+        self._check_fields(channel.json_body)
    def test_create_user(self):
        """
@@ -1233,30 +1335,29 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        url = "/_synapse/admin/v2/users/@bob:test"

        # Create user
-        body = json.dumps(
-            {
-                "password": "abc123",
-                "admin": False,
-                "displayname": "Bob's name",
-                "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
-                "avatar_url": "mxc://fibble/wibble",
-            }
-        )
+        body = {
+            "password": "abc123",
+            "admin": False,
+            "displayname": "Bob's name",
+            "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
+            "avatar_url": "mxc://fibble/wibble",
+        }

        channel = self.make_request(
            "PUT",
            url,
            access_token=self.admin_user_tok,
-            content=body.encode(encoding="utf_8"),
+            content=body,
        )

-        self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(201, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertEqual("Bob's name", channel.json_body["displayname"])
        self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
        self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
        self.assertFalse(channel.json_body["admin"])
        self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
+        self._check_fields(channel.json_body)

        # Get user
        channel = self.make_request(
@@ -1265,7 +1366,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertEqual("Bob's name", channel.json_body["displayname"])
        self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
@@ -1275,6 +1376,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        self.assertFalse(channel.json_body["deactivated"])
        self.assertFalse(channel.json_body["shadow_banned"])
        self.assertEqual("mxc://fibble/wibble", channel.json_body["avatar_url"])
+        self._check_fields(channel.json_body)
    @override_config(
        {"limit_usage_by_mau": True, "max_mau_value": 2, "mau_trial_days": 0}
@@ -1311,16 +1413,14 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        url = "/_synapse/admin/v2/users/@bob:test"

        # Create user
-        body = json.dumps({"password": "abc123", "admin": False})
-
        channel = self.make_request(
            "PUT",
            url,
            access_token=self.admin_user_tok,
-            content=body.encode(encoding="utf_8"),
+            content={"password": "abc123", "admin": False},
        )

-        self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(201, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertFalse(channel.json_body["admin"])
@@ -1350,17 +1450,15 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        url = "/_synapse/admin/v2/users/@bob:test"

        # Create user
-        body = json.dumps({"password": "abc123", "admin": False})
-
        channel = self.make_request(
            "PUT",
            url,
            access_token=self.admin_user_tok,
-            content=body.encode(encoding="utf_8"),
+            content={"password": "abc123", "admin": False},
        )

        # Admin user is not blocked by mau anymore
-        self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(201, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertFalse(channel.json_body["admin"])
@@ -1382,21 +1480,19 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        url = "/_synapse/admin/v2/users/@bob:test"

        # Create user
-        body = json.dumps(
-            {
-                "password": "abc123",
-                "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
-            }
-        )
+        body = {
+            "password": "abc123",
+            "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
+        }

        channel = self.make_request(
            "PUT",
            url,
            access_token=self.admin_user_tok,
-            content=body.encode(encoding="utf_8"),
+            content=body,
        )

-        self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(201, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
        self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
@@ -1426,21 +1522,19 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        url = "/_synapse/admin/v2/users/@bob:test"

        # Create user
-        body = json.dumps(
-            {
-                "password": "abc123",
-                "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
-            }
-        )
+        body = {
+            "password": "abc123",
+            "threepids": [{"medium": "email", "address": "bob@bob.bob"}],
+        }

        channel = self.make_request(
            "PUT",
            url,
            access_token=self.admin_user_tok,
-            content=body.encode(encoding="utf_8"),
+            content=body,
        )

-        self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(201, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
        self.assertEqual("bob@bob.bob", channel.json_body["threepids"][0]["address"])
@@ -1457,16 +1551,15 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        """

        # Change password
-        body = json.dumps({"password": "hahaha"})
-
        channel = self.make_request(
            "PUT",
            self.url_other_user,
            access_token=self.admin_user_tok,
-            content=body.encode(encoding="utf_8"),
+            content={"password": "hahaha"},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
+        self._check_fields(channel.json_body)
    def test_set_displayname(self):
        """
@@ -1474,16 +1567,14 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        """

        # Modify user
-        body = json.dumps({"displayname": "foobar"})
-
        channel = self.make_request(
            "PUT",
            self.url_other_user,
            access_token=self.admin_user_tok,
-            content=body.encode(encoding="utf_8"),
+            content={"displayname": "foobar"},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertEqual("foobar", channel.json_body["displayname"])
@@ -1494,7 +1585,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertEqual("foobar", channel.json_body["displayname"])
@@ -1504,18 +1595,14 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        """

        # Delete old and add new threepid to user
-        body = json.dumps(
-            {"threepids": [{"medium": "email", "address": "bob3@bob.bob"}]}
-        )
-
        channel = self.make_request(
            "PUT",
            self.url_other_user,
            access_token=self.admin_user_tok,
-            content=body.encode(encoding="utf_8"),
+            content={"threepids": [{"medium": "email", "address": "bob3@bob.bob"}]},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
        self.assertEqual("bob3@bob.bob", channel.json_body["threepids"][0]["address"])
@@ -1527,7 +1614,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertEqual("email", channel.json_body["threepids"][0]["medium"])
        self.assertEqual("bob3@bob.bob", channel.json_body["threepids"][0]["address"])
@@ -1552,7 +1639,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertFalse(channel.json_body["deactivated"])
        self.assertEqual("foo@bar.com", channel.json_body["threepids"][0]["address"])
@@ -1567,7 +1654,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            content={"deactivated": True},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertTrue(channel.json_body["deactivated"])
        self.assertIsNone(channel.json_body["password_hash"])
@@ -1583,7 +1670,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertTrue(channel.json_body["deactivated"])
        self.assertIsNone(channel.json_body["password_hash"])
@@ -1610,7 +1697,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            content={"deactivated": True},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertTrue(channel.json_body["deactivated"])
@@ -1626,7 +1713,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            content={"displayname": "Foobar"},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertTrue(channel.json_body["deactivated"])
        self.assertEqual("Foobar", channel.json_body["displayname"])
@@ -1650,7 +1737,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
            content={"deactivated": False},
        )

-        self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(400, channel.code, msg=channel.json_body)

        # Reactivate the user.
        channel = self.make_request(
@@ -1659,7 +1746,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
            content={"deactivated": False, "password": "foo"},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertFalse(channel.json_body["deactivated"])
        self.assertIsNotNone(channel.json_body["password_hash"])
@@ -1681,7 +1768,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
            content={"deactivated": False, "password": "foo"},
        )

-        self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(403, channel.code, msg=channel.json_body)
        self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])

        # Reactivate the user without a password.
@@ -1691,7 +1778,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
            content={"deactivated": False},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertFalse(channel.json_body["deactivated"])
        self.assertIsNone(channel.json_body["password_hash"])
@@ -1713,7 +1800,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
            content={"deactivated": False, "password": "foo"},
        )

-        self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(403, channel.code, msg=channel.json_body)
        self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])

        # Reactivate the user without a password.
@@ -1723,7 +1810,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
            content={"deactivated": False},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertFalse(channel.json_body["deactivated"])
        self.assertIsNone(channel.json_body["password_hash"])
@@ -1742,7 +1829,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            content={"admin": True},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertTrue(channel.json_body["admin"])
@@ -1753,7 +1840,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@user:test", channel.json_body["name"])
        self.assertTrue(channel.json_body["admin"])
@@ -1772,7 +1859,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            content={"password": "abc123"},
        )

-        self.assertEqual(201, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(201, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertEqual("bob", channel.json_body["displayname"])
@@ -1783,7 +1870,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertEqual("bob", channel.json_body["displayname"])
        self.assertEqual(0, channel.json_body["deactivated"])
@@ -1796,7 +1883,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            content={"password": "abc123", "deactivated": "false"},
        )

-        self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(400, channel.code, msg=channel.json_body)

        # Check user is not deactivated
        channel = self.make_request(
@@ -1805,7 +1892,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertEqual("@bob:test", channel.json_body["name"])
        self.assertEqual("bob", channel.json_body["displayname"])
@@ -1830,7 +1917,7 @@ class UserRestTestCase(unittest.HomeserverTestCase):
            access_token=self.admin_user_tok,
            content={"deactivated": True},
        )

-        self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
+        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertTrue(channel.json_body["deactivated"])
        self.assertIsNone(channel.json_body["password_hash"])
        self._is_erased(user_id, False)
@@ -1838,6 +1925,25 @@ class UserRestTestCase(unittest.HomeserverTestCase):
        self.assertIsNone(self.get_success(d))
        self._is_erased(user_id, True)
+    def _check_fields(self, content: JsonDict):
+        """Checks that the expected user attributes are present in content
+
+        Args:
+            content: Content dictionary to check
+        """
+        self.assertIn("displayname", content)
+        self.assertIn("threepids", content)
+        self.assertIn("avatar_url", content)
+        self.assertIn("admin", content)
+        self.assertIn("deactivated", content)
+        self.assertIn("shadow_banned", content)
+        self.assertIn("password_hash", content)
+        self.assertIn("creation_ts", content)
+        self.assertIn("appservice_id", content)
+        self.assertIn("consent_server_notice_sent", content)
+        self.assertIn("consent_version", content)
+        self.assertIn("external_ids", content)
+

class UserMembershipRestTestCase(unittest.HomeserverTestCase):
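For context, these tests exercise the admin account-deactivation API; the move from json.dumps(...).encode(...) bodies to plain dicts works because the test's make_request JSON-encodes dict content itself. A hedged sketch of the equivalent call an operator would make from outside the test suite (the /_synapse/admin/v1/deactivate/<user_id> route is an assumption here, matching Synapse's documented admin API rather than anything shown in these hunks):

import json
from urllib.request import Request, urlopen

# Illustrative only: deactivate and erase a user via the admin API.
# Host, user ID, and token are placeholders.
req = Request(
    "https://homeserver.example/_synapse/admin/v1/deactivate/@user:test",
    data=json.dumps({"erase": True}).encode("utf-8"),
    headers={
        "Authorization": "Bearer <admin_access_token>",
        "Content-Type": "application/json",
    },
    method="POST",
)
with urlopen(req) as resp:
    print(resp.status)  # expect 200 on success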

@@ -52,6 +52,7 @@ class RestHelper:
        room_version: str = None,
        tok: str = None,
        expect_code: int = 200,
+        extra_content: Optional[Dict] = None,
    ) -> str:
        """
        Create a room.
@@ -72,7 +73,7 @@ class RestHelper:
            temp_id = self.auth_user_id
            self.auth_user_id = room_creator
        path = "/_matrix/client/r0/createRoom"
-        content = {}
+        content = extra_content or {}
        if not is_public:
            content["visibility"] = "private"
        if room_version:

@@ -20,7 +20,7 @@ import synapse.rest.admin
from synapse.api.constants import LoginType
from synapse.handlers.ui_auth.checkers import UserInteractiveAuthChecker
from synapse.rest.client.v1 import login
-from synapse.rest.client.v2_alpha import auth, devices, register
+from synapse.rest.client.v2_alpha import account, auth, devices, register
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.types import JsonDict, UserID
@@ -498,3 +498,221 @@ class UIAuthTests(unittest.HomeserverTestCase):
        self.delete_device(
            self.user_tok, self.device_id, 403, body={"auth": {"session": session_id}}
        )
+
+
+class RefreshAuthTests(unittest.HomeserverTestCase):
+    servlets = [
+        auth.register_servlets,
+        account.register_servlets,
+        login.register_servlets,
+        synapse.rest.admin.register_servlets_for_client_rest_resource,
+        register.register_servlets,
+    ]
+    hijack_auth = False
+
+    def prepare(self, reactor, clock, hs):
+        self.user_pass = "pass"
+        self.user = self.register_user("test", self.user_pass)
+
+    def test_login_issue_refresh_token(self):
+        """
+        A login response should include a refresh_token only if asked.
+        """
+        # Test login
+        body = {"type": "m.login.password", "user": "test", "password": self.user_pass}
+
+        login_without_refresh = self.make_request(
+            "POST", "/_matrix/client/r0/login", body
+        )
+        self.assertEqual(login_without_refresh.code, 200, login_without_refresh.result)
+        self.assertNotIn("refresh_token", login_without_refresh.json_body)
+
+        login_with_refresh = self.make_request(
+            "POST",
+            "/_matrix/client/r0/login?org.matrix.msc2918.refresh_token=true",
+            body,
+        )
+        self.assertEqual(login_with_refresh.code, 200, login_with_refresh.result)
+        self.assertIn("refresh_token", login_with_refresh.json_body)
+        self.assertIn("expires_in_ms", login_with_refresh.json_body)
+
+    def test_register_issue_refresh_token(self):
+        """
+        A register response should include a refresh_token only if asked.
+        """
+        register_without_refresh = self.make_request(
+            "POST",
+            "/_matrix/client/r0/register",
+            {
+                "username": "test2",
+                "password": self.user_pass,
+                "auth": {"type": LoginType.DUMMY},
+            },
+        )
+        self.assertEqual(
+            register_without_refresh.code, 200, register_without_refresh.result
+        )
+        self.assertNotIn("refresh_token", register_without_refresh.json_body)
+
+        register_with_refresh = self.make_request(
+            "POST",
+            "/_matrix/client/r0/register?org.matrix.msc2918.refresh_token=true",
+            {
+                "username": "test3",
+                "password": self.user_pass,
+                "auth": {"type": LoginType.DUMMY},
+            },
+        )
+        self.assertEqual(register_with_refresh.code, 200, register_with_refresh.result)
+        self.assertIn("refresh_token", register_with_refresh.json_body)
+        self.assertIn("expires_in_ms", register_with_refresh.json_body)
+
+    def test_token_refresh(self):
+        """
+        A refresh token can be used to issue a new access token.
+        """
+        body = {"type": "m.login.password", "user": "test", "password": self.user_pass}
+        login_response = self.make_request(
+            "POST",
+            "/_matrix/client/r0/login?org.matrix.msc2918.refresh_token=true",
+            body,
+        )
+        self.assertEqual(login_response.code, 200, login_response.result)
+
+        refresh_response = self.make_request(
+            "POST",
+            "/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
+            {"refresh_token": login_response.json_body["refresh_token"]},
+        )
+        self.assertEqual(refresh_response.code, 200, refresh_response.result)
+        self.assertIn("access_token", refresh_response.json_body)
+        self.assertIn("refresh_token", refresh_response.json_body)
+        self.assertIn("expires_in_ms", refresh_response.json_body)
+
+        # The access and refresh tokens should be different from the original ones after refresh
+        self.assertNotEqual(
+            login_response.json_body["access_token"],
+            refresh_response.json_body["access_token"],
+        )
+        self.assertNotEqual(
+            login_response.json_body["refresh_token"],
+            refresh_response.json_body["refresh_token"],
+        )
+
+    @override_config({"access_token_lifetime": "1m"})
+    def test_refresh_token_expiration(self):
+        """
+        The access token should expire after the lifetime specified in the config.
+        """
+        body = {"type": "m.login.password", "user": "test", "password": self.user_pass}
+        login_response = self.make_request(
+            "POST",
+            "/_matrix/client/r0/login?org.matrix.msc2918.refresh_token=true",
+            body,
+        )
+        self.assertEqual(login_response.code, 200, login_response.result)
+        self.assertApproximates(
+            login_response.json_body["expires_in_ms"], 60 * 1000, 100
+        )
+
+        refresh_response = self.make_request(
+            "POST",
+            "/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
+            {"refresh_token": login_response.json_body["refresh_token"]},
+        )
+        self.assertEqual(refresh_response.code, 200, refresh_response.result)
+        self.assertApproximates(
+            refresh_response.json_body["expires_in_ms"], 60 * 1000, 100
+        )
+
+    def test_refresh_token_invalidation(self):
+        """Refresh tokens are invalidated after first use of the next token.
+
+        A refresh token is considered invalid if:
+            - it was already used at least once
+            - and either
+                - the next access token was used
+                - the next refresh token was used
+
+        The chain of tokens goes like this:
+
+            login -|-> first_refresh -> third_refresh (fails)
+                   |-> second_refresh -> fifth_refresh
+                   |-> fourth_refresh (fails)
+        """
+        body = {"type": "m.login.password", "user": "test", "password": self.user_pass}
+        login_response = self.make_request(
+            "POST",
+            "/_matrix/client/r0/login?org.matrix.msc2918.refresh_token=true",
+            body,
+        )
+        self.assertEqual(login_response.code, 200, login_response.result)
+
+        # This first refresh should work properly
+        first_refresh_response = self.make_request(
+            "POST",
+            "/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
+            {"refresh_token": login_response.json_body["refresh_token"]},
+        )
+        self.assertEqual(
+            first_refresh_response.code, 200, first_refresh_response.result
+        )
+
+        # This one as well, since the token in the first one was never used
+        second_refresh_response = self.make_request(
+            "POST",
+            "/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
+            {"refresh_token": login_response.json_body["refresh_token"]},
+        )
+        self.assertEqual(
+            second_refresh_response.code, 200, second_refresh_response.result
+        )
+
+        # This one should not, since the token from the first refresh is not valid anymore
+        third_refresh_response = self.make_request(
+            "POST",
+            "/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
+            {"refresh_token": first_refresh_response.json_body["refresh_token"]},
+        )
+        self.assertEqual(
+            third_refresh_response.code, 401, third_refresh_response.result
+        )
+
+        # The associated access token should also be invalid
+        whoami_response = self.make_request(
+            "GET",
+            "/_matrix/client/r0/account/whoami",
+            access_token=first_refresh_response.json_body["access_token"],
+        )
+        self.assertEqual(whoami_response.code, 401, whoami_response.result)
+
+        # But all other tokens should work (they will expire after some time)
+        for access_token in [
+            second_refresh_response.json_body["access_token"],
+            login_response.json_body["access_token"],
+        ]:
+            whoami_response = self.make_request(
+                "GET", "/_matrix/client/r0/account/whoami", access_token=access_token
+            )
+            self.assertEqual(whoami_response.code, 200, whoami_response.result)
+
+        # Now that the access token from the last valid refresh was used once, refreshing with the N-1 token should fail
+        fourth_refresh_response = self.make_request(
+            "POST",
+            "/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
+            {"refresh_token": login_response.json_body["refresh_token"]},
+        )
+        self.assertEqual(
+            fourth_refresh_response.code, 403, fourth_refresh_response.result
+        )
+
+        # But refreshing from the last valid refresh token still works
+        fifth_refresh_response = self.make_request(
+            "POST",
+            "/_matrix/client/unstable/org.matrix.msc2918.refresh_token/refresh",
+            {"refresh_token": second_refresh_response.json_body["refresh_token"]},
+        )
+        self.assertEqual(
+            fifth_refresh_response.code, 200, fifth_refresh_response.result
+        )

@@ -41,35 +41,7 @@ class FilterTestCase(unittest.HomeserverTestCase):
        channel = self.make_request("GET", "/sync")

        self.assertEqual(channel.code, 200)
-        self.assertTrue(
-            {
-                "next_batch",
-                "rooms",
-                "presence",
-                "account_data",
-                "to_device",
-                "device_lists",
-            }.issubset(set(channel.json_body.keys()))
-        )
-
-    def test_sync_presence_disabled(self):
-        """
-        When presence is disabled, the key does not appear in /sync.
-        """
-        self.hs.config.use_presence = False
-
-        channel = self.make_request("GET", "/sync")
-
-        self.assertEqual(channel.code, 200)
-        self.assertTrue(
-            {
-                "next_batch",
-                "rooms",
-                "account_data",
-                "to_device",
-                "device_lists",
-            }.issubset(set(channel.json_body.keys()))
-        )
+        self.assertIn("next_batch", channel.json_body)


class SyncFilterTestCase(unittest.HomeserverTestCase):

@@ -306,8 +306,9 @@ class TestResourceLimitsServerNoticesWithRealRooms(unittest.HomeserverTestCase):
        channel = self.make_request("GET", "/sync?timeout=0", access_token=tok)

-        invites = channel.json_body["rooms"]["invite"]
-        self.assertEqual(len(invites), 0, invites)
+        self.assertNotIn(
+            "rooms", channel.json_body, "Got invites without server notice"
+        )

    def test_invite_with_notice(self):
        """Tests that, if the MAU limit is hit, the server notices user invites each user
@@ -364,6 +365,7 @@ class TestResourceLimitsServerNoticesWithRealRooms(unittest.HomeserverTestCase):
        # We could also pick another user and sync with it, which would return an
        # invite to a system notices room, but it doesn't matter which user we're
        # using so we use the last one because it saves us an extra sync.
-        invites = channel.json_body["rooms"]["invite"]
+        if "rooms" in channel.json_body:
+            invites = channel.json_body["rooms"]["invite"]

        # Make sure we have an invite to process.

@@ -15,7 +15,7 @@
from unittest.mock import Mock

-from synapse.util.caches.lrucache import LruCache
+from synapse.util.caches.lrucache import LruCache, setup_expire_lru_cache_entries
from synapse.util.caches.treecache import TreeCache

from tests import unittest
@@ -260,3 +260,47 @@ class LruCacheSizedTestCase(unittest.HomeserverTestCase):
        self.assertEquals(cache["key3"], [3])
        self.assertEquals(cache["key4"], [4])
        self.assertEquals(cache["key5"], [5, 6])
+
+
+class TimeEvictionTestCase(unittest.HomeserverTestCase):
+    """Test that time based eviction works correctly."""
+
+    def default_config(self):
+        config = super().default_config()
+
+        config.setdefault("caches", {})["expiry_time"] = "30m"
+
+        return config
+
+    def test_evict(self):
+        setup_expire_lru_cache_entries(self.hs)
+
+        cache = LruCache(5, clock=self.hs.get_clock())
+
+        # Check that we evict entries we haven't accessed for 30 minutes.
+        cache["key1"] = 1
+        cache["key2"] = 2
+
+        self.reactor.advance(20 * 60)
+
+        self.assertEqual(cache.get("key1"), 1)
+
+        self.reactor.advance(20 * 60)
+
+        # We have only touched `key1` in the last 30m, so we expect that to
+        # still be in the cache while `key2` should have been evicted.
+        self.assertEqual(cache.get("key1"), 1)
+        self.assertEqual(cache.get("key2"), None)
+
+        # Check that re-adding an expired key works correctly.
+        cache["key2"] = 3
+        self.assertEqual(cache.get("key2"), 3)
+
+        self.reactor.advance(20 * 60)
+
+        self.assertEqual(cache.get("key2"), 3)
+
+        self.reactor.advance(20 * 60)
+
+        self.assertEqual(cache.get("key1"), None)
+        self.assertEqual(cache.get("key2"), 3)
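The semantics being tested here: every cache read refreshes an entry's last-access time, and anything untouched for longer than the configured expiry_time gets evicted. A minimal stand-alone sketch of that policy (this is not Synapse's implementation, which instead threads the entries onto a global linked list swept by a background job):

import time
from collections import OrderedDict

class TtlLruCache:
    """Tiny LRU cache whose entries expire `ttl` seconds after last access."""

    def __init__(self, ttl: float) -> None:
        self._ttl = ttl
        self._entries: "OrderedDict[str, tuple]" = OrderedDict()

    def set(self, key: str, value: object) -> None:
        self._entries[key] = (time.monotonic(), value)
        self._entries.move_to_end(key)  # most recently touched goes last

    def get(self, key: str, default=None):
        self._evict()
        if key not in self._entries:
            return default
        _, value = self._entries[key]
        # A read counts as an access, so refresh the timestamp.
        self._entries[key] = (time.monotonic(), value)
        self._entries.move_to_end(key)
        return value

    def _evict(self) -> None:
        # Oldest-accessed entries sit at the front; stop at the first fresh one.
        now = time.monotonic()
        while self._entries:
            key, (last_access, _) = next(iter(self._entries.items()))
            if now - last_access < self._ttl:
                break
            del self._entries[key]

cache = TtlLruCache(ttl=30 * 60)  # 30 minutes, matching the test config above
cache.set("key1", 1)
assert cache.get("key1") == 1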