Merge tag 'v1.2.0' into shhs

No changes since v1.2.0rc2.

-----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEgQG31Z317NrSMt0QiISIDS7+X/QFAl05rSATHGFuZHJld0Bh
 bW9yZ2FuLnh5egAKCRCIhIgNLv5f9D36D/9n2clmrMWaVTtRz7SO5F2rY+JuA61l
 kb3Xq1SlYk2WRWD7/K4RnGbYixjAvCWNsIVs9NYhqG3cyQ+Gb0+X4zO6XzvZ5Th0
 h2bqMyNMBvRhy75rFrygS21iSvUjst+e4nrr1XPnGblXPnXgUE/WjJ05m8xTjn/a
 rMUjQbWllpcMEmqPEEFwo84bBMs77evHWIb9k9jEBT6LUfp1BPodXmLdtIEF22ug
 iewx6pBcwtv5vVpWxi9BiUsio0HJjRsTQOFVR9E9fT0rQEZcH1DJjxqk1Z9kJEDd
 krtj/RV2frtteUmxoMcSO2hdUlfm2/0Qi0vgdFwUDT410wweWLG/NYrHrJ3Tlyyz
 luD0PGPiB8BAanytmUEbzTd0yAACV2uBeheCk4zR04gdQ8zkzQyeSn5JUhj5ba8+
 LGi7eZJhnXJEPeZDqioy8y5CdWh3PlJo2rgICH9se6yD0GYFc693j2+yMgygunnB
 ZkuC2Q5lozbOsPoeL/cW80RrFOy2H8V5nU/VTSwAgjP+xIkm9Yk9qVHVusGINsdv
 Df5NWhN8wOIyah4w45+2kbt4GH92fPxf9D+0EZ1wm3K0AmvzIbK94jdXw5oBB24R
 EkkK/8oTAM/ErbnP+sBxeDjVJJGTXsRYL7qC9HeXVtaDDNmqjp1Lw7seuk0dZgap
 S1U6onGrH92xdg==
 =Ej8S
 -----END PGP SIGNATURE-----

Amber H. Brown 2019-07-26 01:48:50 +10:00
commit f61cdc14e7
111 changed files with 4039 additions and 1281 deletions


@@ -184,6 +184,7 @@ steps:
       image: "matrixdotorg/sytest-synapse:py35"
       propagate-environment: true
      always-pull: true
+      workdir: "/src"
      retry:
        automatic:
          - exit_status: -1

@@ -204,6 +205,7 @@ steps:
      image: "matrixdotorg/sytest-synapse:py35"
      propagate-environment: true
      always-pull: true
+      workdir: "/src"
      retry:
        automatic:
          - exit_status: -1

@@ -226,6 +228,7 @@ steps:
      image: "matrixdotorg/sytest-synapse:py35"
      propagate-environment: true
      always-pull: true
+      workdir: "/src"
      soft_fail: true
      retry:
        automatic:


@@ -1,3 +1,99 @@
Synapse 1.2.0 (2019-07-25)
==========================

No significant changes.


Synapse 1.2.0rc2 (2019-07-24)
=============================

Bugfixes
--------

- Fix a regression introduced in v1.2.0rc1 which led to incorrect labels on some prometheus metrics. ([\#5734](https://github.com/matrix-org/synapse/issues/5734))


Synapse 1.2.0rc1 (2019-07-22)
=============================

Features
--------

- Add support for opentracing. ([\#5544](https://github.com/matrix-org/synapse/issues/5544), [\#5712](https://github.com/matrix-org/synapse/issues/5712))
- Add ability to pull all locally stored events out of synapse that a particular user can see. ([\#5589](https://github.com/matrix-org/synapse/issues/5589))
- Add a basic admin command app to allow server operators to run Synapse admin commands separately from the main production instance. ([\#5597](https://github.com/matrix-org/synapse/issues/5597))
- Add `sender` and `origin_server_ts` fields to `m.replace`. ([\#5613](https://github.com/matrix-org/synapse/issues/5613))
- Add default push rule to ignore reactions. ([\#5623](https://github.com/matrix-org/synapse/issues/5623))
- Include the original event when asking for its relations. ([\#5626](https://github.com/matrix-org/synapse/issues/5626))
- Implement `session_lifetime` configuration option, after which access tokens will expire. ([\#5660](https://github.com/matrix-org/synapse/issues/5660))
- Return "This account has been deactivated" when a deactivated user tries to login. ([\#5674](https://github.com/matrix-org/synapse/issues/5674))
- Enable aggregations support by default. ([\#5714](https://github.com/matrix-org/synapse/issues/5714))

Bugfixes
--------

- Fix 'utime went backwards' errors on daemonization. ([\#5609](https://github.com/matrix-org/synapse/issues/5609))
- Various minor fixes to the federation request rate limiter. ([\#5621](https://github.com/matrix-org/synapse/issues/5621))
- Forbid viewing relations on an event once it has been redacted. ([\#5629](https://github.com/matrix-org/synapse/issues/5629))
- Fix requests to the `/store_invite` endpoint of identity servers being sent in the wrong format. ([\#5638](https://github.com/matrix-org/synapse/issues/5638))
- Fix newly-registered users not being able to lookup their own profile without joining a room. ([\#5644](https://github.com/matrix-org/synapse/issues/5644))
- Fix bug in #5626 that prevented the original_event field from actually having the contents of the original event in a call to `/relations`. ([\#5654](https://github.com/matrix-org/synapse/issues/5654))
- Fix 3PID bind requests being sent to identity servers as `application/x-form-www-urlencoded` data, which is deprecated. ([\#5658](https://github.com/matrix-org/synapse/issues/5658))
- Fix some problems with authenticating redactions in recent room versions. ([\#5699](https://github.com/matrix-org/synapse/issues/5699), [\#5700](https://github.com/matrix-org/synapse/issues/5700), [\#5707](https://github.com/matrix-org/synapse/issues/5707))
- Ignore redactions of m.room.create events. ([\#5701](https://github.com/matrix-org/synapse/issues/5701))

Updates to the Docker image
---------------------------

- Base Docker image on a newer Alpine Linux version (3.8 -> 3.10). ([\#5619](https://github.com/matrix-org/synapse/issues/5619))
- Add missing space in default logging file format generated by the Docker image. ([\#5620](https://github.com/matrix-org/synapse/issues/5620))

Improved Documentation
----------------------

- Add information about nginx normalisation to reverse_proxy.rst. Contributed by @skalarproduktraum - thanks! ([\#5397](https://github.com/matrix-org/synapse/issues/5397))
- --no-pep-517 should be --no-use-pep517 in the documentation to setup the development environment. ([\#5651](https://github.com/matrix-org/synapse/issues/5651))
- Improvements to Postgres setup instructions. Contributed by @Lrizika - thanks! ([\#5661](https://github.com/matrix-org/synapse/issues/5661))
- Minor tweaks to postgres documentation. ([\#5675](https://github.com/matrix-org/synapse/issues/5675))

Deprecations and Removals
-------------------------

- Remove support for the `invite_3pid_guest` configuration setting. ([\#5625](https://github.com/matrix-org/synapse/issues/5625))

Internal Changes
----------------

- Move logging code out of `synapse.util` and into `synapse.logging`. ([\#5606](https://github.com/matrix-org/synapse/issues/5606), [\#5617](https://github.com/matrix-org/synapse/issues/5617))
- Add a blacklist file to the repo to blacklist certain sytests from failing CI. ([\#5611](https://github.com/matrix-org/synapse/issues/5611))
- Make runtime errors surrounding password reset emails much clearer. ([\#5616](https://github.com/matrix-org/synapse/issues/5616))
- Remove dead code for persisting outgoing federation transactions. ([\#5622](https://github.com/matrix-org/synapse/issues/5622))
- Add `lint.sh` to the scripts-dev folder which will run all linting steps required by CI. ([\#5627](https://github.com/matrix-org/synapse/issues/5627))
- Move RegistrationHandler.get_or_create_user to test code. ([\#5628](https://github.com/matrix-org/synapse/issues/5628))
- Add some more common python virtual-environment paths to the black exclusion list. ([\#5630](https://github.com/matrix-org/synapse/issues/5630))
- Some counter metrics exposed over Prometheus have been renamed, with the old names preserved for backwards compatibility and deprecated. See `docs/metrics-howto.rst` for details. ([\#5636](https://github.com/matrix-org/synapse/issues/5636))
- Unblacklist some user_directory sytests. ([\#5637](https://github.com/matrix-org/synapse/issues/5637))
- Factor out some redundant code in the login implementation. ([\#5639](https://github.com/matrix-org/synapse/issues/5639))
- Update ModuleApi to avoid register(generate_token=True). ([\#5640](https://github.com/matrix-org/synapse/issues/5640))
- Remove access-token support from `RegistrationHandler.register`, and rename it. ([\#5641](https://github.com/matrix-org/synapse/issues/5641))
- Remove access-token support from `RegistrationStore.register`, and rename it. ([\#5642](https://github.com/matrix-org/synapse/issues/5642))
- Improve logging for auto-join when a new user is created. ([\#5643](https://github.com/matrix-org/synapse/issues/5643))
- Remove unused and unnecessary check for FederationDeniedError in _exception_to_failure. ([\#5645](https://github.com/matrix-org/synapse/issues/5645))
- Fix a small typo in a code comment. ([\#5655](https://github.com/matrix-org/synapse/issues/5655))
- Clean up exception handling around client access tokens. ([\#5656](https://github.com/matrix-org/synapse/issues/5656))
- Add a mechanism for per-test homeserver configuration in the unit tests. ([\#5657](https://github.com/matrix-org/synapse/issues/5657))
- Inline issue_access_token. ([\#5659](https://github.com/matrix-org/synapse/issues/5659))
- Update the sytest BuildKite configuration to checkout Synapse in `/src`. ([\#5664](https://github.com/matrix-org/synapse/issues/5664))
- Add a `docker` type to the towncrier configuration. ([\#5673](https://github.com/matrix-org/synapse/issues/5673))
- Convert `synapse.federation.transport.server` to `async`. Might improve some stack traces. ([\#5689](https://github.com/matrix-org/synapse/issues/5689))
- Documentation for opentracing. ([\#5703](https://github.com/matrix-org/synapse/issues/5703))


Synapse 1.1.0 (2019-07-04)
==========================


@@ -30,11 +30,10 @@ use github's pull request workflow to review the contribution, and either ask
 you to make any refinements needed or merge it and make them ourselves. The
 changes will then land on master when we next do a release.

-We use `CircleCI <https://circleci.com/gh/matrix-org>`_ and `Buildkite
-<https://buildkite.com/matrix-dot-org/synapse>`_ for continuous integration.
-Buildkite builds need to be authorised by a maintainer. If your change breaks
-the build, this will be shown in GitHub, so please keep an eye on the pull
-request for feedback.
+We use `Buildkite <https://buildkite.com/matrix-dot-org/synapse>`_ for
+continuous integration. Buildkite builds need to be authorised by a
+maintainer. If your change breaks the build, this will be shown in GitHub, so
+please keep an eye on the pull request for feedback.

 To run unit tests in a local development environment, you can use:
@@ -70,13 +69,21 @@ All changes, even minor ones, need a corresponding changelog / newsfragment
 entry. These are managed by Towncrier
 (https://github.com/hawkowl/towncrier).

-To create a changelog entry, make a new file in the ``changelog.d``
-file named in the format of ``PRnumber.type``. The type can be
-one of ``feature``, ``bugfix``, ``removal`` (also used for
-deprecations), or ``misc`` (for internal-only changes).
+To create a changelog entry, make a new file in the ``changelog.d`` directory,
+named in the format of ``PRnumber.type``. The type can be one of the following:
+
+* ``feature``.
+* ``bugfix``.
+* ``docker`` (for updates to the Docker image).
+* ``doc`` (for updates to the documentation).
+* ``removal`` (also used for deprecations).
+* ``misc`` (for internal-only changes).

-The content of the file is your changelog entry, which can contain Markdown
-formatting. The entry should end with a full stop ('.') for consistency.
+The content of the file is your changelog entry, which should be a short
+description of your change in the same style as the rest of our `changelog
+<https://github.com/matrix-org/synapse/blob/master/CHANGES.md>`_. The file can
+contain Markdown formatting, and should end with a full stop ('.') for
+consistency.

 Adding credits to the changelog is encouraged, we value your
 contributions and would like to have you shouted out in the release notes!
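As a concrete illustration of the newsfragment workflow described above, a changelog entry for a hypothetical PR could be created like so (the PR number 5746 and the entry text are invented for the example; in practice you would usually just create the file by hand):

.. code-block:: python

   from pathlib import Path

   # Hypothetical PR number and entry text, purely illustrative.
   entry = Path("changelog.d") / "5746.bugfix"
   entry.write_text("Fix an example bug in the example module.\n")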


@@ -272,7 +272,7 @@ to install using pip and a virtualenv::

    virtualenv -p python3 env
    source env/bin/activate
-   python -m pip install --no-pep-517 -e .[all]
+   python -m pip install --no-use-pep517 -e .[all]

 This will run a process of downloading and installing all the needed
 dependencies into a virtual env.


@@ -49,6 +49,13 @@ returned by the Client-Server API:
    # configured on port 443.
    curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"

+Upgrading to v1.2.0
+===================
+
+Some counter metrics have been renamed, with the old names deprecated. See
+`the metrics documentation <docs/metrics-howto.rst#renaming-of-metrics--deprecation-of-old-names-in-12>`_
+for details.
+
 Upgrading to v1.1.0
 ===================


@@ -1 +0,0 @@
-Move logging code out of `synapse.util` and into `synapse.logging`.


@@ -1 +0,0 @@
-Fix 'utime went backwards' errors on daemonization.


@@ -1 +0,0 @@
-Add a blacklist file to the repo to blacklist certain sytests from failing CI.


@@ -1 +0,0 @@
-Make runtime errors surrounding password reset emails much clearer.


@@ -1 +0,0 @@
-Move logging code out of `synapse.util` and into `synapse.logging`.

debian/changelog

@@ -1,9 +1,15 @@
-matrix-synapse-py3 (1.1.0-1) UNRELEASED; urgency=medium
+matrix-synapse-py3 (1.2.0) stable; urgency=medium

   [ Amber Brown ]
   * Update logging config defaults to match API changes in Synapse.

- -- Erik Johnston <erikj@rae>  Thu, 04 Jul 2019 13:59:02 +0100
+  [ Richard van der Hoff ]
+  * Add Recommends and Depends for some libraries which you probably want.
+
+  [ Synapse Packaging team ]
+  * New synapse release 1.2.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Thu, 25 Jul 2019 14:10:07 +0100

 matrix-synapse-py3 (1.1.0) stable; urgency=medium

debian/control

@@ -2,16 +2,20 @@ Source: matrix-synapse-py3
 Section: contrib/python
 Priority: extra
 Maintainer: Synapse Packaging team <packages@matrix.org>
+# keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv.
 Build-Depends:
  debhelper (>= 9),
  dh-systemd,
  dh-virtualenv (>= 1.1),
+ libsystemd-dev,
+ libpq-dev,
  lsb-release,
  python3-dev,
  python3,
  python3-setuptools,
  python3-pip,
  python3-venv,
+ libsqlite3-dev,
  tar,
 Standards-Version: 3.9.8
 Homepage: https://github.com/matrix-org/synapse

@@ -28,9 +32,12 @@ Depends:
  debconf,
  python3-distutils|libpython3-stdlib (<< 3.6),
  ${misc:Depends},
+ ${shlibs:Depends},
  ${synapse:pydepends},
 # some of our scripts use perl, but none of them are important,
 # so we put perl:Depends in Suggests rather than Depends.
+Recommends:
+ ${shlibs1:Recommends},
 Suggests:
  sqlite3,
  ${perl:Depends},

debian/rules

@@ -3,15 +3,29 @@
 # Build Debian package using https://github.com/spotify/dh-virtualenv
 #

+# assume we only have one package
+PACKAGE_NAME:=`dh_listpackages`
+
 override_dh_systemd_enable:
        dh_systemd_enable --name=matrix-synapse

 override_dh_installinit:
        dh_installinit --name=matrix-synapse

+# we don't really want to strip the symbols from our object files.
 override_dh_strip:

 override_dh_shlibdeps:
+       # make the postgres package's dependencies a recommendation
+       # rather than a hard dependency.
+       find debian/$(PACKAGE_NAME)/ -path '*/site-packages/psycopg2/*.so' | \
+           xargs dpkg-shlibdeps -Tdebian/$(PACKAGE_NAME).substvars \
+               -pshlibs1 -dRecommends
+
+       # all the other dependencies can be normal 'Depends' requirements,
+       # except for PIL's, which is self-contained and which confuses
+       # dpkg-shlibdeps.
+       dh_shlibdeps -X site-packages/PIL/.libs -X site-packages/psycopg2

 override_dh_virtualenv:
        ./debian/build_virtualenv


@@ -16,7 +16,7 @@ ARG PYTHON_VERSION=3.7
 ###
 ### Stage 0: builder
 ###
-FROM docker.io/python:${PYTHON_VERSION}-alpine3.8 as builder
+FROM docker.io/python:${PYTHON_VERSION}-alpine3.10 as builder

 # install the OS build deps

@@ -55,7 +55,7 @@ RUN pip install --prefix="/install" --no-warn-script-location \
 ###
 ### Stage 1: runtime
 ###
-FROM docker.io/python:${PYTHON_VERSION}-alpine3.8
+FROM docker.io/python:${PYTHON_VERSION}-alpine3.10

 # xmlsec is required for saml support
 RUN apk add --no-cache --virtual .runtime_deps \

@@ -43,6 +43,9 @@ RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
 FROM ${distro}

 # Install the build dependencies
+#
+# NB: keep this list in sync with the list of build-deps in debian/control
+# TODO: it would be nice to do that automatically.
 RUN apt-get update -qq -o Acquire::Languages=none \
     && env DEBIAN_FRONTEND=noninteractive apt-get install \
         -yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \


@@ -2,7 +2,7 @@ version: 1

 formatters:
   precise:
-    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s'
+    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'

 filters:
   context:


@@ -59,6 +59,108 @@ How to monitor Synapse metrics using Prometheus
 Restart Prometheus.

Renaming of metrics & deprecation of old names in 1.2
-----------------------------------------------------

Synapse 1.2 updates the Prometheus metrics to match the naming convention of the
upstream ``prometheus_client``. The old names are considered deprecated and will
be removed in a future version of Synapse.
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| New Name | Old Name |
+=============================================================================+=======================================================================+
| python_gc_objects_collected_total | python_gc_objects_collected |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_objects_uncollectable_total | python_gc_objects_uncollectable |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_collections_total | python_gc_collections |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| process_cpu_seconds_total | process_cpu_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_transactions_total | synapse_federation_client_sent_transactions |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_events_processed_total | synapse_federation_client_events_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_count_total | synapse_event_processing_loop_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_room_count_total | synapse_event_processing_loop_room_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_count_total | synapse_util_metrics_block_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_time_seconds_total | synapse_util_metrics_block_time_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_utime_seconds_total | synapse_util_metrics_block_ru_utime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_stime_seconds_total | synapse_util_metrics_block_ru_stime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_count_total | synapse_util_metrics_block_db_txn_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_duration_seconds_total | synapse_util_metrics_block_db_txn_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_sched_duration_seconds_total | synapse_util_metrics_block_db_sched_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_start_count_total | synapse_background_process_start_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_utime_seconds_total | synapse_background_process_ru_utime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_stime_seconds_total | synapse_background_process_ru_stime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_count_total | synapse_background_process_db_txn_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_duration_seconds_total | synapse_background_process_db_txn_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_sched_duration_seconds_total | synapse_background_process_db_sched_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_total | synapse_storage_events_persisted_events |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_sep_total | synapse_storage_events_persisted_events_sep |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_total | synapse_storage_events_state_delta |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_single_event_total | synapse_storage_events_state_delta_single_event |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_reuse_delta_total | synapse_storage_events_state_delta_reuse_delta |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_pdus_total | synapse_federation_server_received_pdus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_edus_total | synapse_federation_server_received_edus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_notified_presence_total | synapse_handler_presence_notified_presence |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_out_total | synapse_handler_presence_federation_presence_out |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_presence_updates_total | synapse_handler_presence_presence_updates |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_timers_fired_total | synapse_handler_presence_timers_fired |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_total | synapse_handler_presence_federation_presence |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_bump_active_time_total | synapse_handler_presence_bump_active_time |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_edus_total | synapse_federation_client_sent_edus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_count_total | synapse_federation_client_sent_pdu_destinations:count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_total | synapse_federation_client_sent_pdu_destinations:total |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handlers_appservice_events_processed_total | synapse_handlers_appservice_events_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_notifier_notified_events_total | synapse_notifier_notified_events |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_processed_total | synapse_http_httppusher_http_pushes_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_failed_total | synapse_http_httppusher_http_pushes_failed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_processed_total | synapse_http_httppusher_badge_updates_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_failed_total | synapse_http_httppusher_badge_updates_failed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
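The new names follow the upstream ``prometheus_client`` convention that a ``Counter`` registered as ``X`` is exported with an ``X_total`` sample. A minimal sketch of that convention (the metric name here is invented, not one of Synapse's):

.. code-block:: python

   from prometheus_client import CollectorRegistry, Counter, generate_latest

   registry = CollectorRegistry()
   events = Counter("demo_events_processed", "Events processed", registry=registry)
   events.inc()

   # The exported sample carries the "_total" suffix, matching the new names above.
   print(generate_latest(registry).decode("ascii"))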
Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------

docs/opentracing.rst

@@ -0,0 +1,100 @@
===========
OpenTracing
===========

Background
----------

OpenTracing is a semi-standard being adopted by a number of distributed tracing
platforms. It is a common API for facilitating vendor-agnostic tracing
instrumentation. That is, we can use the OpenTracing API and select one of a
number of tracer implementations to do the heavy lifting in the background.
Our current selected implementation is Jaeger.

OpenTracing is a tool which gives an insight into the causal relationship of
work done in and between servers. The servers each track events and report them
to a centralised server - in Synapse's case: Jaeger. The basic unit used to
represent events is the span. The span roughly represents a single piece of work
that was done and the time at which it occurred. A span can have child spans,
meaning that the work of the child had to be completed for the parent span to
complete, or it can have follow-on spans which represent work that is undertaken
as a result of the parent but is not depended on by the parent in order to
finish.

Since this is undertaken in a distributed environment a request to another
server, such as an RPC or a simple GET, can be considered a span (a unit of
work) for the local server. This causal link is what OpenTracing aims to
capture and visualise. In order to do this, metadata about the local server's
span, i.e. the 'span context', needs to be included with the request to the
remote.

It is up to the remote server to decide what it does with the spans
it creates. This is called the sampling policy and it can be configured
through Jaeger's settings.

For OpenTracing concepts see
https://opentracing.io/docs/overview/what-is-tracing/.

For more information about Jaeger's implementation see
https://www.jaegertracing.io/docs/

======================
Setting up OpenTracing
======================

To receive OpenTracing spans, start up a Jaeger server. This can be done
using docker like so:

.. code-block:: bash

   docker run -d --name jaeger \
     -p 6831:6831/udp \
     -p 6832:6832/udp \
     -p 5778:5778 \
     -p 16686:16686 \
     -p 14268:14268 \
     jaegertracing/all-in-one:1.13

Latest documentation is probably at
https://www.jaegertracing.io/docs/1.13/getting-started/

Enable OpenTracing in Synapse
-----------------------------

OpenTracing is not enabled by default. It must be enabled in the homeserver
config by uncommenting the config options under ``opentracing`` as shown in
the `sample config <./sample_config.yaml>`_. For example:

.. code-block:: yaml

   opentracing:
     tracer_enabled: true
     homeserver_whitelist:
       - "mytrustedhomeserver.org"
       - "*.myotherhomeservers.com"

Homeserver whitelisting
-----------------------

The homeserver whitelist is configured using regular expressions. A list of regular
expressions can be given and their union will be compared when propagating any
span contexts to another homeserver.

Though it's mostly safe to send and receive span contexts to and from
untrusted users, since span contexts are usually opaque IDs, it can lead to
two problems, namely:

- If the span context is marked as sampled by the sending homeserver the receiver will
  sample it. Therefore two homeservers with wildly different sampling policies
  could incur higher sampling counts than intended.

- Sending servers can attach arbitrary data to spans, known as 'baggage'. For
  safety this has been disabled in Synapse but that doesn't prevent another
  server sending you baggage which will be logged to OpenTracing's logs.
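For illustration only, matching a ``server_name`` against the union of the whitelist patterns might look like this (a sketch, not Synapse's actual implementation):

.. code-block:: python

   import re

   homeserver_whitelist = [r"mytrustedhomeserver\.org", r".*\.myotherhomeservers\.com"]

   def whitelisted(server_name):
       # Union of the patterns: any single full match admits the server.
       return any(re.fullmatch(p, server_name) for p in homeserver_whitelist)

   print(whitelisted("mytrustedhomeserver.org"))  # True
   print(whitelisted("evil.example.com"))         # False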
==================
Configuring Jaeger
==================

Sampling strategies can be set as in this document:
https://www.jaegertracing.io/docs/1.13/sampling/


@@ -11,7 +11,9 @@ a postgres database.

 * If you are using the `matrix.org debian/ubuntu
   packages <../INSTALL.md#matrixorg-packages>`_,
-  the necessary libraries will already be installed.
+  the necessary python library will already be installed, but you will need to
+  ensure the low-level postgres library is installed, which you can do with
+  ``apt install libpq5``.

 * For other pre-built packages, please consult the documentation from the
   relevant package.

@@ -34,9 +36,14 @@ Assuming your PostgreSQL database user is called ``postgres``, create a user

    su - postgres
    createuser --pwprompt synapse_user

-The PostgreSQL database used *must* have the correct encoding set, otherwise it
-would not be able to store UTF8 strings. To create a database with the correct
-encoding use, e.g.::
+Before you can authenticate with the ``synapse_user``, you must create a
+database that it can access. To create a database, first connect to the database
+with your database user::
+
+   su - postgres
+   psql
+
+and then run::

    CREATE DATABASE synapse
     ENCODING 'UTF8'

@@ -46,7 +53,13 @@ encoding use, e.g.::
     OWNER synapse_user;

 This would create an appropriate database named ``synapse`` owned by the
-``synapse_user`` user (which must already exist).
+``synapse_user`` user (which must already have been created as above).
+
+Note that the PostgreSQL database *must* have the correct encoding set (as
+shown above), otherwise it will not be able to store UTF8 strings.
+
+You may need to enable password authentication so ``synapse_user`` can connect
+to the database. See https://www.postgresql.org/docs/11/auth-pg-hba-conf.html.

 Tuning Postgres
 ===============
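To confirm that ``synapse_user`` can actually reach the new database, a quick check with ``psycopg2`` (a sketch; the host and password below are placeholders for your own values):

.. code-block:: python

   import psycopg2

   conn = psycopg2.connect(
       host="localhost",           # placeholder
       user="synapse_user",
       password="secretpassword",  # placeholder
       dbname="synapse",
   )
   with conn.cursor() as cur:
       cur.execute("SHOW SERVER_ENCODING;")
       print(cur.fetchone())  # should print ('UTF8',)
   conn.close()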


@@ -48,6 +48,8 @@ Let's assume that we expect clients to connect to our server at
           proxy_set_header X-Forwarded-For $remote_addr;
       }
   }

+Do not add a `/` after the port in `proxy_pass`, otherwise nginx will
+canonicalise/normalise the URI.
+
 * Caddy::


@@ -797,6 +797,17 @@ uploads_path: "DATADIR/uploads"
 #   renew_at: 1w
 #   renew_email_subject: "Renew your %(app)s account"

+# Time that a user's session remains valid for, after they log in.
+#
+# Note that this is not currently compatible with guest logins.
+#
+# Note also that this is calculated at login time: changes are not applied
+# retrospectively to users who have already logged in.
+#
+# By default, this is infinite.
+#
+#session_lifetime: 24h
+
 # The user must provide all of the below types of 3PID when registering.
 #
 #registrations_require_3pid:

@@ -1406,3 +1417,27 @@ password_config:
 #  module: "my_custom_project.SuperRulesSet"
 #  config:
 #    example_option: 'things'
+
+
+## Opentracing ##
+# These settings enable opentracing, which implements distributed tracing.
+# This allows you to observe the causal chains of events across servers
+# including requests, key lookups etc., across any server running
+# synapse or any other services which support opentracing
+# (specifically those implemented with Jaeger).
+#
+opentracing:
+    # tracing is disabled by default. Uncomment the following line to enable it.
+    #
+    #enabled: true
+
+    # The list of homeservers we wish to send and receive span contexts and span baggage.
+    # See docs/opentracing.rst
+    # This is a list of regexes which are matched against the server_name of the
+    # homeserver.
+    #
+    # By default, it is empty, so no servers are matched.
+    #
+    #homeserver_whitelist:
+    #  - ".*"


@@ -14,6 +14,11 @@
     name = "Bugfixes"
     showcontent = true

+[[tool.towncrier.type]]
+    directory = "docker"
+    name = "Updates to the Docker image"
+    showcontent = true
+
 [[tool.towncrier.type]]
     directory = "doc"
     name = "Improved Documentation"

@@ -39,6 +44,8 @@ exclude = '''
   | \.git          # root of the project
   | \.tox
   | \.venv
+  | \.env
+  | env
   | _build
   | _trial_temp.*
   | build

scripts-dev/lint.sh

@@ -0,0 +1,12 @@
#!/bin/sh
#
# Runs linting scripts over the local Synapse checkout
# isort - sorts import statements
# flake8 - lints and finds mistakes
# black - opinionated code formatter

set -e

isort -y -rc synapse tests scripts-dev scripts
flake8 synapse tests
python3 -m black synapse tests scripts-dev scripts


@@ -35,4 +35,4 @@ try:
 except ImportError:
     pass

-__version__ = "1.1.0"
+__version__ = "1.2.0"


@@ -25,7 +25,13 @@ from twisted.internet import defer

 import synapse.types
 from synapse import event_auth
 from synapse.api.constants import EventTypes, JoinRules, Membership
-from synapse.api.errors import AuthError, Codes, ResourceLimitError
+from synapse.api.errors import (
+    AuthError,
+    Codes,
+    InvalidClientTokenError,
+    MissingClientTokenError,
+    ResourceLimitError,
+)
 from synapse.config.server import is_threepid_reserved
 from synapse.types import UserID
 from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache

@@ -63,7 +69,6 @@ class Auth(object):
         self.clock = hs.get_clock()
         self.store = hs.get_datastore()
         self.state = hs.get_state_handler()
-        self.TOKEN_NOT_FOUND_HTTP_STATUS = 401

         self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
         register_cache("cache", "token_cache", self.token_cache)

@@ -189,18 +194,17 @@ class Auth(object):
         Returns:
             defer.Deferred: resolves to a ``synapse.types.Requester`` object
         Raises:
-            AuthError if no user by that token exists or the token is invalid.
+            InvalidClientCredentialsError if no user by that token exists or the token
+                is invalid.
+            AuthError if access is denied for the user in the access token
         """
-        # Can optionally look elsewhere in the request (e.g. headers)
         try:
             ip_addr = self.hs.get_ip_from_request(request)
             user_agent = request.requestHeaders.getRawHeaders(
                 b"User-Agent", default=[b""]
             )[0].decode("ascii", "surrogateescape")
-            access_token = self.get_access_token_from_request(
-                request, self.TOKEN_NOT_FOUND_HTTP_STATUS
-            )
+            access_token = self.get_access_token_from_request(request)

             user_id, app_service = yield self._get_appservice_user_id(request)
             if user_id:

@@ -264,18 +268,12 @@ class Auth(object):
                 )
             )
         except KeyError:
-            raise AuthError(
-                self.TOKEN_NOT_FOUND_HTTP_STATUS,
-                "Missing access token.",
-                errcode=Codes.MISSING_TOKEN,
-            )
+            raise MissingClientTokenError()

     @defer.inlineCallbacks
     def _get_appservice_user_id(self, request):
         app_service = self.store.get_app_service_by_token(
-            self.get_access_token_from_request(
-                request, self.TOKEN_NOT_FOUND_HTTP_STATUS
-            )
+            self.get_access_token_from_request(request)
         )
         if app_service is None:
             defer.returnValue((None, None))

@@ -313,13 +311,25 @@ class Auth(object):
             `token_id` (int|None): access token id. May be None if guest
             `device_id` (str|None): device corresponding to access token
         Raises:
-            AuthError if no user by that token exists or the token is invalid.
+            InvalidClientCredentialsError if no user by that token exists or the token
+                is invalid.
         """

         if rights == "access":
             # first look in the database
             r = yield self._look_up_user_by_access_token(token)
             if r:
+                valid_until_ms = r["valid_until_ms"]
+                if (
+                    valid_until_ms is not None
+                    and valid_until_ms < self.clock.time_msec()
+                ):
+                    # there was a valid access token, but it has expired.
+                    # soft-logout the user.
+                    raise InvalidClientTokenError(
+                        msg="Access token has expired", soft_logout=True
+                    )
+
                 defer.returnValue(r)

         # otherwise it needs to be a valid macaroon

@@ -331,11 +341,7 @@ class Auth(object):
             if not guest:
                 # non-guest access tokens must be in the database
                 logger.warning("Unrecognised access token - not in store.")
-                raise AuthError(
-                    self.TOKEN_NOT_FOUND_HTTP_STATUS,
-                    "Unrecognised access token.",
-                    errcode=Codes.UNKNOWN_TOKEN,
-                )
+                raise InvalidClientTokenError()

             # Guest access tokens are not stored in the database (there can
             # only be one access token per guest, anyway).

@@ -350,16 +356,10 @@ class Auth(object):
             # guest tokens.
             stored_user = yield self.store.get_user_by_id(user_id)
             if not stored_user:
-                raise AuthError(
-                    self.TOKEN_NOT_FOUND_HTTP_STATUS,
-                    "Unknown user_id %s" % user_id,
-                    errcode=Codes.UNKNOWN_TOKEN,
-                )
+                raise InvalidClientTokenError("Unknown user_id %s" % user_id)

             if not stored_user["is_guest"]:
-                raise AuthError(
-                    self.TOKEN_NOT_FOUND_HTTP_STATUS,
-                    "Guest access token used for regular user",
-                    errcode=Codes.UNKNOWN_TOKEN,
-                )
+                raise InvalidClientTokenError(
+                    "Guest access token used for regular user"
+                )

             ret = {
                 "user": user,

@@ -386,11 +386,7 @@ class Auth(object):
                 ValueError,
             ) as e:
                 logger.warning("Invalid macaroon in auth: %s %s", type(e), e)
-                raise AuthError(
-                    self.TOKEN_NOT_FOUND_HTTP_STATUS,
-                    "Invalid macaroon passed.",
-                    errcode=Codes.UNKNOWN_TOKEN,
-                )
+                raise InvalidClientTokenError("Invalid macaroon passed.")

     def _parse_and_validate_macaroon(self, token, rights="access"):
         """Takes a macaroon and tries to parse and validate it. This is cached

@@ -430,11 +426,7 @@ class Auth(object):
                 macaroon, rights, self.hs.config.expire_access_token, user_id=user_id
             )
         except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
-            raise AuthError(
-                self.TOKEN_NOT_FOUND_HTTP_STATUS,
-                "Invalid macaroon passed.",
-                errcode=Codes.UNKNOWN_TOKEN,
-            )
+            raise InvalidClientTokenError("Invalid macaroon passed.")

         if not has_expiry and rights == "access":
             self.token_cache[token] = (user_id, guest)

@@ -453,17 +445,14 @@ class Auth(object):
             (str) user id

         Raises:
-            AuthError if there is no user_id caveat in the macaroon
+            InvalidClientCredentialsError if there is no user_id caveat in the
+                macaroon
         """
         user_prefix = "user_id = "
         for caveat in macaroon.caveats:
             if caveat.caveat_id.startswith(user_prefix):
                 return caveat.caveat_id[len(user_prefix) :]
-        raise AuthError(
-            self.TOKEN_NOT_FOUND_HTTP_STATUS,
-            "No user caveat in macaroon",
-            errcode=Codes.UNKNOWN_TOKEN,
-        )
+        raise InvalidClientTokenError("No user caveat in macaroon")

     def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
         """

@@ -527,26 +516,18 @@ class Auth(object):
                 "token_id": ret.get("token_id", None),
                 "is_guest": False,
                 "device_id": ret.get("device_id"),
+                "valid_until_ms": ret.get("valid_until_ms"),
             }
             defer.returnValue(user_info)

     def get_appservice_by_req(self, request):
-        try:
-            token = self.get_access_token_from_request(
-                request, self.TOKEN_NOT_FOUND_HTTP_STATUS
-            )
-            service = self.store.get_app_service_by_token(token)
-            if not service:
-                logger.warn("Unrecognised appservice access token.")
-                raise AuthError(
-                    self.TOKEN_NOT_FOUND_HTTP_STATUS,
-                    "Unrecognised access token.",
-                    errcode=Codes.UNKNOWN_TOKEN,
-                )
-            request.authenticated_entity = service.sender
-            return defer.succeed(service)
-        except KeyError:
-            raise AuthError(self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.")
+        token = self.get_access_token_from_request(request)
+        service = self.store.get_app_service_by_token(token)
+        if not service:
+            logger.warn("Unrecognised appservice access token.")
+            raise InvalidClientTokenError()
+        request.authenticated_entity = service.sender
+        return defer.succeed(service)

     def is_server_admin(self, user):
         """ Check if the given user is a local server admin.

@@ -625,21 +606,6 @@ class Auth(object):

         defer.returnValue(auth_ids)

-    def check_redaction(self, room_version, event, auth_events):
-        """Check whether the event sender is allowed to redact the target event.
-
-        Returns:
-            True if the the sender is allowed to redact the target event if the
-            target event was created by them.
-            False if the sender is allowed to redact the target event with no
-            further checks.
-
-        Raises:
-            AuthError if the event sender is definitely not allowed to redact
-            the target event.
-        """
-        return event_auth.check_redaction(room_version, event, auth_events)
-
     @defer.inlineCallbacks
     def check_can_change_room_list(self, room_id, user):
         """Check if the user is allowed to edit the room's entry in the

@@ -692,20 +658,16 @@ class Auth(object):
         return bool(query_params) or bool(auth_headers)

     @staticmethod
-    def get_access_token_from_request(request, token_not_found_http_status=401):
+    def get_access_token_from_request(request):
         """Extracts the access_token from the request.

         Args:
             request: The http request.
-            token_not_found_http_status(int): The HTTP status code to set in the
-                AuthError if the token isn't found. This is used in some of the
-                legacy APIs to change the status code to 403 from the default of
-                401 since some of the old clients depended on auth errors returning
-                403.
         Returns:
             unicode: The access_token
         Raises:
-            AuthError: If there isn't an access_token in the request.
+            MissingClientTokenError: If there isn't a single access_token in the
+                request
         """

         auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")

@@ -714,34 +676,20 @@ class Auth(object):
             # Try the get the access_token from a "Authorization: Bearer"
             # header
             if query_params is not None:
-                raise AuthError(
-                    token_not_found_http_status,
-                    "Mixing Authorization headers and access_token query parameters.",
-                    errcode=Codes.MISSING_TOKEN,
-                )
+                raise MissingClientTokenError(
+                    "Mixing Authorization headers and access_token query parameters."
+                )
             if len(auth_headers) > 1:
-                raise AuthError(
-                    token_not_found_http_status,
-                    "Too many Authorization headers.",
-                    errcode=Codes.MISSING_TOKEN,
-                )
+                raise MissingClientTokenError("Too many Authorization headers.")
             parts = auth_headers[0].split(b" ")
             if parts[0] == b"Bearer" and len(parts) == 2:
                 return parts[1].decode("ascii")
             else:
-                raise AuthError(
-                    token_not_found_http_status,
-                    "Invalid Authorization header.",
-                    errcode=Codes.MISSING_TOKEN,
-                )
+                raise MissingClientTokenError("Invalid Authorization header.")
         else:
             # Try to get the access_token from the query params.
             if not query_params:
-                raise AuthError(
-                    token_not_found_http_status,
-                    "Missing access token.",
-                    errcode=Codes.MISSING_TOKEN,
-                )
+                raise MissingClientTokenError()

             return query_params[0].decode("ascii")
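Stripped of Synapse's clock and storage plumbing, the expiry check added to the token lookup above reduces to the following sketch (simplified; not the actual Synapse code):

.. code-block:: python

   import time

   def check_token_expiry(valid_until_ms):
       now_ms = int(time.time() * 1000)
       # An expired but otherwise valid token triggers a soft logout,
       # rather than the hard "unrecognised token" error.
       if valid_until_ms is not None and valid_until_ms < now_ms:
           raise RuntimeError("Access token has expired (soft logout)")

   check_token_expiry(None)  # tokens with no expiry always pass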


@@ -139,6 +139,22 @@ class ConsentNotGivenError(SynapseError):
         return cs_error(self.msg, self.errcode, consent_uri=self._consent_uri)

+class UserDeactivatedError(SynapseError):
+    """The error returned to the client when the user attempted to access an
+    authenticated endpoint, but the account has been deactivated.
+    """
+
+    def __init__(self, msg):
+        """Constructs a UserDeactivatedError
+
+        Args:
+            msg (str): The human-readable error message
+        """
+        super(UserDeactivatedError, self).__init__(
+            code=http_client.FORBIDDEN, msg=msg, errcode=Codes.UNKNOWN
+        )
+
+
 class RegistrationError(SynapseError):
     """An error raised when a registration event fails."""

@@ -210,7 +226,9 @@ class NotFoundError(SynapseError):

 class AuthError(SynapseError):
-    """An error raised when there was a problem authorising an event."""
+    """An error raised when there was a problem authorising an event, and at various
+    other poorly-defined times.
+    """

     def __init__(self, *args, **kwargs):
         if "errcode" not in kwargs:

@@ -218,6 +236,41 @@ class AuthError(SynapseError):
         super(AuthError, self).__init__(*args, **kwargs)

+class InvalidClientCredentialsError(SynapseError):
+    """An error raised when there was a problem with the authorisation credentials
+    in a client request.
+
+    https://matrix.org/docs/spec/client_server/r0.5.0#using-access-tokens:
+
+    When credentials are required but missing or invalid, the HTTP call will
+    return with a status of 401 and the error code, M_MISSING_TOKEN or
+    M_UNKNOWN_TOKEN respectively.
+    """
+
+    def __init__(self, msg, errcode):
+        super().__init__(code=401, msg=msg, errcode=errcode)
+
+
+class MissingClientTokenError(InvalidClientCredentialsError):
+    """Raised when we couldn't find the access token in a request"""
+
+    def __init__(self, msg="Missing access token"):
+        super().__init__(msg=msg, errcode="M_MISSING_TOKEN")
+
+
+class InvalidClientTokenError(InvalidClientCredentialsError):
+    """Raised when we didn't understand the access token in a request"""
+
+    def __init__(self, msg="Unrecognised access token", soft_logout=False):
+        super().__init__(msg=msg, errcode="M_UNKNOWN_TOKEN")
+        self._soft_logout = soft_logout
+
+    def error_dict(self):
+        d = super().error_dict()
+        d["soft_logout"] = self._soft_logout
+        return d
+
+
 class ResourceLimitError(SynapseError):
     """
     Any error raised when there is a problem with resource usage.
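For reference, the 401 body a client receives when ``InvalidClientTokenError`` is raised with ``soft_logout=True`` should look roughly like this, reconstructed from ``error_dict`` above rather than captured from a live server:

.. code-block:: python

   # Approximate shape of the serialised error response:
   {
       "errcode": "M_UNKNOWN_TOKEN",
       "error": "Access token has expired",
       "soft_logout": True,
   }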


@@ -48,7 +48,7 @@ def register_sighup(func):
    _sighup_callbacks.append(func)

-def start_worker_reactor(appname, config):
+def start_worker_reactor(appname, config, run_command=reactor.run):
    """ Run the reactor in the main process

    Daemonizes if necessary, and then configures some resources, before starting

@@ -57,6 +57,7 @@ def start_worker_reactor(appname, config):
    Args:
        appname (str): application name which will be sent to syslog
        config (synapse.config.Config): config object
+       run_command (Callable[]): callable that actually runs the reactor
    """

    logger = logging.getLogger(config.worker_app)

@@ -69,11 +70,19 @@ def start_worker_reactor(appname, config):
        daemonize=config.worker_daemonize,
        print_pidfile=config.print_pidfile,
        logger=logger,
+       run_command=run_command,
    )

def start_reactor(
-    appname, soft_file_limit, gc_thresholds, pid_file, daemonize, print_pidfile, logger
+    appname,
+    soft_file_limit,
+    gc_thresholds,
+    pid_file,
+    daemonize,
+    print_pidfile,
+    logger,
+    run_command=reactor.run,
):
    """ Run the reactor in the main process

@@ -88,6 +97,7 @@ def start_reactor(
        daemonize (bool): true to run the reactor in a background process
        print_pidfile (bool): whether to print the pid file, if daemonize is True
        logger (logging.Logger): logger instance to pass to Daemonize
+       run_command (Callable[]): callable that actually runs the reactor
    """

    install_dns_limiter(reactor)

@@ -97,7 +107,7 @@ def start_reactor(
        change_resource_limit(soft_file_limit)
        if gc_thresholds:
            gc.set_threshold(*gc_thresholds)
-       reactor.run()
+       run_command()

    # make sure that we run the reactor with the sentinel log context,
    # otherwise other PreserveLoggingContext instances will get confused

@@ -139,8 +149,7 @@ def listen_metrics(bind_addresses, port):
    """
    Start Prometheus metrics server.
    """
-   from synapse.metrics import RegistryProxy
-   from prometheus_client import start_http_server
+   from synapse.metrics import RegistryProxy, start_http_server

    for host in bind_addresses:
        logger.info("Starting metrics listener on %s:%d", host, port)

@@ -243,6 +252,9 @@ def start(hs, listeners=None):
        # Load the certificate from disk.
        refresh_certificate(hs)

+       # Start the tracer
+       synapse.logging.opentracing.init_tracer(hs.config)
+
        # It is now safe to start your Synapse.
        hs.start_listening(listeners)
        hs.get_datastore().start_profiling()
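The run_command hook makes the reactor entry point injectable. A minimal sketch of a one-shot worker built on it (`config` is assumed to be an already-loaded worker config, and `main` is a hypothetical job):

from twisted.internet import defer, task

from synapse.app import _base

@defer.inlineCallbacks
def main(reactor):
    # hypothetical one-shot job; task.react tears the reactor down
    # again once this Deferred fires
    yield defer.succeed(None)

_base.start_worker_reactor(
    "synapse-one-shot", config, run_command=lambda: task.react(main)
)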
synapse/app/admin_cmd.py (new file)

@@ -0,0 +1,264 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os
import sys
import tempfile
from canonicaljson import json
from twisted.internet import defer, task
import synapse
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.admin_cmd")
class AdminCmdSlavedStore(
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
):
pass
class AdminCmdServer(HomeServer):
DATASTORE_CLASS = AdminCmdSlavedStore
def _listen_http(self, listener_config):
pass
def start_listening(self, listeners):
pass
def build_tcp_replication(self):
return AdminCmdReplicationHandler(self)
class AdminCmdReplicationHandler(ReplicationClientHandler):
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
pass
def get_streams_to_replicate(self):
return {}
@defer.inlineCallbacks
def export_data_command(hs, args):
"""Export data for a user.
Args:
hs (HomeServer)
args (argparse.Namespace)
"""
user_id = args.user_id
directory = args.output_directory
res = yield hs.get_handlers().admin_handler.export_user_data(
user_id, FileExfiltrationWriter(user_id, directory=directory)
)
print(res)
class FileExfiltrationWriter(ExfiltrationWriter):
"""An ExfiltrationWriter that writes the users data to a directory.
Returns the directory location on completion.
Note: This writes to disk on the main reactor thread.
Args:
user_id (str): The user whose data is being exfiltrated.
directory (str|None): The directory to write the data to, if None then
will write to a temporary directory.
"""
def __init__(self, user_id, directory=None):
self.user_id = user_id
if directory:
self.base_directory = directory
else:
self.base_directory = tempfile.mkdtemp(
prefix="synapse-exfiltrate__%s__" % (user_id,)
)
os.makedirs(self.base_directory, exist_ok=True)
if list(os.listdir(self.base_directory)):
raise Exception("Directory must be empty")
def write_events(self, room_id, events):
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
events_file = os.path.join(room_directory, "events")
with open(events_file, "a") as f:
for event in events:
print(json.dumps(event.get_pdu_json()), file=f)
def write_state(self, room_id, event_id, state):
room_directory = os.path.join(self.base_directory, "rooms", room_id)
state_directory = os.path.join(room_directory, "state")
os.makedirs(state_directory, exist_ok=True)
event_file = os.path.join(state_directory, event_id)
with open(event_file, "a") as f:
for event in state.values():
print(json.dumps(event.get_pdu_json()), file=f)
def write_invite(self, room_id, event, state):
self.write_events(room_id, [event])
# We write the invite state somewhere else as they aren't full events
# and are only a subset of the state at the event.
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
invite_state = os.path.join(room_directory, "invite_state")
with open(invite_state, "a") as f:
for event in state.values():
print(json.dumps(event), file=f)
def finished(self):
return self.base_directory
def start(config_options):
parser = argparse.ArgumentParser(description="Synapse Admin Command")
HomeServerConfig.add_arguments_to_parser(parser)
subparser = parser.add_subparsers(
title="Admin Commands",
required=True,
dest="command",
metavar="<admin_command>",
help="The admin command to perform.",
)
export_data_parser = subparser.add_parser(
"export-data", help="Export all data for a user"
)
export_data_parser.add_argument("user_id", help="User to extra data from")
export_data_parser.add_argument(
"--output-directory",
action="store",
metavar="DIRECTORY",
required=False,
help="The directory to store the exported data in. Must be empty. Defaults"
" to creating a temp directory.",
)
export_data_parser.set_defaults(func=export_data_command)
try:
config, args = HomeServerConfig.load_config_with_parser(parser, config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
if config.worker_app is not None:
assert config.worker_app == "synapse.app.admin_cmd"
# Update the config with some basic overrides so that we don't have to specify
# a full worker config.
config.worker_app = "synapse.app.admin_cmd"
if (
not config.worker_daemonize
and not config.worker_log_file
and not config.worker_log_config
):
# Since we're meant to be run as a "command" let's not redirect stdio
# unless we've actually set log config.
config.no_redirect_stdio = True
# Explicitly disable background processes
config.update_user_directory = False
config.start_pushers = False
config.send_federation = False
setup_logging(config, use_worker_options=True)
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = AdminCmdServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
# We use task.react as the basic run command as it correctly handles tearing
# down the reactor when the deferreds resolve and setting the return value.
# We also make sure that `_base.start` gets run before we actually run the
# command.
@defer.inlineCallbacks
def run(_reactor):
with LoggingContext("command"):
yield _base.start(ss, [])
yield args.func(ss, args)
_base.start_worker_reactor(
"synapse-admin-cmd", config, run_command=lambda: task.react(run)
)
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])
@@ -27,8 +27,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
-from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -28,8 +28,7 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
-from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -28,8 +28,7 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
-from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -29,8 +29,7 @@ from synapse.config.logger import setup_logging
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
-from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -28,9 +28,8 @@ from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
-from synapse.metrics import RegistryProxy
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -30,8 +30,7 @@ from synapse.http.server import JsonResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
-from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@@ -55,9 +55,8 @@ from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
-from synapse.metrics import RegistryProxy
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.module_api import ModuleApi
from synapse.python_dependencies import check_requirements
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
@@ -28,8 +28,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
-from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@@ -27,8 +27,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
-from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -32,8 +32,7 @@ from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
-from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -29,8 +29,7 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
-from synapse.metrics import RegistryProxy
-from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
+from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@@ -137,12 +137,42 @@ class Config(object):
        return file_stream.read()

    def invoke_all(self, name, *args, **kargs):
+       """Invoke all instance methods with the given name and arguments in the
+       class's MRO.
+
+       Args:
+           name (str): Name of function to invoke
+           *args
+           **kwargs
+
+       Returns:
+           list: The list of the return values from each method called
+       """
        results = []
        for cls in type(self).mro():
            if name in cls.__dict__:
                results.append(getattr(cls, name)(self, *args, **kargs))
        return results

+   @classmethod
+   def invoke_all_static(cls, name, *args, **kargs):
+       """Invoke all static methods with the given name and arguments in the
+       class's MRO.
+
+       Args:
+           name (str): Name of function to invoke
+           *args
+           **kwargs
+
+       Returns:
+           list: The list of the return values from each method called
+       """
+       results = []
+       for c in cls.mro():
+           if name in c.__dict__:
+               results.append(getattr(c, name)(*args, **kargs))
+       return results
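The static variant matters because the per-section add_arguments hooks become @staticmethods in the config diffs below. A toy sketch of the MRO walk, with hypothetical config sections:

import argparse

class FooConfig:
    @staticmethod
    def add_arguments(parser):
        parser.add_argument("--foo")

class BarConfig:
    @staticmethod
    def add_arguments(parser):
        parser.add_argument("--bar")

class ToyConfig(FooConfig, BarConfig):
    @classmethod
    def invoke_all_static(cls, name, *args, **kargs):
        # same walk as above: call `name` once per class that defines it
        return [
            getattr(c, name)(*args, **kargs)
            for c in cls.mro()
            if name in c.__dict__
        ]

parser = argparse.ArgumentParser()
ToyConfig.invoke_all_static("add_arguments", parser)  # registers --foo and --bar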
    def generate_config(
        self,
        config_dir_path,

@@ -202,6 +232,23 @@ class Config(object):
        Returns: Config object.
        """
        config_parser = argparse.ArgumentParser(description=description)
+       cls.add_arguments_to_parser(config_parser)
+       obj, _ = cls.load_config_with_parser(config_parser, argv)
+
+       return obj
+
+   @classmethod
+   def add_arguments_to_parser(cls, config_parser):
+       """Adds all the config flags to an ArgumentParser.
+
+       Doesn't support config-file-generation: used by the worker apps.
+
+       Used for workers where we want to add extra flags/subcommands.
+
+       Args:
+           config_parser (ArgumentParser): parser to add the config flags to
+       """
        config_parser.add_argument(
            "-c",
            "--config-path",

@@ -219,16 +266,34 @@ class Config(object):
            " Defaults to the directory containing the last config file",
        )

+       cls.invoke_all_static("add_arguments", config_parser)
+
+   @classmethod
+   def load_config_with_parser(cls, parser, argv):
+       """Parse the commandline and config files with the given parser
+
+       Doesn't support config-file-generation: used by the worker apps.
+
+       Used for workers where we want to add extra flags/subcommands.
+
+       Args:
+           parser (ArgumentParser)
+           argv (list[str])
+
+       Returns:
+           tuple[HomeServerConfig, argparse.Namespace]: Returns the parsed
+           config object and the parsed argparse.Namespace object from
+           `parser.parse_args(..)`
+       """
        obj = cls()
-       obj.invoke_all("add_arguments", config_parser)
-       config_args = config_parser.parse_args(argv)
+       config_args = parser.parse_args(argv)

        config_files = find_config_files(search_paths=config_args.config_path)
        if not config_files:
-           config_parser.error("Must supply a config file.")
+           parser.error("Must supply a config file.")

        if config_args.keys_directory:
            config_dir_path = config_args.keys_directory

@@ -244,7 +309,7 @@ class Config(object):
        obj.invoke_all("read_arguments", config_args)

-       return obj
+       return obj, config_args

    @classmethod
    def load_or_generate_config(cls, description, argv):

@@ -401,7 +466,7 @@ class Config(object):
            formatter_class=argparse.RawDescriptionHelpFormatter,
        )

-       obj.invoke_all("add_arguments", parser)
+       obj.invoke_all_static("add_arguments", parser)
        args = parser.parse_args(remaining_args)

        config_dict = read_config_files(config_files)
@@ -69,7 +69,8 @@ class DatabaseConfig(Config):
        if database_path is not None:
            self.database_config["args"]["database"] = database_path

-   def add_arguments(self, parser):
+   @staticmethod
+   def add_arguments(parser):
        db_group = parser.add_argument_group("database")
        db_group.add_argument(
            "-d",
@@ -40,6 +40,7 @@ from .spam_checker import SpamCheckerConfig
from .stats import StatsConfig
from .third_party_event_rules import ThirdPartyRulesConfig
from .tls import TlsConfig
+from .tracer import TracerConfig
from .user_directory import UserDirectoryConfig
from .voip import VoipConfig
from .workers import WorkerConfig

@@ -75,5 +76,6 @@ class HomeServerConfig(
    ServerNoticesConfig,
    RoomDirectoryConfig,
    ThirdPartyRulesConfig,
+   TracerConfig,
):
    pass
@@ -103,7 +103,8 @@ class LoggingConfig(Config):
        if args.log_file is not None:
            self.log_file = args.log_file

-   def add_arguments(cls, parser):
+   @staticmethod
+   def add_arguments(parser):
        logging_group = parser.add_argument_group("logging")
        logging_group.add_argument(
            "-v",
@@ -23,7 +23,7 @@ class RateLimitConfig(object):

class FederationRateLimitConfig(object):
    _items_and_default = {
-       "window_size": 10000,
+       "window_size": 1000,
        "sleep_limit": 10,
        "sleep_delay": 500,
        "reject_limit": 50,

@@ -54,7 +54,7 @@ class RatelimitConfig(Config):
        # Load the new-style federation config, if it exists. Otherwise, fall
        # back to the old method.
-       if "federation_rc" in config:
+       if "rc_federation" in config:
            self.rc_federation = FederationRateLimitConfig(**config["rc_federation"])
        else:
            self.rc_federation = FederationRateLimitConfig(
@@ -71,9 +71,8 @@ class RegistrationConfig(Config):
        self.default_identity_server = config.get("default_identity_server")
        self.allow_guest_access = config.get("allow_guest_access", False)

-       self.invite_3pid_guest = self.allow_guest_access and config.get(
-           "invite_3pid_guest", False
-       )
+       if config.get("invite_3pid_guest", False):
+           raise ConfigError("invite_3pid_guest is no longer supported")

        self.auto_join_rooms = config.get("auto_join_rooms", [])
        for room_alias in self.auto_join_rooms:

@@ -85,6 +84,11 @@ class RegistrationConfig(Config):
            "disable_msisdn_registration", False
        )

+       session_lifetime = config.get("session_lifetime")
+       if session_lifetime is not None:
+           session_lifetime = self.parse_duration(session_lifetime)
+       self.session_lifetime = session_lifetime
+
    def generate_config_section(self, generate_secrets=False, **kwargs):
        if generate_secrets:
            registration_shared_secret = 'registration_shared_secret: "%s"' % (

@@ -142,6 +146,17 @@ class RegistrationConfig(Config):
        #   renew_at: 1w
        #   renew_email_subject: "Renew your %%(app)s account"

+       # Time that a user's session remains valid for, after they log in.
+       #
+       # Note that this is not currently compatible with guest logins.
+       #
+       # Note also that this is calculated at login time: changes are not applied
+       # retrospectively to users who have already logged in.
+       #
+       # By default, this is infinite.
+       #
+       #session_lifetime: 24h
+
        # The user must provide all of the below types of 3PID when registering.
        #
        #registrations_require_3pid:

@@ -222,7 +237,8 @@ class RegistrationConfig(Config):
            % locals()
        )

-   def add_arguments(self, parser):
+   @staticmethod
+   def add_arguments(parser):
        reg_group = parser.add_argument_group("registration")
        reg_group.add_argument(
            "--enable-registration",
@@ -136,7 +136,7 @@ class ServerConfig(Config):
        # Whether to enable experimental MSC1849 (aka relations) support
        self.experimental_msc1849_support_enabled = config.get(
-           "experimental_msc1849_support_enabled", False
+           "experimental_msc1849_support_enabled", True
        )

        # Options to control access by tracking MAU

@@ -656,7 +656,8 @@ class ServerConfig(Config):
        if args.print_pidfile is not None:
            self.print_pidfile = args.print_pidfile

-   def add_arguments(self, parser):
+   @staticmethod
+   def add_arguments(parser):
        server_group = parser.add_argument_group("server")
        server_group.add_argument(
            "-D",
synapse/config/tracer.py (new file)

@@ -0,0 +1,59 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
class TracerConfig(Config):
def read_config(self, config, **kwargs):
opentracing_config = config.get("opentracing")
if opentracing_config is None:
opentracing_config = {}
self.opentracer_enabled = opentracing_config.get("enabled", False)
if not self.opentracer_enabled:
return
# The tracer is enabled so sanitize the config
self.opentracer_whitelist = opentracing_config.get("homeserver_whitelist", [])
if not isinstance(self.opentracer_whitelist, list):
raise ConfigError("Tracer homeserver_whitelist config is malformed")
def generate_config_section(cls, **kwargs):
return """\
## Opentracing ##
# These settings enable opentracing, which implements distributed tracing.
# This allows you to observe the causal chains of events across servers
# including requests, key lookups etc., across any server running
# synapse or any other services which support opentracing
# (specifically those implemented with Jaeger).
#
opentracing:
# tracing is disabled by default. Uncomment the following line to enable it.
#
#enabled: true
# The list of homeservers we wish to send and receive span contexts and span baggage.
# See docs/opentracing.rst
# This is a list of regexes which are matched against the server_name of the
# homeserver.
#
# By default, it is empty, so no servers are matched.
#
#homeserver_whitelist:
# - ".*"
"""
@@ -104,6 +104,17 @@ class _EventInternalMetadata(object):
        """
        return getattr(self, "proactively_send", True)

+   def is_redacted(self):
+       """Whether the event has been redacted.
+
+       This is used for efficiently checking whether an event has been
+       marked as redacted without needing to make another database call.
+
+       Returns:
+           bool
+       """
+       return getattr(self, "redacted", False)
+
def _event_dict_property(key):
    # We want to be able to use hasattr with the event dict properties.
@@ -52,10 +52,15 @@ def prune_event(event):
    from . import event_type_from_format_version

-   return event_type_from_format_version(event.format_version)(
+   pruned_event = event_type_from_format_version(event.format_version)(
        pruned_event_dict, event.internal_metadata.get_dict()
    )

+   # Mark the event as redacted
+   pruned_event.internal_metadata.redacted = True
+
+   return pruned_event
+
def prune_event_dict(event_dict):
    """Redacts the event_dict in the same way as `prune_event`, except it

@@ -360,9 +365,12 @@ class EventClientSerializer(object):
        event_id = event.event_id
        serialized_event = serialize_event(event, time_now, **kwargs)

-       # If MSC1849 is enabled then we need to look if thre are any relations
-       # we need to bundle in with the event
-       if self.experimental_msc1849_support_enabled and bundle_aggregations:
+       # If MSC1849 is enabled then we need to look if there are any relations
+       # we need to bundle in with the event.
+       # Do not bundle relations if the event has been redacted
+       if not event.internal_metadata.is_redacted() and (
+           self.experimental_msc1849_support_enabled and bundle_aggregations
+       ):
            annotations = yield self.store.get_aggregation_groups_for_event(event_id)
            references = yield self.store.get_relations_for_event(
                event_id, RelationTypes.REFERENCE, direction="f"

@@ -392,7 +400,11 @@ class EventClientSerializer(object):
            serialized_event["content"].pop("m.relates_to", None)

            r = serialized_event["unsigned"].setdefault("m.relations", {})
-           r[RelationTypes.REPLACE] = {"event_id": edit.event_id}
+           r[RelationTypes.REPLACE] = {
+               "event_id": edit.event_id,
+               "origin_server_ts": edit.origin_server_ts,
+               "sender": edit.sender,
+           }

        defer.returnValue(serialized_event)
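So a serialized event that has been edited now carries the edit's sender and timestamp in unsigned, roughly like this (values illustrative):

serialized_event = {
    "event_id": "$original_event:example.com",
    "unsigned": {
        "m.relations": {
            "m.replace": {
                "event_id": "$edit_event:example.com",
                "origin_server_ts": 1564113808000,
                "sender": "@editor:example.com",
            }
        }
    },
    # ...remaining event keys elided...
}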
@@ -21,8 +21,6 @@ These actions are mostly only used by the :py:mod:`.replication` module.

import logging

-from twisted.internet import defer
-
from synapse.logging.utils import log_function

logger = logging.getLogger(__name__)

@@ -63,33 +61,3 @@ class TransactionActions(object):
        return self.store.set_received_txn_response(
            transaction.transaction_id, origin, code, response
        )
-
-   @defer.inlineCallbacks
-   @log_function
-   def prepare_to_send(self, transaction):
-       """ Persists the `Transaction` we are about to send and works out the
-       correct value for the `prev_ids` key.
-
-       Returns:
-           Deferred
-       """
-       transaction.prev_ids = yield self.store.prep_send_transaction(
-           transaction.transaction_id,
-           transaction.destination,
-           transaction.origin_server_ts,
-       )
-
-   @log_function
-   def delivered(self, transaction, response_code, response_dict):
-       """ Marks the given `Transaction` as having been successfully
-       delivered to the remote homeserver, and what the response was.
-
-       Returns:
-           Deferred
-       """
-       return self.store.delivered_txn(
-           transaction.transaction_id,
-           transaction.destination,
-           response_code,
-           response_dict,
-       )
@@ -63,8 +63,6 @@ class TransactionManager(object):
            len(edus),
        )

-       logger.debug("TX [%s] Persisting transaction...", destination)
-
        transaction = Transaction.create_new(
            origin_server_ts=int(self.clock.time_msec()),
            transaction_id=txn_id,

@@ -76,9 +74,6 @@ class TransactionManager(object):
        self._next_txn_id += 1

-       yield self._transaction_actions.prepare_to_send(transaction)
-
-       logger.debug("TX [%s] Persisted transaction", destination)
        logger.info(
            "TX [%s] {%s} Sending transaction [%s]," " (PDUs: %d, EDUs: %d)",
            destination,

@@ -118,10 +113,6 @@ class TransactionManager(object):
        logger.info("TX [%s] {%s} got %d response", destination, txn_id, code)

-       yield self._transaction_actions.delivered(transaction, code, response)
-
-       logger.debug("TX [%s] {%s} Marked as delivered", destination, txn_id)
-
        if code == 200:
            for e_id, r in response.get("pdus", {}).items():
                if "error" in r:
(file diff suppressed because it is too large)
@@ -17,6 +17,10 @@ import logging

from twisted.internet import defer

+from synapse.api.constants import Membership
+from synapse.types import RoomStreamToken
+from synapse.visibility import filter_events_for_client
+
from ._base import BaseHandler

logger = logging.getLogger(__name__)

@@ -89,3 +93,182 @@ class AdminHandler(BaseHandler):
        ret = yield self.store.search_users(term)
        defer.returnValue(ret)
@defer.inlineCallbacks
def export_user_data(self, user_id, writer):
"""Write all data we have on the user to the given writer.
Args:
user_id (str)
writer (ExfiltrationWriter)
Returns:
defer.Deferred: Resolves when all data for a user has been written.
The returned value is that returned by `writer.finished()`.
"""
# Get all rooms the user is in or has been in
rooms = yield self.store.get_rooms_for_user_where_membership_is(
user_id,
membership_list=(
Membership.JOIN,
Membership.LEAVE,
Membership.BAN,
Membership.INVITE,
),
)
# We only try and fetch events for rooms the user has been in. If
# they've been e.g. invited to a room without joining then we handle
# those separately.
rooms_user_has_been_in = yield self.store.get_rooms_user_has_been_in(user_id)
for index, room in enumerate(rooms):
room_id = room.room_id
logger.info(
"[%s] Handling room %s, %d/%d", user_id, room_id, index + 1, len(rooms)
)
forgotten = yield self.store.did_forget(user_id, room_id)
if forgotten:
logger.info("[%s] User forgot room %d, ignoring", user_id, room_id)
continue
if room_id not in rooms_user_has_been_in:
# If we haven't been in the rooms then the filtering code below
# won't return anything, so we need to handle these cases
# explicitly.
if room.membership == Membership.INVITE:
event_id = room.event_id
invite = yield self.store.get_event(event_id, allow_none=True)
if invite:
invited_state = invite.unsigned["invite_room_state"]
writer.write_invite(room_id, invite, invited_state)
continue
# We only want to bother fetching events up to the last time they
# were joined. We estimate that point by looking at the
# stream_ordering of the last membership if it wasn't a join.
if room.membership == Membership.JOIN:
stream_ordering = yield self.store.get_room_max_stream_ordering()
else:
stream_ordering = room.stream_ordering
from_key = str(RoomStreamToken(0, 0))
to_key = str(RoomStreamToken(None, stream_ordering))
written_events = set() # Events that we've processed in this room
# We need to track gaps in the events stream so that we can then
# write out the state at those events. We do this by keeping track
# of events whose prev events we haven't seen.
# Map from event ID to prev events that haven't been processed,
# dict[str, set[str]].
event_to_unseen_prevs = {}
# The reverse mapping to above, i.e. map from unseen event to events
# that have the unseen event in their prev_events, i.e. the unseen
# events "children". dict[str, set[str]]
unseen_to_child_events = {}
# We fetch events in the room the user could see by fetching *all*
# events that we have and then filtering, this isn't the most
# efficient method perhaps but it does guarantee we get everything.
while True:
events, _ = yield self.store.paginate_room_events(
room_id, from_key, to_key, limit=100, direction="f"
)
if not events:
break
from_key = events[-1].internal_metadata.after
events = yield filter_events_for_client(self.store, user_id, events)
writer.write_events(room_id, events)
# Update the extremity tracking dicts
for event in events:
# Check if we have any prev events that haven't been
# processed yet, and add those to the appropriate dicts.
unseen_events = set(event.prev_event_ids()) - written_events
if unseen_events:
event_to_unseen_prevs[event.event_id] = unseen_events
for unseen in unseen_events:
unseen_to_child_events.setdefault(unseen, set()).add(
event.event_id
)
# Now check if this event is an unseen prev event, if so
# then we remove this event from the appropriate dicts.
for child_id in unseen_to_child_events.pop(event.event_id, []):
event_to_unseen_prevs[child_id].discard(event.event_id)
written_events.add(event.event_id)
logger.info(
"Written %d events in room %s", len(written_events), room_id
)
# Extremities are the events who have at least one unseen prev event.
extremities = (
event_id
for event_id, unseen_prevs in event_to_unseen_prevs.items()
if unseen_prevs
)
for event_id in extremities:
if not event_to_unseen_prevs[event_id]:
continue
state = yield self.store.get_state_for_event(event_id)
writer.write_state(room_id, event_id, state)
defer.returnValue(writer.finished())
class ExfiltrationWriter(object):
"""Interface used to specify how to write exported data.
"""
def write_events(self, room_id, events):
"""Write a batch of events for a room.
Args:
room_id (str)
events (list[FrozenEvent])
"""
pass
def write_state(self, room_id, event_id, state):
"""Write the state at the given event in the room.
This only gets called for backward extremities rather than for each
event.
Args:
room_id (str)
event_id (str)
state (dict[tuple[str, str], FrozenEvent])
"""
pass
def write_invite(self, room_id, event, state):
"""Write an invite for the room, with associated invite state.
Args:
room_id (str)
event (FrozenEvent)
state (dict[tuple[str, str], dict]): A subset of the state at the
invite, with a subset of the event keys (type, state_key,
content and sender)
"""
def finished(self):
"""Called when all data has succesfully been exported and written.
This functions return value is passed to the caller of
`export_user_data`.
"""
pass
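A minimal in-memory implementation of the interface, as a sketch of how a consumer other than the file-based writer above might plug in:

class InMemoryExfiltrationWriter(ExfiltrationWriter):
    def __init__(self):
        self.events = {}   # room_id -> list of event JSON dicts
        self.state = {}    # (room_id, event_id) -> state map
        self.invites = []  # (room_id, event, state) tuples

    def write_events(self, room_id, events):
        self.events.setdefault(room_id, []).extend(
            e.get_pdu_json() for e in events
        )

    def write_state(self, room_id, event_id, state):
        self.state[(room_id, event_id)] = state

    def write_invite(self, room_id, event, state):
        self.invites.append((room_id, event, state))

    def finished(self):
        # becomes the resolved value of export_user_data(...)
        return self.events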
@@ -15,6 +15,7 @@
# limitations under the License.

import logging
+import time
import unicodedata

import attr

@@ -34,6 +35,7 @@ from synapse.api.errors import (
    LoginError,
    StoreError,
    SynapseError,
+   UserDeactivatedError,
)
from synapse.api.ratelimiting import Ratelimiter
from synapse.logging.context import defer_to_thread

@@ -558,7 +560,7 @@ class AuthHandler(BaseHandler):
        return self.sessions[session_id]

    @defer.inlineCallbacks
-   def get_access_token_for_user_id(self, user_id, device_id=None):
+   def get_access_token_for_user_id(self, user_id, device_id, valid_until_ms):
        """
        Creates a new access token for the user with the given user ID.

@@ -572,15 +574,27 @@ class AuthHandler(BaseHandler):
            device_id (str|None): the device ID to associate with the tokens.
                None to leave the tokens unassociated with a device (deprecated:
                we should always have a device ID)
+           valid_until_ms (int|None): when the token is valid until. None for
+               no expiry.
        Returns:
            The access token for the user's session.
        Raises:
            StoreError if there was a problem storing the token.
        """
-       logger.info("Logging in user %s on device %s", user_id, device_id)
-       access_token = yield self.issue_access_token(user_id, device_id)
+       fmt_expiry = ""
+       if valid_until_ms is not None:
+           fmt_expiry = time.strftime(
+               " until %Y-%m-%d %H:%M:%S", time.localtime(valid_until_ms / 1000.0)
+           )
+       logger.info("Logging in user %s on device %s%s", user_id, device_id, fmt_expiry)
+
        yield self.auth.check_auth_blocking(user_id)

+       access_token = self.macaroon_gen.generate_access_token(user_id)
+       yield self.store.add_access_token_to_user(
+           user_id, access_token, device_id, valid_until_ms
+       )
+
        # the device *should* have been registered before we got here; however,
        # it's possible we raced against a DELETE operation. The thing we
        # really don't want is active access_tokens without a record of the

@@ -610,6 +624,7 @@ class AuthHandler(BaseHandler):
        Raises:
            LimitExceededError if the ratelimiter's login requests count for this
                user is too high to proceed.
+           UserDeactivatedError if a user is found but is deactivated.
        """
        self.ratelimit_login_per_account(user_id)
        res = yield self._find_user_id_and_pwd_hash(user_id)

@@ -825,18 +840,19 @@ class AuthHandler(BaseHandler):
        if not lookupres:
            defer.returnValue(None)
        (user_id, password_hash) = lookupres
+
+       # If the password hash is None, the account has likely been deactivated
+       if not password_hash:
+           deactivated = yield self.store.get_user_deactivated_status(user_id)
+           if deactivated:
+               raise UserDeactivatedError("This account has been deactivated")
+
        result = yield self.validate_hash(password, password_hash)
        if not result:
            logger.warn("Failed password login for user %s", user_id)
            defer.returnValue(None)
        defer.returnValue(user_id)

-   @defer.inlineCallbacks
-   def issue_access_token(self, user_id, device_id=None):
-       access_token = self.macaroon_gen.generate_access_token(user_id)
-       yield self.store.add_access_token_to_user(user_id, access_token, device_id)
-       defer.returnValue(access_token)
-
    @defer.inlineCallbacks
    def validate_short_term_login_token_and_get_user_id(self, login_token):
        auth_api = self.hs.get_auth()
from twisted.internet import defer from twisted.internet import defer
from synapse.api.errors import CodeMessageException, FederationDeniedError, SynapseError from synapse.api.errors import CodeMessageException, SynapseError
from synapse.logging.context import make_deferred_yieldable, run_in_background from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.types import UserID, get_domain_from_id from synapse.types import UserID, get_domain_from_id
from synapse.util.retryutils import NotRetryingDestination from synapse.util.retryutils import NotRetryingDestination
@ -350,9 +350,6 @@ def _exception_to_failure(e):
if isinstance(e, NotRetryingDestination): if isinstance(e, NotRetryingDestination):
return {"status": 503, "message": "Not ready for retry"} return {"status": 503, "message": "Not ready for retry"}
if isinstance(e, FederationDeniedError):
return {"status": 403, "message": "Federation Denied"}
# include ConnectionRefused and other errors # include ConnectionRefused and other errors
# #
# Note that some Exceptions (notably twisted's ResponseFailed etc) don't # Note that some Exceptions (notably twisted's ResponseFailed etc) don't
@@ -118,7 +118,7 @@ class IdentityHandler(BaseHandler):
            raise SynapseError(400, "No client_secret in creds")

        try:
-           data = yield self.http_client.post_urlencoded_get_json(
+           data = yield self.http_client.post_json_get_json(
                "https://%s%s" % (id_server, "/_matrix/identity/api/v1/3pid/bind"),
                {"sid": creds["sid"], "client_secret": client_secret, "mxid": mxid},
            )
@@ -23,6 +23,7 @@ from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from twisted.internet.defer import succeed

+from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, RelationTypes
from synapse.api.errors import (
    AuthError,

@@ -784,6 +785,20 @@ class EventCreationHandler(object):
                event.signatures.update(returned_invite.signatures)

        if event.type == EventTypes.Redaction:
+           original_event = yield self.store.get_event(
+               event.redacts,
+               check_redacted=False,
+               get_prev_content=False,
+               allow_rejected=False,
+               allow_none=True,
+               check_room_id=event.room_id,
+           )
+
+           # we can make some additional checks now if we have the original event.
+           if original_event:
+               if original_event.type == EventTypes.Create:
+                   raise AuthError(403, "Redacting create events is not permitted")
+
            prev_state_ids = yield context.get_prev_state_ids(self.store)
            auth_events_ids = yield self.auth.compute_auth_events(
                event, prev_state_ids, for_verification=True

@@ -791,18 +806,18 @@ class EventCreationHandler(object):
            auth_events = yield self.store.get_events(auth_events_ids)
            auth_events = {(e.type, e.state_key): e for e in auth_events.values()}
            room_version = yield self.store.get_room_version(event.room_id)
-           if self.auth.check_redaction(room_version, event, auth_events=auth_events):
-               original_event = yield self.store.get_event(
-                   event.redacts,
-                   check_redacted=False,
-                   get_prev_content=False,
-                   allow_rejected=False,
-                   allow_none=False,
-               )
+
+           if event_auth.check_redaction(room_version, event, auth_events=auth_events):
+               # this user doesn't have 'redact' rights, so we need to do some more
+               # checks on the original event. Let's start by checking the original
+               # event exists.
+               if not original_event:
+                   raise NotFoundError("Could not find event %s" % (event.redacts,))

                if event.user_id != original_event.user_id:
                    raise AuthError(403, "You don't have permission to redact events")

-               # We've already checked.
+               # all the checks are done.
                event.internal_metadata.recheck_redaction = False

        if event.type == EventTypes.Create:
@@ -303,6 +303,10 @@ class BaseProfileHandler(BaseHandler):
        if not self.hs.config.require_auth_for_profile_requests or not requester:
            return

+       # Always allow the user to query their own profile.
+       if target_user.to_string() == requester.to_string():
+           return
+
        try:
            requester_rooms = yield self.store.get_rooms_for_user(requester.to_string())
            target_user_rooms = yield self.store.get_rooms_for_user(
@ -84,6 +84,8 @@ class RegistrationHandler(BaseHandler):
self.device_handler = hs.get_device_handler() self.device_handler = hs.get_device_handler()
self.pusher_pool = hs.get_pusherpool() self.pusher_pool = hs.get_pusherpool()
self.session_lifetime = hs.config.session_lifetime
@defer.inlineCallbacks @defer.inlineCallbacks
def check_username(self, localpart, guest_access_token=None, assigned_user_id=None): def check_username(self, localpart, guest_access_token=None, assigned_user_id=None):
if types.contains_invalid_mxid_characters(localpart): if types.contains_invalid_mxid_characters(localpart):
@ -138,11 +140,10 @@ class RegistrationHandler(BaseHandler):
) )
@defer.inlineCallbacks @defer.inlineCallbacks
def register( def register_user(
self, self,
localpart=None, localpart=None,
password=None, password=None,
generate_token=True,
guest_access_token=None, guest_access_token=None,
make_guest=False, make_guest=False,
admin=False, admin=False,
@ -160,11 +161,6 @@ class RegistrationHandler(BaseHandler):
password (unicode) : The password to assign to this user so they can password (unicode) : The password to assign to this user so they can
login again. This can be None which means they cannot login again login again. This can be None which means they cannot login again
via a password (e.g. the user is an application service user). via a password (e.g. the user is an application service user).
generate_token (bool): Whether a new access token should be
generated. Having this be True should be considered deprecated,
since it offers no means of associating a device_id with the
access_token. Instead you should call auth_handler.issue_access_token
after registration.
user_type (str|None): type of user. One of the values from user_type (str|None): type of user. One of the values from
api.constants.UserTypes, or None for a normal user. api.constants.UserTypes, or None for a normal user.
default_display_name (unicode|None): if set, the new user's displayname default_display_name (unicode|None): if set, the new user's displayname
@ -172,7 +168,7 @@ class RegistrationHandler(BaseHandler):
address (str|None): the IP address used to perform the registration. address (str|None): the IP address used to perform the registration.
bind_emails (List[str]): list of emails to bind to this account. bind_emails (List[str]): list of emails to bind to this account.
Returns: Returns:
A tuple of (user_id, access_token). Deferred[str]: user_id
Raises: Raises:
RegistrationError if there was a problem registering. RegistrationError if there was a problem registering.
""" """
@@ -206,12 +202,8 @@ class RegistrationHandler(BaseHandler):
elif default_display_name is None:
default_display_name = localpart
token = None
if generate_token:
token = self.macaroon_gen.generate_access_token(user_id)
yield self.register_with_store(
user_id=user_id,
token=token,
password_hash=password_hash,
was_guest=was_guest,
make_guest=make_guest,
@@ -230,21 +222,17 @@ class RegistrationHandler(BaseHandler):
else:
# autogen a sequential user ID
attempts = 0
token = None
user = None
while not user:
localpart = yield self._generate_user_id(attempts > 0)
user = UserID(localpart, self.hs.hostname)
user_id = user.to_string()
yield self.check_user_id_not_appservice_exclusive(user_id)
if generate_token:
token = self.macaroon_gen.generate_access_token(user_id)
if default_display_name is None:
default_display_name = localpart
try:
yield self.register_with_store(
user_id=user_id,
token=token,
password_hash=password_hash,
make_guest=make_guest,
create_profile_with_displayname=default_display_name,
@@ -254,10 +242,15 @@ class RegistrationHandler(BaseHandler):
# if user id is taken, just generate another
user = None
user_id = None
token = None
attempts += 1
if not self.hs.config.user_consent_at_registration:
yield self._auto_join_rooms(user_id)
else:
logger.info(
"Skipping auto-join for %s because consent is required at registration",
user_id,
)
# Bind any specified emails to this account
current_time = self.hs.get_clock().time_msec()
@@ -272,7 +265,7 @@ class RegistrationHandler(BaseHandler):
# Bind email to new account
yield self._register_email_threepid(user_id, threepid_dict, None, False)
defer.returnValue((user_id, token))
defer.returnValue(user_id)
@defer.inlineCallbacks
def _auto_join_rooms(self, user_id):
@@ -298,6 +291,7 @@ class RegistrationHandler(BaseHandler):
count = yield self.store.count_all_users()
should_auto_create_rooms = count == 1
for r in self.hs.config.auto_join_rooms:
logger.info("Auto-joining %s to %s", user_id, r)
try:
if should_auto_create_rooms:
room_alias = RoomAlias.from_string(r)
@@ -505,87 +499,6 @@ class RegistrationHandler(BaseHandler):
)
defer.returnValue(data)
@defer.inlineCallbacks
def get_or_create_user(self, requester, localpart, displayname, password_hash=None):
"""Creates a new user if the user does not exist,
else revokes all previous access tokens and generates a new one.
Args:
localpart : The local part of the user ID to register. If None,
one will be randomly generated.
Returns:
A tuple of (user_id, access_token).
Raises:
RegistrationError if there was a problem registering.
NB this is only used in tests. TODO: move it to the test package!
"""
if localpart is None:
raise SynapseError(400, "Request must include user id")
yield self.auth.check_auth_blocking()
need_register = True
try:
yield self.check_username(localpart)
except SynapseError as e:
if e.errcode == Codes.USER_IN_USE:
need_register = False
else:
raise
user = UserID(localpart, self.hs.hostname)
user_id = user.to_string()
token = self.macaroon_gen.generate_access_token(user_id)
if need_register:
yield self.register_with_store(
user_id=user_id,
token=token,
password_hash=password_hash,
create_profile_with_displayname=user.localpart,
)
else:
yield self._auth_handler.delete_access_tokens_for_user(user_id)
yield self.store.add_access_token_to_user(user_id=user_id, token=token)
if displayname is not None:
logger.info("setting user display name: %s -> %s", user_id, displayname)
yield self.profile_handler.set_displayname(
user, requester, displayname, by_admin=True
)
defer.returnValue((user_id, token))
@defer.inlineCallbacks
def get_or_register_3pid_guest(self, medium, address, inviter_user_id):
"""Get a guest access token for a 3PID, creating a guest account if
one doesn't already exist.
Args:
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
Returns:
Deferred[(str, str)]: A 2-tuple of `(user_id, access_token)` of the
3PID guest account.
"""
access_token = yield self.store.get_3pid_guest_access_token(medium, address)
if access_token:
user_info = yield self.auth.get_user_by_access_token(access_token)
defer.returnValue((user_info["user"].to_string(), access_token))
user_id, access_token = yield self.register(
generate_token=True, make_guest=True
)
access_token = yield self.store.save_or_get_3pid_guest_access_token(
medium, address, access_token, inviter_user_id
)
defer.returnValue((user_id, access_token))
@defer.inlineCallbacks
def _join_user_to_room(self, requester, room_identifier):
room_id = None
@@ -615,7 +528,6 @@ class RegistrationHandler(BaseHandler):
def register_with_store(
self,
user_id,
token=None,
password_hash=None,
was_guest=False,
make_guest=False,
@@ -629,9 +541,6 @@ class RegistrationHandler(BaseHandler):
Args:
user_id (str): The desired user ID to register.
token (str): The desired access token to use for this user. If this
is not None, the given access token is associated with the user
id.
password_hash (str|None): Optional. The password hash for this user.
was_guest (bool): Optional. Whether this is a guest account being
upgraded to a non-guest account.
@@ -667,7 +576,6 @@ class RegistrationHandler(BaseHandler):
if self.hs.config.worker_app:
return self._register_client(
user_id=user_id,
token=token,
password_hash=password_hash,
was_guest=was_guest,
make_guest=make_guest,
@@ -678,9 +586,8 @@ class RegistrationHandler(BaseHandler):
address=address,
)
else:
return self.store.register(
return self.store.register_user(
user_id=user_id,
token=token,
password_hash=password_hash,
was_guest=was_guest,
make_guest=make_guest,
@@ -694,6 +601,8 @@ class RegistrationHandler(BaseHandler):
def register_device(self, user_id, device_id, initial_display_name, is_guest=False):
"""Register a device for a user and generate an access token.
The access token will be limited by the homeserver's session_lifetime config.
Args:
user_id (str): full canonical @user:id
device_id (str|None): The device ID to check, or None to generate
@@ -714,20 +623,29 @@ class RegistrationHandler(BaseHandler):
is_guest=is_guest,
)
defer.returnValue((r["device_id"], r["access_token"]))
else:
device_id = yield self.device_handler.check_device_registered(
user_id, device_id, initial_display_name
)
if is_guest:
access_token = self.macaroon_gen.generate_access_token(
user_id, ["guest = true"]
)
else:
access_token = yield self._auth_handler.get_access_token_for_user_id(
user_id, device_id=device_id
)
defer.returnValue((device_id, access_token))
valid_until_ms = None
if self.session_lifetime is not None:
if is_guest:
raise Exception(
"session_lifetime is not currently implemented for guest access"
)
valid_until_ms = self.clock.time_msec() + self.session_lifetime
device_id = yield self.device_handler.check_device_registered(
user_id, device_id, initial_display_name
)
if is_guest:
assert valid_until_ms is None
access_token = self.macaroon_gen.generate_access_token(
user_id, ["guest = true"]
)
else:
access_token = yield self._auth_handler.get_access_token_for_user_id(
user_id, device_id=device_id, valid_until_ms=valid_until_ms
)
defer.returnValue((device_id, access_token))
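Together with the new register_user above, this splits account creation from credential issuance. A rough sketch of the resulting two-step flow for a caller (the localpart and display name are invented for illustration):

    user_id = yield self.registration_handler.register_user(localpart="alice")
    device_id, access_token = yield self.registration_handler.register_device(
        user_id, device_id=None, initial_display_name="My client"
    )

If session_lifetime is configured, the token from the second step carries the computed valid_until_ms expiry; guests are rejected for now, as the code above shows.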
@defer.inlineCallbacks
def post_registration_actions(

View File

@@ -28,7 +28,7 @@ from twisted.internet import defer
from synapse import types
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.api.errors import AuthError, Codes, HttpResponseException, SynapseError
from synapse.types import RoomID, UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_joined_room, user_left_room
@@ -122,24 +122,6 @@
"""
raise NotImplementedError()
@abc.abstractmethod
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Get a guest access token for a 3PID, creating a guest account if
one doesn't already exist.
Args:
requester (Requester)
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
Returns:
Deferred[(str, str)]: A 2-tuple of `(user_id, access_token)` of the
3PID guest account.
"""
raise NotImplementedError()
@abc.abstractmethod
def _user_joined_room(self, target, room_id):
"""Notifies distributor on master process that the user has joined the
@@ -894,24 +876,23 @@
"sender_avatar_url": inviter_avatar_url,
}
if self.config.invite_3pid_guest:
guest_user_id, guest_access_token = yield self.get_or_register_3pid_guest(
requester=requester,
medium=medium,
address=address,
inviter_user_id=inviter_user_id,
)
invite_config.update(
{
"guest_access_token": guest_access_token,
"guest_user_id": guest_user_id,
}
)
data = yield self.simple_http_client.post_urlencoded_get_json(
is_url, invite_config
)
try:
data = yield self.simple_http_client.post_json_get_json(
is_url, invite_config
)
except HttpResponseException as e:
# Some identity servers may only support application/x-www-form-urlencoded
# types. This is especially true with old instances of Sydent, see
# https://github.com/matrix-org/sydent/pull/170
logger.info(
"Failed to POST %s with JSON, falling back to urlencoded form: %s",
is_url,
e,
)
data = yield self.simple_http_client.post_urlencoded_get_json(
is_url, invite_config
)
# TODO: Check for success
token = data["token"]
public_keys = data.get("public_keys", [])
@@ -1091,12 +1072,6 @@ class RoomMemberMasterHandler(RoomMemberHandler):
yield self.store.locally_reject_invite(target.to_string(), room_id)
defer.returnValue({})
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Implements RoomMemberHandler.get_or_register_3pid_guest
"""
rg = self.registration_handler
return rg.get_or_register_3pid_guest(medium, address, inviter_user_id)
def _user_joined_room(self, target, room_id):
"""Implements RoomMemberHandler._user_joined_room
"""

View File

@@ -20,7 +20,6 @@ from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.handlers.room_member import RoomMemberHandler
from synapse.replication.http.membership import (
ReplicationRegister3PIDGuestRestServlet as Repl3PID,
ReplicationRemoteJoinRestServlet as ReplRemoteJoin,
ReplicationRemoteRejectInviteRestServlet as ReplRejectInvite,
ReplicationUserJoinedLeftRoomRestServlet as ReplJoinedLeft,
@@ -33,7 +32,6 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
def __init__(self, hs):
super(RoomMemberWorkerHandler, self).__init__(hs)
self._get_register_3pid_client = Repl3PID.make_client(hs)
self._remote_join_client = ReplRemoteJoin.make_client(hs)
self._remote_reject_client = ReplRejectInvite.make_client(hs)
self._notify_change_client = ReplJoinedLeft.make_client(hs)
@@ -80,13 +78,3 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
return self._notify_change_client(
user_id=target.to_string(), room_id=room_id, change="left"
)
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Implements RoomMemberHandler.get_or_register_3pid_guest
"""
return self._get_register_3pid_client(
requester=requester,
medium=medium,
address=address,
inviter_user_id=inviter_user_id,
)

View File

@@ -36,6 +36,7 @@ from twisted.internet.task import _EPSILON, Cooperator
from twisted.web._newclient import ResponseDone
from twisted.web.http_headers import Headers
import synapse.logging.opentracing as opentracing
import synapse.metrics
import synapse.util.retryutils
from synapse.api.errors import (
@@ -339,9 +340,25 @@ class MatrixFederationHttpClient(object):
else:
query_bytes = b""
headers_dict = {b"User-Agent": [self.version_string_bytes]}
# Retrieve the current span
scope = opentracing.start_active_span(
"outgoing-federation-request",
tags={
opentracing.tags.SPAN_KIND: opentracing.tags.SPAN_KIND_RPC_CLIENT,
opentracing.tags.PEER_ADDRESS: request.destination,
opentracing.tags.HTTP_METHOD: request.method,
opentracing.tags.HTTP_URL: request.path,
},
finish_on_close=True,
)
# Inject the span into the headers
headers_dict = {}
opentracing.inject_active_span_byte_dict(headers_dict, request.destination)
headers_dict[b"User-Agent"] = [self.version_string_bytes]
with limiter:
with limiter, scope:
# XXX: Would be much nicer to retry only at the transaction-layer
# (once we have reliable transactions in place)
if long_retries:
@@ -419,6 +436,10 @@ class MatrixFederationHttpClient(object):
response.phrase.decode("ascii", errors="replace"),
)
opentracing.set_tag(
opentracing.tags.HTTP_STATUS_CODE, response.code
)
if 200 <= response.code < 300:
pass
else:
@@ -499,8 +520,7 @@ class MatrixFederationHttpClient(object):
_flatten_response_never_received(e),
)
raise
defer.returnValue(response)
def build_auth_headers(
self, destination, method, url_bytes, content=None, destination_is=None

View File

@@ -245,7 +245,9 @@ class JsonResource(HttpServer, resource.Resource):
isLeaf = True
_PathEntry = collections.namedtuple("_PathEntry", ["pattern", "callback"])
_PathEntry = collections.namedtuple(
"_PathEntry", ["pattern", "callback", "servlet_classname"]
)
def __init__(self, hs, canonical_json=True):
resource.Resource.__init__(self)
@@ -255,12 +257,28 @@ class JsonResource(HttpServer, resource.Resource):
self.path_regexs = {}
self.hs = hs
def register_paths(self, method, path_patterns, callback):
def register_paths(self, method, path_patterns, callback, servlet_classname):
"""
Registers a request handler against a regular expression. Later request URLs are
checked against these regular expressions in order to identify an appropriate
handler for that request.
Args:
method (str): GET, POST etc
path_patterns (Iterable[str]): A list of regular expressions to which
the request URLs are compared.
callback (function): The handler for the request. Usually a Servlet
servlet_classname (str): The name of the handler to be used in prometheus
and opentracing logs.
"""
method = method.encode("utf-8")  # method is bytes on py3
for path_pattern in path_patterns:
logger.debug("Registering for %s %s", method, path_pattern.pattern)
self.path_regexs.setdefault(method, []).append(
self._PathEntry(path_pattern, callback)
self._PathEntry(path_pattern, callback, servlet_classname)
)
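Callers must now supply the metrics label explicitly. A minimal sketch of registering a handler under the new signature (the pattern, handler and servlet name here are hypothetical, not part of this changeset):

    pattern = re.compile("^/_matrix/client/versions$")
    json_resource.register_paths(
        "GET", [pattern], on_get_versions, "VersionsRestServlet"
    )

Here json_resource is a JsonResource and on_get_versions its handler; the final argument is the name that will appear in prometheus metrics and opentracing logs.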
def render(self, request):
@@ -275,13 +293,9 @@ class JsonResource(HttpServer, resource.Resource):
This checks if anyone has registered a callback for that method and
path.
"""
callback, group_dict = self._get_handler_for_request(request)
callback, servlet_classname, group_dict = self._get_handler_for_request(request)
servlet_instance = getattr(callback, "__self__", None)
if servlet_instance is not None:
servlet_classname = servlet_instance.__class__.__name__
else:
servlet_classname = "%r" % callback
# Make sure we have a name for this handler in prometheus.
request.request_metrics.name = servlet_classname
# Now trigger the callback. If it returns a response, we send it
@@ -311,7 +325,8 @@ class JsonResource(HttpServer, resource.Resource):
request (twisted.web.http.Request):
Returns:
Tuple[Callable, dict[unicode, unicode]]: callback method, and the
Tuple[Callable, str, dict[unicode, unicode]]: callback method, the
label to use for that method in prometheus metrics, and the
dict mapping keys to path components as specified in the
handler's path match regexp.
@@ -320,7 +335,7 @@ class JsonResource(HttpServer, resource.Resource):
None, or a tuple of (http code, response body).
"""
if request.method == b"OPTIONS":
return _options_handler, {}
return _options_handler, "options_request_handler", {}
# Loop through all the registered callbacks to check if the method
# and path regex match
@@ -328,10 +343,10 @@
m = path_entry.pattern.match(request.path.decode("ascii"))
if m:
# We found a match!
return path_entry.callback, m.groupdict()
return path_entry.callback, path_entry.servlet_classname, m.groupdict()
# Huh. No one wanted to handle that? Fiiiiiine. Send 400.
return _unrecognised_request_handler, {}
return _unrecognised_request_handler, "unrecognised_request_handler", {}
def _send_response(
self, request, code, response_json_object, response_code_message=None

View File

@@ -20,6 +20,7 @@ import logging
from canonicaljson import json
from synapse.api.errors import Codes, SynapseError
from synapse.logging.opentracing import trace_servlet
logger = logging.getLogger(__name__)
@@ -289,8 +290,14 @@ class RestServlet(object):
for method in ("GET", "PUT", "POST", "OPTIONS", "DELETE"):
if hasattr(self, "on_%s" % (method,)):
servlet_classname = self.__class__.__name__
method_handler = getattr(self, "on_%s" % (method,))
http_server.register_paths(method, patterns, method_handler)
http_server.register_paths(
method,
patterns,
trace_servlet(servlet_classname, method_handler),
servlet_classname,
)
else:
raise NotImplementedError("RestServlet must register something.")

View File

@@ -186,6 +186,7 @@ class LoggingContext(object):
"alive",
"request",
"tag",
"scope",
]
thread_local = threading.local()
@@ -238,6 +239,7 @@ class LoggingContext(object):
self.request = None
self.tag = ""
self.alive = True
self.scope = None
self.parent_context = parent_context
@@ -322,10 +324,12 @@ class LoggingContext(object):
another LoggingContext
"""
# 'request' is the only field we currently use in the logger, so that's
# all we need to copy
# we track the current request
record.request = self.request
# we also track the current scope:
record.scope = self.scope
def start(self):
if get_thread_id() != self.main_thread:
logger.warning("Started logcontext %s on different thread", self)

View File

@@ -0,0 +1,483 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# NOTE
# This is a small wrapper around opentracing because opentracing is not currently
# packaged downstream (specifically debian). Since opentracing instrumentation is
# fairly invasive it was awkward to make it optional. As a result we opted to encapsulate
# all opentracing state in these methods which effectively noop if opentracing is
# not present. We should strongly consider encouraging the downstream distributors
# to package opentracing and making opentracing a full dependency. In order to
# facilitate this move, the methods here work very similarly to opentracing's, and
# it should only be a matter of a few regexes to move over to opentracing's access
# patterns proper.
"""
============================
Using OpenTracing in Synapse
============================
Python-specific tracing concepts are at https://opentracing.io/guides/python/.
Note that Synapse wraps OpenTracing in a small module (this one) in order to make the
OpenTracing dependency optional. That means that the access patterns are
different to those demonstrated in the OpenTracing guides. However, it is
still useful to know, especially if OpenTracing is included as a full dependency
in the future or if you are modifying this module.
OpenTracing is encapsulated so that
no span objects from OpenTracing are exposed in Synapse's code. This allows
OpenTracing to be easily disabled in Synapse and thereby have OpenTracing as
an optional dependency. This does however limit the number of modifiable spans
at any point in the code to one. From here on out, references to `opentracing`
in the code snippets refer to Synapse's wrapper module.
Tracing
-------
In Synapse it is not possible to start a non-active span. Spans can be started
using the ``start_active_span`` method. This returns a scope (see
OpenTracing docs) which is a context manager that needs to be entered and
exited. This is usually done by using ``with``.
.. code-block:: python
from synapse.logging.opentracing import start_active_span
with start_active_span("operation name"):
# Do something we want to trace
Forgetting to enter or exit a scope will result in some mysterious and grievous log
context errors.
At any time when there is an active span, ``opentracing.set_tag`` can be used to
set a tag on the current active span.
Tracing functions
-----------------
Functions can be easily traced using decorators. There is a decorator for
'normal' functions and one for functions that return deferreds. The name of
the function becomes the operation name for the span.
.. code-block:: python
from synapse.logging.opentracing import trace, trace_deferred
# Start a span using 'normal_function' as the operation name
@trace
def normal_function(*args, **kwargs):
# Does all kinds of cool and expected things
return something_usual_and_useful
# Start a span using 'deferred_function' as the operation name
@trace_deferred
@defer.inlineCallbacks
def deferred_function(*args, **kwargs):
# We start
yield we_wait
# we finish
defer.returnValue(something_usual_and_useful)
Operation names can be explicitly set for functions by using
``trace_using_operation_name`` and
``trace_deferred_using_operation_name``
.. code-block:: python
from synapse.logging.opentracing import (
trace_using_operation_name,
trace_deferred_using_operation_name
)
@trace_using_operation_name("A *much* better operation name")
def normal_function(*args, **kwargs):
# Does all kinds of cool and expected things
return something_usual_and_useful
@trace_deferred_using_operation_name("Another exciting operation name!")
@defer.inlineCallbacks
def deferred_function(*args, **kwargs):
# We start
yield we_wait
# we finish
defer.returnValue(something_usual_and_useful)
Contexts and carriers
---------------------
A selection of wrappers is provided for injecting and extracting contexts
from carriers. Unfortunately OpenTracing's three context injection
techniques are not adequate for injecting OpenTracing span-contexts into
Twisted's http headers, EDU contents and our database tables. Also note that
the binary encoding format mandated by OpenTracing is not actually implemented
by jaeger_client v4.0.0 - it will silently noop.
Please refer to the end of ``logging/opentracing.py`` for the available
injection and extraction methods.
Homeserver whitelisting
-----------------------
Most of the whitelist checks are encapsulated in the module's injection
and extraction methods, but be aware that using custom carriers or crossing
uncharted waters will require you to enforce the whitelist yourself.
``logging/opentracing.py`` has a ``whitelisted_homeserver`` method which takes
in a destination and compares it to the whitelist.
=======
Gotchas
=======
- Checking whitelists on span propagation
- Inserting PII
- Forgetting to enter or exit a scope
- Span source: make sure that the span you expect to be active across a
function call really will be that one. Does the current function have more
than one caller? Will all of those calling functions have been in a context
with an active span?
"""
import contextlib
import logging
import re
from functools import wraps
from twisted.internet import defer
from synapse.config import ConfigError
try:
import opentracing
except ImportError:
opentracing = None
try:
from jaeger_client import Config as JaegerConfig
from synapse.logging.scopecontextmanager import LogContextScopeManager
except ImportError:
JaegerConfig = None
LogContextScopeManager = None
logger = logging.getLogger(__name__)
class _DummyTagNames(object):
"""wrapper of opentracing's tags. We need to have them if we
want to reference them without opentracing around. Clearly they
should never actually show up in a trace. `set_tags` overwrites
these with the correct ones."""
INVALID_TAG = "invalid-tag"
COMPONENT = INVALID_TAG
DATABASE_INSTANCE = INVALID_TAG
DATABASE_STATEMENT = INVALID_TAG
DATABASE_TYPE = INVALID_TAG
DATABASE_USER = INVALID_TAG
ERROR = INVALID_TAG
HTTP_METHOD = INVALID_TAG
HTTP_STATUS_CODE = INVALID_TAG
HTTP_URL = INVALID_TAG
MESSAGE_BUS_DESTINATION = INVALID_TAG
PEER_ADDRESS = INVALID_TAG
PEER_HOSTNAME = INVALID_TAG
PEER_HOST_IPV4 = INVALID_TAG
PEER_HOST_IPV6 = INVALID_TAG
PEER_PORT = INVALID_TAG
PEER_SERVICE = INVALID_TAG
SAMPLING_PRIORITY = INVALID_TAG
SERVICE = INVALID_TAG
SPAN_KIND = INVALID_TAG
SPAN_KIND_CONSUMER = INVALID_TAG
SPAN_KIND_PRODUCER = INVALID_TAG
SPAN_KIND_RPC_CLIENT = INVALID_TAG
SPAN_KIND_RPC_SERVER = INVALID_TAG
def only_if_tracing(func):
"""Executes the function only if we're tracing. Otherwise return.
Assumes the function wrapped may return None"""
@wraps(func)
def _only_if_tracing_inner(*args, **kwargs):
if opentracing:
return func(*args, **kwargs)
else:
return
return _only_if_tracing_inner
# A regex which matches the server_names to expose traces for.
# None means 'block everything'.
_homeserver_whitelist = None
tags = _DummyTagNames
def init_tracer(config):
"""Set the whitelists and initialise the JaegerClient tracer
Args:
config (HomeserverConfig): The config used by the homeserver
"""
global opentracing
if not config.opentracer_enabled:
# We don't have a tracer
opentracing = None
return
if not opentracing or not JaegerConfig:
raise ConfigError(
"The server has been configured to use opentracing but opentracing is not "
"installed."
)
# Include the worker name
name = config.worker_name if config.worker_name else "master"
set_homeserver_whitelist(config.opentracer_whitelist)
jaeger_config = JaegerConfig(
config={"sampler": {"type": "const", "param": 1}, "logging": True},
service_name="{} {}".format(config.server_name, name),
scope_manager=LogContextScopeManager(config),
)
jaeger_config.initialize_tracer()
# Set up tags to be opentracing's tags
global tags
tags = opentracing.tags
@contextlib.contextmanager
def _noop_context_manager(*args, **kwargs):
"""Does absolutely nothing really well. Can be entered and exited arbitrarily.
Good substitute for an opentracing scope."""
yield
# Could use kwargs but I want these to be explicit
def start_active_span(
operation_name,
child_of=None,
references=None,
tags=None,
start_time=None,
ignore_active_span=False,
finish_on_close=True,
):
"""Starts an active opentracing span. Note, the scope doesn't become active
until it has been entered, however, the span starts from the time this
method is called.
Args:
See opentracing.tracer
Returns:
scope (Scope) or noop_context_manager
"""
if opentracing is None:
return _noop_context_manager()
else:
# We need to enter the scope here for the logcontext to become active
return opentracing.tracer.start_active_span(
operation_name,
child_of=child_of,
references=references,
tags=tags,
start_time=start_time,
ignore_active_span=ignore_active_span,
finish_on_close=finish_on_close,
)
@only_if_tracing
def close_active_span():
"""Closes the active span. This will close it's logcontext if the context
was made for the span"""
opentracing.tracer.scope_manager.active.__exit__(None, None, None)
@only_if_tracing
def set_tag(key, value):
"""Set's a tag on the active span"""
opentracing.tracer.active_span.set_tag(key, value)
@only_if_tracing
def log_kv(key_values, timestamp=None):
"""Log to the active span"""
opentracing.tracer.active_span.log_kv(key_values, timestamp)
# Note: we don't have a get baggage items because we're trying to hide all
# scope and span state from synapse. I think this method may also be useless
# as a result
@only_if_tracing
def set_baggage_item(key, value):
"""Attach baggage to the active span"""
opentracing.tracer.active_span.set_baggage_item(key, value)
@only_if_tracing
def set_operation_name(operation_name):
"""Sets the operation name of the active span"""
opentracing.tracer.active_span.set_operation_name(operation_name)
@only_if_tracing
def set_homeserver_whitelist(homeserver_whitelist):
"""Sets the whitelist
Args:
homeserver_whitelist (iterable of strings): regex of whitelisted homeservers
"""
global _homeserver_whitelist
if homeserver_whitelist:
# Makes a single regex which accepts all passed in regexes in the list
_homeserver_whitelist = re.compile(
"({})".format(")|(".join(homeserver_whitelist))
)
@only_if_tracing
def whitelisted_homeserver(destination):
"""Checks if a destination matches the whitelist
Args:
destination (String)"""
if _homeserver_whitelist:
return _homeserver_whitelist.match(destination)
return False
def start_active_span_from_context(
headers,
operation_name,
references=None,
tags=None,
start_time=None,
ignore_active_span=False,
finish_on_close=True,
):
"""
Extracts a span context from Twisted Headers.
args:
headers (twisted.web.http_headers.Headers)
returns:
span_context (opentracing.span.SpanContext)
"""
# Twisted encodes the values as lists whereas opentracing doesn't.
# So, we take the first item in the list.
# Also, twisted uses byte arrays while opentracing expects strings.
if opentracing is None:
return _noop_context_manager()
header_dict = {k.decode(): v[0].decode() for k, v in headers.getAllRawHeaders()}
context = opentracing.tracer.extract(opentracing.Format.HTTP_HEADERS, header_dict)
return opentracing.tracer.start_active_span(
operation_name,
child_of=context,
references=references,
tags=tags,
start_time=start_time,
ignore_active_span=ignore_active_span,
finish_on_close=finish_on_close,
)
@only_if_tracing
def inject_active_span_twisted_headers(headers, destination):
"""
Injects a span context into twisted headers inplace
Args:
headers (twisted.web.http_headers.Headers)
span (opentracing.Span)
Returns:
Inplace modification of headers
Note:
The headers set by the tracer are custom to the tracer implementation which
should be unique enough that they don't interfere with any headers set by
synapse or twisted. If we're still using jaeger these headers would be those
here:
https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
"""
if not whitelisted_homeserver(destination):
return
span = opentracing.tracer.active_span
carrier = {}
opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
for key, value in carrier.items():
headers.addRawHeaders(key, value)
@only_if_tracing
def inject_active_span_byte_dict(headers, destination):
"""
Injects a span context into a dict where the headers are encoded as byte
strings
Args:
headers (dict)
span (opentracing.Span)
Returns:
Inplace modification of headers
Note:
The headers set by the tracer are custom to the tracer implementation which
should be unique enough that they don't interfere with any headers set by
synapse or twisted. If we're still using jaeger these headers would be those
here:
https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
"""
if not whitelisted_homeserver(destination):
return
span = opentracing.tracer.active_span
carrier = {}
opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
for key, value in carrier.items():
headers[key.encode()] = [value.encode()]
def trace_servlet(servlet_name, func):
"""Decorator which traces a serlet. It starts a span with some servlet specific
tags such as the servlet_name and request information"""
@wraps(func)
@defer.inlineCallbacks
def _trace_servlet_inner(request, *args, **kwargs):
with start_active_span_from_context(
request.requestHeaders,
"incoming-client-request",
tags={
"request_id": request.get_request_id(),
tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER,
tags.HTTP_METHOD: request.get_method(),
tags.HTTP_URL: request.get_redacted_uri(),
tags.PEER_HOST_IPV6: request.getClientIP(),
"servlet_name": servlet_name,
},
):
result = yield defer.maybeDeferred(func, request, *args, **kwargs)
defer.returnValue(result)
return _trace_servlet_inner

View File

@@ -0,0 +1,138 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from opentracing import Scope, ScopeManager
import twisted
from synapse.logging.context import LoggingContext, nested_logging_context
logger = logging.getLogger(__name__)
class LogContextScopeManager(ScopeManager):
"""
The LogContextScopeManager tracks the active scope in opentracing
by using the log contexts which are native to synapse. This is so
that the basic opentracing api can be used across twisted deferreds.
(I would love to break logcontexts and this out into a separate package, but
let's wait for twisted's contexts to be released.)
"""
def __init__(self, config):
pass
@property
def active(self):
"""
Returns the currently active Scope which can be used to access the
currently active Scope.span.
If there is a non-null Scope, its wrapped Span
becomes an implicit parent of any newly-created Span at
Tracer.start_active_span() time.
Return:
(Scope) : the Scope that is active, or None if not
available.
"""
ctx = LoggingContext.current_context()
if ctx is LoggingContext.sentinel:
return None
else:
return ctx.scope
def activate(self, span, finish_on_close):
"""
Makes a Span active.
Args
span (Span): the span that should become active.
finish_on_close (Boolean): whether Span should be automatically
finished when Scope.close() is called.
Returns:
Scope to control the end of the active period for
*span*. It is a programming error to neglect to call
Scope.close() on the returned instance.
"""
enter_logcontext = False
ctx = LoggingContext.current_context()
if ctx is LoggingContext.sentinel:
# We don't want this scope to affect the sentinel logcontext.
logger.error("Tried to activate scope outside of loggingcontext")
return Scope(None, span)
elif ctx.scope is not None:
# We want the logging scope to look exactly the same so we give it
# a blank suffix
ctx = nested_logging_context("")
enter_logcontext = True
scope = _LogContextScope(self, span, ctx, enter_logcontext, finish_on_close)
ctx.scope = scope
return scope
class _LogContextScope(Scope):
"""
A custom opentracing scope. The only significant difference is that it will
close the log context it's related to if the logcontext was created specifically
for this scope.
"""
def __init__(self, manager, span, logcontext, enter_logcontext, finish_on_close):
"""
Args:
manager (LogContextScopeManager):
the manager that is responsible for this scope.
span (Span):
the opentracing span which this scope represents the local
lifetime for.
logcontext (LogContext):
the logcontext to which this scope is attached.
enter_logcontext (Boolean):
if True the logcontext will be entered and exited when the scope
is entered and exited respectively
finish_on_close (Boolean):
if True finish the span when the scope is closed
"""
super(_LogContextScope, self).__init__(manager, span)
self.logcontext = logcontext
self._finish_on_close = finish_on_close
self._enter_logcontext = enter_logcontext
def __enter__(self):
if self._enter_logcontext:
self.logcontext.__enter__()
def __exit__(self, type, value, traceback):
if type == twisted.internet.defer._DefGen_Return:
super(_LogContextScope, self).__exit__(None, None, None)
else:
super(_LogContextScope, self).__exit__(type, value, traceback)
if self._enter_logcontext:
self.logcontext.__exit__(type, value, traceback)
else: # the logcontext existed before the creation of the scope
self.logcontext.scope = None
def close(self):
if self.manager.active is not self:
logger.error("Tried to close a none active scope!")
return
if self._finish_on_close:
self.span.finish()

View File

@@ -29,8 +29,16 @@ from prometheus_client.core import REGISTRY, GaugeMetricFamily, HistogramMetricF
from twisted.internet import reactor
from synapse.metrics._exposition import (
MetricsResource,
generate_latest,
start_http_server,
)
logger = logging.getLogger(__name__)
METRICS_PREFIX = "/_synapse/metrics"
running_on_pypy = platform.python_implementation() == "PyPy"
all_metrics = []
all_collectors = []
@@ -470,3 +478,12 @@ try:
gc.disable()
except AttributeError:
pass
__all__ = [
"MetricsResource",
"generate_latest",
"start_http_server",
"LaterGauge",
"InFlightGauge",
"BucketCollector",
]

View File

@@ -0,0 +1,258 @@
# -*- coding: utf-8 -*-
# Copyright 2015-2019 Prometheus Python Client Developers
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This code is based off `prometheus_client/exposition.py` from version 0.7.1.
Due to the renaming of metrics in prometheus_client 0.4.0, this customised
vendoring of the code will emit both the old versions that Synapse dashboards
expect, and the newer "best practice" version of the up-to-date official client.
"""
import math
import threading
from collections import namedtuple
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
from urllib.parse import parse_qs, urlparse
from prometheus_client import REGISTRY
from twisted.web.resource import Resource
try:
from prometheus_client.samples import Sample
except ImportError:
Sample = namedtuple("Sample", ["name", "labels", "value", "timestamp", "exemplar"])
CONTENT_TYPE_LATEST = str("text/plain; version=0.0.4; charset=utf-8")
INF = float("inf")
MINUS_INF = float("-inf")
def floatToGoString(d):
d = float(d)
if d == INF:
return "+Inf"
elif d == MINUS_INF:
return "-Inf"
elif math.isnan(d):
return "NaN"
else:
s = repr(d)
dot = s.find(".")
# Go switches to exponents sooner than Python.
# We only need to care about positive values for le/quantile.
if d > 0 and dot > 6:
mantissa = "{0}.{1}{2}".format(s[0], s[1:dot], s[dot + 1 :]).rstrip("0.")
return "{0}e+0{1}".format(mantissa, dot - 1)
return s
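As a worked check of the Go-style switch to exponent notation (computed from the code above, not taken from the source):

    floatToGoString(float("inf"))  # "+Inf"
    floatToGoString(0.5)           # "0.5" (dot not past the sixth digit)
    floatToGoString(12345678.0)    # "1.2345678e+07", since repr() gives "12345678.0"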
def sample_line(line, name):
if line.labels:
labelstr = "{{{0}}}".format(
",".join(
[
'{0}="{1}"'.format(
k,
v.replace("\\", r"\\").replace("\n", r"\n").replace('"', r"\""),
)
for k, v in sorted(line.labels.items())
]
)
)
else:
labelstr = ""
timestamp = ""
if line.timestamp is not None:
# Convert to milliseconds.
timestamp = " {0:d}".format(int(float(line.timestamp) * 1000))
return "{0}{1} {2}{3}\n".format(
name, labelstr, floatToGoString(line.value), timestamp
)
def nameify_sample(sample):
"""
If we get a prometheus_client<0.4.0 sample as a tuple, transform it into a
namedtuple which has the names we expect.
"""
if not isinstance(sample, Sample):
sample = Sample(*sample, None, None)
return sample
def generate_latest(registry, emit_help=False):
output = []
for metric in registry.collect():
if metric.name.startswith("__unused"):
continue
if not metric.samples:
# No samples, don't bother.
continue
mname = metric.name
mnewname = metric.name
mtype = metric.type
# OpenMetrics -> Prometheus
if mtype == "counter":
mnewname = mnewname + "_total"
elif mtype == "info":
mtype = "gauge"
mnewname = mnewname + "_info"
elif mtype == "stateset":
mtype = "gauge"
elif mtype == "gaugehistogram":
mtype = "histogram"
elif mtype == "unknown":
mtype = "untyped"
# Output in the old format for compatibility.
if emit_help:
output.append(
"# HELP {0} {1}\n".format(
mname,
metric.documentation.replace("\\", r"\\").replace("\n", r"\n"),
)
)
output.append("# TYPE {0} {1}\n".format(mname, mtype))
for sample in map(nameify_sample, metric.samples):
# Get rid of the OpenMetrics specific samples
for suffix in ["_created", "_gsum", "_gcount"]:
if sample.name.endswith(suffix):
break
else:
newname = sample.name.replace(mnewname, mname)
if ":" in newname and newname.endswith("_total"):
newname = newname[: -len("_total")]
output.append(sample_line(sample, newname))
# Get rid of the weird colon things while we're at it
if mtype == "counter":
mnewname = mnewname.replace(":total", "")
mnewname = mnewname.replace(":", "_")
if mname == mnewname:
continue
# Also output in the new format, if it's different.
if emit_help:
output.append(
"# HELP {0} {1}\n".format(
mnewname,
metric.documentation.replace("\\", r"\\").replace("\n", r"\n"),
)
)
output.append("# TYPE {0} {1}\n".format(mnewname, mtype))
for sample in map(nameify_sample, metric.samples):
# Get rid of the OpenMetrics specific samples
for suffix in ["_created", "_gsum", "_gcount"]:
if sample.name.endswith(suffix):
break
else:
output.append(
sample_line(
sample, sample.name.replace(":total", "").replace(":", "_")
)
)
return "".join(output).encode("utf-8")
class MetricsHandler(BaseHTTPRequestHandler):
"""HTTP handler that gives metrics from ``REGISTRY``."""
registry = REGISTRY
def do_GET(self):
registry = self.registry
params = parse_qs(urlparse(self.path).query)
if "help" in params:
emit_help = True
else:
emit_help = False
try:
output = generate_latest(registry, emit_help=emit_help)
except Exception:
self.send_error(500, "error generating metric output")
raise
self.send_response(200)
self.send_header("Content-Type", CONTENT_TYPE_LATEST)
self.end_headers()
self.wfile.write(output)
def log_message(self, format, *args):
"""Log nothing."""
@classmethod
def factory(cls, registry):
"""Returns a dynamic MetricsHandler class tied
to the passed registry.
"""
# This implementation relies on MetricsHandler.registry
# (defined above and defaulted to REGISTRY).
# As we have unicode_literals, we need to create a str()
# object for type().
cls_name = str(cls.__name__)
MyMetricsHandler = type(cls_name, (cls, object), {"registry": registry})
return MyMetricsHandler
class _ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
"""Thread per request HTTP server."""
# Make worker threads "fire and forget". Beginning with Python 3.7 this
# prevents a memory leak because ``ThreadingMixIn`` starts to gather all
# non-daemon threads in a list in order to join on them at server close.
# Enabling daemon threads virtually makes ``_ThreadingSimpleServer`` the
# same as Python 3.7's ``ThreadingHTTPServer``.
daemon_threads = True
def start_http_server(port, addr="", registry=REGISTRY):
"""Starts an HTTP server for prometheus metrics as a daemon thread"""
CustomMetricsHandler = MetricsHandler.factory(registry)
httpd = _ThreadingSimpleServer((addr, port), CustomMetricsHandler)
t = threading.Thread(target=httpd.serve_forever)
t.daemon = True
t.start()
class MetricsResource(Resource):
"""
Twisted ``Resource`` that serves prometheus metrics.
"""
isLeaf = True
def __init__(self, registry=REGISTRY):
self.registry = registry
def render_GET(self, request):
request.setHeader(b"Content-Type", CONTENT_TYPE_LATEST.encode("ascii"))
return generate_latest(self.registry)
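Both entry points of the vendored module are used the same way as the upstream client's. A minimal sketch (the port is arbitrary; METRICS_PREFIX is defined as /_synapse/metrics elsewhere in this changeset):

    from prometheus_client import REGISTRY
    from synapse.metrics import MetricsResource, start_http_server

    # daemon-thread HTTP server exposing REGISTRY in both name formats
    start_http_server(9092)

    # or a Twisted resource to mount on an existing listener
    metrics_resource = MetricsResource(REGISTRY)

Appending ?help=1 to the scrape URL makes the standalone handler emit the # HELP lines as well, per do_GET above.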

View File

@@ -1,20 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from prometheus_client.twisted import MetricsResource
METRICS_PREFIX = "/_synapse/metrics"
__all__ = ["MetricsResource", "METRICS_PREFIX"]

View File

@@ -12,10 +12,14 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.types import UserID
logger = logging.getLogger(__name__)
class ModuleApi(object):
"""A proxy object that gets passed to password auth providers so they
@@ -76,8 +80,13 @@ class ModuleApi(object):
@defer.inlineCallbacks
def register(self, localpart, displayname=None, emails=[]):
"""Registers a new user with given localpart and optional
displayname, emails.
"""Registers a new user with given localpart and optional displayname, emails.
Also returns an access token for the new user.
Deprecated: avoid this, as it generates a new device with no way to
return that device to the user. Prefer separate calls to register_user and
register_device.
Args:
localpart (str): The localpart of the new user.
@@ -85,15 +94,48 @@ class ModuleApi(object):
emails (List[str]): Emails to bind to the new user.
Returns:
Deferred: a 2-tuple of (user_id, access_token)
Deferred[tuple[str, str]]: a 2-tuple of (user_id, access_token)
"""
# Register the user
reg = self.hs.get_registration_handler()
user_id, access_token = yield reg.register(
localpart=localpart, default_display_name=displayname, bind_emails=emails
)
defer.returnValue((user_id, access_token))
logger.warning(
"Using deprecated ModuleApi.register which creates a dummy user device."
)
user_id = yield self.register_user(localpart, displayname, emails)
_, access_token = yield self.register_device(user_id)
defer.returnValue((user_id, access_token))
def register_user(self, localpart, displayname=None, emails=[]):
"""Registers a new user with given localpart and optional displayname, emails.
Args:
localpart (str): The localpart of the new user.
displayname (str|None): The displayname of the new user.
emails (List[str]): Emails to bind to the new user.
Returns:
Deferred[str]: user_id
"""
return self.hs.get_registration_handler().register_user(
localpart=localpart, default_display_name=displayname, bind_emails=emails
)
def register_device(self, user_id, device_id=None, initial_display_name=None):
"""Register a device for a user and generate an access token.
Args:
user_id (str): full canonical @user:id
device_id (str|None): The device ID to check, or None to generate
a new one.
initial_display_name (str|None): An optional display name for the
device.
Returns:
defer.Deferred[tuple[str, str]]: Tuple of device ID and access token
"""
return self.hs.get_registration_handler().register_device(
user_id=user_id,
device_id=device_id,
initial_display_name=initial_display_name,
)
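For module authors, the replacement for the deprecated call is the explicit two-step flow. A short sketch (the localpart and display names are invented):

    user_id = yield module_api.register_user("alice", displayname="Alice")
    device_id, access_token = yield module_api.register_device(
        user_id, initial_display_name="My bot"
    )

This keeps the device that owns the access token visible to the caller, which the old single call hid.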
@defer.inlineCallbacks
def invalidate_access_token(self, access_token):

View File

@@ -1,5 +1,6 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -248,6 +249,18 @@ BASE_APPEND_OVERRIDE_RULES = [
],
"actions": ["notify", {"set_tweak": "highlight", "value": True}],
},
{
"rule_id": "global/override/.m.rule.reaction",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.reaction",
"_id": "_reaction",
}
],
"actions": ["dont_notify"],
},
]

View File

@@ -65,9 +65,7 @@ REQUIREMENTS = [
"msgpack>=0.5.2",
"phonenumbers>=8.2.0",
"six>=1.10",
# prometheus_client 0.4.0 changed the format of counter metrics
# (cf https://github.com/matrix-org/synapse/issues/4001)
"prometheus_client>=0.0.18,<0.4.0",
"prometheus_client>=0.0.18,<0.8.0",
# we use attr.s(slots), which arrived in 16.0.0
# Twisted 18.7.0 requires attrs>=17.4.0
"attrs>=17.4.0",
@@ -95,6 +93,7 @@ CONDITIONAL_REQUIREMENTS = {
"url_preview": ["lxml>=3.5.0"],
"test": ["mock>=2.0", "parameterized"],
"sentry": ["sentry-sdk>=0.7.2"],
"opentracing": ["jaeger-client>=4.0.0", "opentracing>=2.2.0"],
"jwt": ["pyjwt>=1.6.4"],
}

View File

@@ -205,7 +205,7 @@ class ReplicationEndpoint(object):
args = "/".join("(?P<%s>[^/]+)" % (arg,) for arg in url_args)
pattern = re.compile("^/_synapse/replication/%s/%s$" % (self.NAME, args))
http_server.register_paths(method, [pattern], handler)
http_server.register_paths(method, [pattern], handler, self.__class__.__name__)
def _cached_handler(self, request, txn_id, **kwargs):
"""Called on new incoming requests when caching is enabled. Checks

View File

@@ -156,70 +156,6 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
defer.returnValue((200, ret))
class ReplicationRegister3PIDGuestRestServlet(ReplicationEndpoint):
"""Gets/creates a guest account for given 3PID.
Request format:
POST /_synapse/replication/get_or_register_3pid_guest/
{
"requester": ...,
"medium": ...,
"address": ...,
"inviter_user_id": ...
}
"""
NAME = "get_or_register_3pid_guest"
PATH_ARGS = ()
def __init__(self, hs):
super(ReplicationRegister3PIDGuestRestServlet, self).__init__(hs)
self.registeration_handler = hs.get_registration_handler()
self.store = hs.get_datastore()
self.clock = hs.get_clock()
@staticmethod
def _serialize_payload(requester, medium, address, inviter_user_id):
"""
Args:
requester(Requester)
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
"""
return {
"requester": requester.serialize(),
"medium": medium,
"address": address,
"inviter_user_id": inviter_user_id,
}
@defer.inlineCallbacks
def _handle_request(self, request):
content = parse_json_object_from_request(request)
medium = content["medium"]
address = content["address"]
inviter_user_id = content["inviter_user_id"]
requester = Requester.deserialize(self.store, content["requester"])
if requester.user:
request.authenticated_entity = requester.user.to_string()
logger.info("get_or_register_3pid_guest: %r", content)
ret = yield self.registeration_handler.get_or_register_3pid_guest(
medium, address, inviter_user_id
)
defer.returnValue((200, ret))
class ReplicationUserJoinedLeftRoomRestServlet(ReplicationEndpoint): class ReplicationUserJoinedLeftRoomRestServlet(ReplicationEndpoint):
"""Notifies that a user has joined or left the room """Notifies that a user has joined or left the room
@ -272,5 +208,4 @@ class ReplicationUserJoinedLeftRoomRestServlet(ReplicationEndpoint):
def register_servlets(hs, http_server): def register_servlets(hs, http_server):
ReplicationRemoteJoinRestServlet(hs).register(http_server) ReplicationRemoteJoinRestServlet(hs).register(http_server)
ReplicationRemoteRejectInviteRestServlet(hs).register(http_server) ReplicationRemoteRejectInviteRestServlet(hs).register(http_server)
ReplicationRegister3PIDGuestRestServlet(hs).register(http_server)
ReplicationUserJoinedLeftRoomRestServlet(hs).register(http_server) ReplicationUserJoinedLeftRoomRestServlet(hs).register(http_server)


@ -38,7 +38,6 @@ class ReplicationRegisterServlet(ReplicationEndpoint):
@staticmethod @staticmethod
def _serialize_payload( def _serialize_payload(
user_id, user_id,
token,
password_hash, password_hash,
was_guest, was_guest,
make_guest, make_guest,
@ -51,9 +50,6 @@ class ReplicationRegisterServlet(ReplicationEndpoint):
""" """
Args: Args:
user_id (str): The desired user ID to register. user_id (str): The desired user ID to register.
token (str): The desired access token to use for this user. If this
is not None, the given access token is associated with the user
id.
password_hash (str|None): Optional. The password hash for this user. password_hash (str|None): Optional. The password hash for this user.
was_guest (bool): Optional. Whether this is a guest account being was_guest (bool): Optional. Whether this is a guest account being
upgraded to a non-guest account. upgraded to a non-guest account.
@ -68,7 +64,6 @@ class ReplicationRegisterServlet(ReplicationEndpoint):
address (str|None): the IP address used to perform the registration. address (str|None): the IP address used to perform the registration.
""" """
return { return {
"token": token,
"password_hash": password_hash, "password_hash": password_hash,
"was_guest": was_guest, "was_guest": was_guest,
"make_guest": make_guest, "make_guest": make_guest,
@ -85,7 +80,6 @@ class ReplicationRegisterServlet(ReplicationEndpoint):
yield self.registration_handler.register_with_store( yield self.registration_handler.register_with_store(
user_id=user_id, user_id=user_id,
token=content["token"],
password_hash=content["password_hash"], password_hash=content["password_hash"],
was_guest=content["was_guest"], was_guest=content["was_guest"],
make_guest=content["make_guest"], make_guest=content["make_guest"],


@ -219,11 +219,10 @@ class UserRegisterServlet(RestServlet):
register = RegisterRestServlet(self.hs) register = RegisterRestServlet(self.hs)
(user_id, _) = yield register.registration_handler.register( user_id = yield register.registration_handler.register_user(
localpart=body["username"].lower(), localpart=body["username"].lower(),
password=body["password"], password=body["password"],
admin=bool(admin), admin=bool(admin),
generate_token=False,
user_type=user_type, user_type=user_type,
) )


@ -59,9 +59,14 @@ class SendServerNoticeServlet(RestServlet):
def register(self, json_resource): def register(self, json_resource):
PATTERN = "^/_synapse/admin/v1/send_server_notice" PATTERN = "^/_synapse/admin/v1/send_server_notice"
json_resource.register_paths("POST", (re.compile(PATTERN + "$"),), self.on_POST)
json_resource.register_paths( json_resource.register_paths(
"PUT", (re.compile(PATTERN + "/(?P<txn_id>[^/]*)$"),), self.on_PUT "POST", (re.compile(PATTERN + "$"),), self.on_POST, self.__class__.__name__
)
json_resource.register_paths(
"PUT",
(re.compile(PATTERN + "/(?P<txn_id>[^/]*)$"),),
self.on_PUT,
self.__class__.__name__,
) )
@defer.inlineCallbacks @defer.inlineCallbacks


@ -18,7 +18,13 @@ import logging
from twisted.internet import defer from twisted.internet import defer
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError from synapse.api.errors import (
AuthError,
Codes,
InvalidClientCredentialsError,
NotFoundError,
SynapseError,
)
from synapse.http.servlet import RestServlet, parse_json_object_from_request from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.rest.client.v2_alpha._base import client_patterns from synapse.rest.client.v2_alpha._base import client_patterns
from synapse.types import RoomAlias from synapse.types import RoomAlias
@ -97,7 +103,7 @@ class ClientDirectoryServer(RestServlet):
room_alias.to_string(), room_alias.to_string(),
) )
defer.returnValue((200, {})) defer.returnValue((200, {}))
except AuthError: except InvalidClientCredentialsError:
# fallback to default user behaviour if they aren't an AS # fallback to default user behaviour if they aren't an AS
pass pass
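
Catching the narrower InvalidClientCredentialsError means only "the request carried missing or unrecognised credentials" falls back to ordinary user handling; other auth failures now propagate to the caller. An illustrative sketch with deliberately simplified exception classes (not Synapse's real hierarchy):

class AuthError(Exception):
    """A user was authenticated but is not allowed to do something (403)."""

class InvalidClientCredentialsError(Exception):
    """The request carried missing or unrecognised credentials (401)."""

def delete_alias(check_appservice_auth):
    try:
        check_appservice_auth()
    except InvalidClientCredentialsError:
        return "fall back to default user behaviour"
    # An AuthError (e.g. a 403) is deliberately *not* swallowed any more.
    return "deleted as application service"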


@ -283,19 +283,7 @@ class LoginRestServlet(RestServlet):
yield auth_handler.validate_short_term_login_token_and_get_user_id(token) yield auth_handler.validate_short_term_login_token_and_get_user_id(token)
) )
device_id = login_submission.get("device_id") result = yield self._register_device_with_callback(user_id, login_submission)
initial_display_name = login_submission.get("initial_device_display_name")
device_id, access_token = yield self.registration_handler.register_device(
user_id, device_id, initial_display_name
)
result = {
"user_id": user_id, # may have changed
"access_token": access_token,
"home_server": self.hs.hostname,
"device_id": device_id,
}
defer.returnValue(result) defer.returnValue(result)
@defer.inlineCallbacks @defer.inlineCallbacks
@ -323,35 +311,16 @@ class LoginRestServlet(RestServlet):
raise LoginError(401, "Invalid JWT", errcode=Codes.UNAUTHORIZED) raise LoginError(401, "Invalid JWT", errcode=Codes.UNAUTHORIZED)
user_id = UserID(user, self.hs.hostname).to_string() user_id = UserID(user, self.hs.hostname).to_string()
device_id = login_submission.get("device_id")
initial_display_name = login_submission.get("initial_device_display_name")
auth_handler = self.auth_handler registered_user_id = yield self.auth_handler.check_user_exists(user_id)
registered_user_id = yield auth_handler.check_user_exists(user_id) if not registered_user_id:
if registered_user_id: registered_user_id = yield self.registration_handler.register_user(
device_id, access_token = yield self.registration_handler.register_device( localpart=user
registered_user_id, device_id, initial_display_name
) )
result = { result = yield self._register_device_with_callback(
"user_id": registered_user_id, registered_user_id, login_submission
"access_token": access_token, )
"home_server": self.hs.hostname,
}
else:
user_id, access_token = (
yield self.registration_handler.register(localpart=user)
)
device_id, access_token = yield self.registration_handler.register_device(
user_id, device_id, initial_display_name
)
result = {
"user_id": user_id, # may have changed
"access_token": access_token,
"home_server": self.hs.hostname,
}
defer.returnValue(result) defer.returnValue(result)
@ -534,12 +503,8 @@ class SSOAuthHandler(object):
user_id = UserID(localpart, self._hostname).to_string() user_id = UserID(localpart, self._hostname).to_string()
registered_user_id = yield self._auth_handler.check_user_exists(user_id) registered_user_id = yield self._auth_handler.check_user_exists(user_id)
if not registered_user_id: if not registered_user_id:
registered_user_id, _ = ( registered_user_id = yield self._registration_handler.register_user(
yield self._registration_handler.register( localpart=localpart, default_display_name=user_display_name
localpart=localpart,
generate_token=False,
default_display_name=user_display_name,
)
) )
login_token = self._macaroon_gen.generate_short_term_login_token( login_token = self._macaroon_gen.generate_short_term_login_token(
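
The token-based, JWT and password login paths above now delegate to a shared `_register_device_with_callback` helper. A sketch of what that helper plausibly does, reconstructed from the code removed in this diff (the real helper may also run post-login callbacks; it is shown here as a LoginRestServlet method):

from twisted.internet import defer

@defer.inlineCallbacks
def _register_device_with_callback(self, user_id, login_submission):
    # Register (or reuse) a device for this login and build the common
    # response body that every login flow used to assemble by hand.
    device_id = login_submission.get("device_id")
    initial_display_name = login_submission.get("initial_device_display_name")
    device_id, access_token = yield self.registration_handler.register_device(
        user_id, device_id, initial_display_name
    )
    defer.returnValue(
        {
            "user_id": user_id,  # may have changed
            "access_token": access_token,
            "home_server": self.hs.hostname,
            "device_id": device_id,
        }
    )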


@ -24,7 +24,12 @@ from canonicaljson import json
from twisted.internet import defer from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError from synapse.api.errors import (
AuthError,
Codes,
InvalidClientCredentialsError,
SynapseError,
)
from synapse.api.filtering import Filter from synapse.api.filtering import Filter
from synapse.events.utils import format_event_for_client_v2 from synapse.events.utils import format_event_for_client_v2
from synapse.http.servlet import ( from synapse.http.servlet import (
@ -62,11 +67,17 @@ class RoomCreateRestServlet(TransactionRestServlet):
register_txn_path(self, PATTERNS, http_server) register_txn_path(self, PATTERNS, http_server)
# define CORS for all of /rooms in RoomCreateRestServlet for simplicity # define CORS for all of /rooms in RoomCreateRestServlet for simplicity
http_server.register_paths( http_server.register_paths(
"OPTIONS", client_patterns("/rooms(?:/.*)?$", v1=True), self.on_OPTIONS "OPTIONS",
client_patterns("/rooms(?:/.*)?$", v1=True),
self.on_OPTIONS,
self.__class__.__name__,
) )
# define CORS for /createRoom[/txnid] # define CORS for /createRoom[/txnid]
http_server.register_paths( http_server.register_paths(
"OPTIONS", client_patterns("/createRoom(?:/.*)?$", v1=True), self.on_OPTIONS "OPTIONS",
client_patterns("/createRoom(?:/.*)?$", v1=True),
self.on_OPTIONS,
self.__class__.__name__,
) )
def on_PUT(self, request, txn_id): def on_PUT(self, request, txn_id):
@ -111,16 +122,28 @@ class RoomStateEventRestServlet(TransactionRestServlet):
) )
http_server.register_paths( http_server.register_paths(
"GET", client_patterns(state_key, v1=True), self.on_GET "GET",
client_patterns(state_key, v1=True),
self.on_GET,
self.__class__.__name__,
) )
http_server.register_paths( http_server.register_paths(
"PUT", client_patterns(state_key, v1=True), self.on_PUT "PUT",
client_patterns(state_key, v1=True),
self.on_PUT,
self.__class__.__name__,
) )
http_server.register_paths( http_server.register_paths(
"GET", client_patterns(no_state_key, v1=True), self.on_GET_no_state_key "GET",
client_patterns(no_state_key, v1=True),
self.on_GET_no_state_key,
self.__class__.__name__,
) )
http_server.register_paths( http_server.register_paths(
"PUT", client_patterns(no_state_key, v1=True), self.on_PUT_no_state_key "PUT",
client_patterns(no_state_key, v1=True),
self.on_PUT_no_state_key,
self.__class__.__name__,
) )
def on_GET_no_state_key(self, request, room_id, event_type): def on_GET_no_state_key(self, request, room_id, event_type):
@ -307,7 +330,7 @@ class PublicRoomListRestServlet(TransactionRestServlet):
try: try:
yield self.auth.get_user_by_req(request, allow_guest=True) yield self.auth.get_user_by_req(request, allow_guest=True)
except AuthError as e: except InvalidClientCredentialsError as e:
# Option to allow servers to require auth when accessing # Option to allow servers to require auth when accessing
# /publicRooms via CS API. This is especially helpful in private # /publicRooms via CS API. This is especially helpful in private
# federations. # federations.
@ -840,18 +863,23 @@ def register_txn_path(servlet, regex_string, http_server, with_get=False):
with_get: True to also register respective GET paths for the PUTs. with_get: True to also register respective GET paths for the PUTs.
""" """
http_server.register_paths( http_server.register_paths(
"POST", client_patterns(regex_string + "$", v1=True), servlet.on_POST "POST",
client_patterns(regex_string + "$", v1=True),
servlet.on_POST,
servlet.__class__.__name__,
) )
http_server.register_paths( http_server.register_paths(
"PUT", "PUT",
client_patterns(regex_string + "/(?P<txn_id>[^/]*)$", v1=True), client_patterns(regex_string + "/(?P<txn_id>[^/]*)$", v1=True),
servlet.on_PUT, servlet.on_PUT,
servlet.__class__.__name__,
) )
if with_get: if with_get:
http_server.register_paths( http_server.register_paths(
"GET", "GET",
client_patterns(regex_string + "/(?P<txn_id>[^/]*)$", v1=True), client_patterns(regex_string + "/(?P<txn_id>[^/]*)$", v1=True),
servlet.on_GET, servlet.on_GET,
servlet.__class__.__name__,
) )


@ -464,11 +464,10 @@ class RegisterRestServlet(RestServlet):
Codes.THREEPID_IN_USE, Codes.THREEPID_IN_USE,
) )
(registered_user_id, _) = yield self.registration_handler.register( registered_user_id = yield self.registration_handler.register_user(
localpart=desired_username, localpart=desired_username,
password=new_password, password=new_password,
guest_access_token=guest_access_token, guest_access_token=guest_access_token,
generate_token=False,
threepid=threepid, threepid=threepid,
address=client_addr, address=client_addr,
) )
@ -542,8 +541,8 @@ class RegisterRestServlet(RestServlet):
if not compare_digest(want_mac, got_mac): if not compare_digest(want_mac, got_mac):
raise SynapseError(403, "HMAC incorrect") raise SynapseError(403, "HMAC incorrect")
(user_id, _) = yield self.registration_handler.register( user_id = yield self.registration_handler.register_user(
localpart=username, password=password, generate_token=False localpart=username, password=password
) )
result = yield self._create_registration_details(user_id, body) result = yield self._create_registration_details(user_id, body)
@ -577,8 +576,8 @@ class RegisterRestServlet(RestServlet):
def _do_guest_registration(self, params, address=None): def _do_guest_registration(self, params, address=None):
if not self.hs.config.allow_guest_access: if not self.hs.config.allow_guest_access:
raise SynapseError(403, "Guest access is disabled") raise SynapseError(403, "Guest access is disabled")
user_id, _ = yield self.registration_handler.register( user_id = yield self.registration_handler.register_user(
generate_token=False, make_guest=True, address=address make_guest=True, address=address
) )
# we don't allow guests to specify their own device_id, because # we don't allow guests to specify their own device_id, because
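
The recurring rewrite in this file: `register()` used to return a `(user_id, token)` tuple and could mint an access token itself (hence `generate_token=False` everywhere), whereas `register_user()` returns only the user ID and tokens are minted separately when a device is registered. An illustrative sketch of the new calling convention (argument values are made up):

from twisted.internet import defer

@defer.inlineCallbacks
def example_register_flow(registration_handler):
    # register_user() no longer returns a token...
    user_id = yield registration_handler.register_user(
        localpart="alice", password="s3cret"
    )
    # ...an access token only comes into existence when a device is
    # registered (user_id, device_id, initial_display_name).
    device_id, access_token = yield registration_handler.register_device(
        user_id, None, "Example client"
    )
    defer.returnValue((user_id, device_id, access_token))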


@ -34,6 +34,7 @@ from synapse.http.servlet import (
from synapse.rest.client.transactions import HttpTransactionCache from synapse.rest.client.transactions import HttpTransactionCache
from synapse.storage.relations import ( from synapse.storage.relations import (
AggregationPaginationToken, AggregationPaginationToken,
PaginationChunk,
RelationPaginationToken, RelationPaginationToken,
) )
@ -71,11 +72,13 @@ class RelationSendServlet(RestServlet):
"POST", "POST",
client_patterns(self.PATTERN + "$", releases=()), client_patterns(self.PATTERN + "$", releases=()),
self.on_PUT_or_POST, self.on_PUT_or_POST,
self.__class__.__name__,
) )
http_server.register_paths( http_server.register_paths(
"PUT", "PUT",
client_patterns(self.PATTERN + "/(?P<txn_id>[^/]*)$", releases=()), client_patterns(self.PATTERN + "/(?P<txn_id>[^/]*)$", releases=()),
self.on_PUT, self.on_PUT,
self.__class__.__name__,
) )
def on_PUT(self, request, *args, **kwargs): def on_PUT(self, request, *args, **kwargs):
@ -145,38 +148,55 @@ class RelationPaginationServlet(RestServlet):
room_id, requester.user.to_string() room_id, requester.user.to_string()
) )
# This checks that a) the event exists and b) the user is allowed to # This gets the original event and checks that a) the event exists and
# view it. # b) the user is allowed to view it.
yield self.event_handler.get_event(requester.user, room_id, parent_id) event = yield self.event_handler.get_event(requester.user, room_id, parent_id)
limit = parse_integer(request, "limit", default=5) limit = parse_integer(request, "limit", default=5)
from_token = parse_string(request, "from") from_token = parse_string(request, "from")
to_token = parse_string(request, "to") to_token = parse_string(request, "to")
if from_token: if event.internal_metadata.is_redacted():
from_token = RelationPaginationToken.from_string(from_token) # If the event is redacted, return an empty list of relations
pagination_chunk = PaginationChunk(chunk=[])
else:
# Return the relations
if from_token:
from_token = RelationPaginationToken.from_string(from_token)
if to_token: if to_token:
to_token = RelationPaginationToken.from_string(to_token) to_token = RelationPaginationToken.from_string(to_token)
result = yield self.store.get_relations_for_event( pagination_chunk = yield self.store.get_relations_for_event(
event_id=parent_id, event_id=parent_id,
relation_type=relation_type, relation_type=relation_type,
event_type=event_type, event_type=event_type,
limit=limit, limit=limit,
from_token=from_token, from_token=from_token,
to_token=to_token, to_token=to_token,
) )
events = yield self.store.get_events_as_list( events = yield self.store.get_events_as_list(
[c["event_id"] for c in result.chunk] [c["event_id"] for c in pagination_chunk.chunk]
) )
now = self.clock.time_msec() now = self.clock.time_msec()
events = yield self._event_serializer.serialize_events(events, now) # We set bundle_aggregations to False when retrieving the original
# event because we want the content before relations were applied to
# it.
original_event = yield self._event_serializer.serialize_event(
event, now, bundle_aggregations=False
)
# Similarly, we don't allow relations to be applied to relations, so we
# return the original relations without any aggregations on top of them
# here.
events = yield self._event_serializer.serialize_events(
events, now, bundle_aggregations=False
)
return_value = result.to_dict() return_value = pagination_chunk.to_dict()
return_value["chunk"] = events return_value["chunk"] = events
return_value["original_event"] = original_event
defer.returnValue((200, return_value)) defer.returnValue((200, return_value))
@ -222,7 +242,7 @@ class RelationAggregationPaginationServlet(RestServlet):
# This checks that a) the event exists and b) the user is allowed to # This checks that a) the event exists and b) the user is allowed to
# view it. # view it.
yield self.event_handler.get_event(requester.user, room_id, parent_id) event = yield self.event_handler.get_event(requester.user, room_id, parent_id)
if relation_type not in (RelationTypes.ANNOTATION, None): if relation_type not in (RelationTypes.ANNOTATION, None):
raise SynapseError(400, "Relation type must be 'annotation'") raise SynapseError(400, "Relation type must be 'annotation'")
@ -231,21 +251,26 @@ class RelationAggregationPaginationServlet(RestServlet):
from_token = parse_string(request, "from") from_token = parse_string(request, "from")
to_token = parse_string(request, "to") to_token = parse_string(request, "to")
if from_token: if event.internal_metadata.is_redacted():
from_token = AggregationPaginationToken.from_string(from_token) # If the event is redacted, return an empty list of relations
pagination_chunk = PaginationChunk(chunk=[])
else:
# Return the relations
if from_token:
from_token = AggregationPaginationToken.from_string(from_token)
if to_token: if to_token:
to_token = AggregationPaginationToken.from_string(to_token) to_token = AggregationPaginationToken.from_string(to_token)
res = yield self.store.get_aggregation_groups_for_event( pagination_chunk = yield self.store.get_aggregation_groups_for_event(
event_id=parent_id, event_id=parent_id,
event_type=event_type, event_type=event_type,
limit=limit, limit=limit,
from_token=from_token, from_token=from_token,
to_token=to_token, to_token=to_token,
) )
defer.returnValue((200, res.to_dict())) defer.returnValue((200, pagination_chunk.to_dict()))
class RelationAggregationGroupPaginationServlet(RestServlet): class RelationAggregationGroupPaginationServlet(RestServlet):
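
Net effect on the pagination response: the parent event now rides along under a new `original_event` key (serialized with bundle_aggregations=False, i.e. its content before any relations were applied), and a redacted parent yields an empty chunk. Illustrative shape only; all field values below are made up:

example_response = {
    "chunk": [],  # empty when the parent event has been redacted
    "original_event": {
        # the parent event, serialized without bundled aggregations
        "type": "m.room.message",
        "event_id": "$parent_event_id",
        "content": {"msgtype": "m.text", "body": "hello"},
    },
}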


@ -67,7 +67,7 @@ class StorageProviderWrapper(StorageProvider):
backend (StorageProvider) backend (StorageProvider)
store_local (bool): Whether to store new local files or not. store_local (bool): Whether to store new local files or not.
store_synchronous (bool): Whether to wait for file to be successfully store_synchronous (bool): Whether to wait for file to be successfully
uploaded, or to do the upload in the backgroud. uploaded, or to do the upload in the background.
store_remote (bool): Whether remote media should be uploaded store_remote (bool): Whether remote media should be uploaded
""" """


@ -37,6 +37,7 @@ from synapse.logging.context import (
) )
from synapse.metrics.background_process_metrics import run_as_background_process from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import get_domain_from_id from synapse.types import get_domain_from_id
from synapse.util import batch_iter
from synapse.util.metrics import Measure from synapse.util.metrics import Measure
from ._base import SQLBaseStore from ._base import SQLBaseStore
@ -218,9 +219,108 @@ class EventsWorkerStore(SQLBaseStore):
if not event_ids: if not event_ids:
defer.returnValue([]) defer.returnValue([])
event_id_list = event_ids # there may be duplicates so we cast the list to a set
event_ids = set(event_ids) event_entry_map = yield self._get_events_from_cache_or_db(
set(event_ids), allow_rejected=allow_rejected
)
events = []
for event_id in event_ids:
entry = event_entry_map.get(event_id, None)
if not entry:
continue
if not allow_rejected:
assert not entry.event.rejected_reason, (
"rejected event returned from _get_events_from_cache_or_db despite "
"allow_rejected=False"
)
# We may not have had the original event when we received a redaction, so
# we have to recheck auth now.
if not allow_rejected and entry.event.type == EventTypes.Redaction:
redacted_event_id = entry.event.redacts
event_map = yield self._get_events_from_cache_or_db([redacted_event_id])
original_event_entry = event_map.get(redacted_event_id)
if not original_event_entry:
# we don't have the redacted event (or it was rejected).
#
# We assume that the redaction isn't authorized for now; if the
# redacted event later turns up, the redaction will be re-checked,
# and if it is found valid, the original will get redacted before it
# is served to the client.
logger.debug(
"Withholding redaction event %s since we don't (yet) have the "
"original %s",
event_id,
redacted_event_id,
)
continue
original_event = original_event_entry.event
if original_event.type == EventTypes.Create:
# we never serve redactions of Creates to clients.
logger.info(
"Withholding redaction %s of create event %s",
event_id,
redacted_event_id,
)
continue
if entry.event.internal_metadata.need_to_check_redaction():
original_domain = get_domain_from_id(original_event.sender)
redaction_domain = get_domain_from_id(entry.event.sender)
if original_domain != redaction_domain:
# the senders don't match, so this is forbidden
logger.info(
"Withholding redaction %s whose sender domain %s doesn't "
"match that of redacted event %s %s",
event_id,
redaction_domain,
redacted_event_id,
original_domain,
)
continue
# Update the cache to save doing the checks again.
entry.event.internal_metadata.recheck_redaction = False
if check_redacted and entry.redacted_event:
event = entry.redacted_event
else:
event = entry.event
events.append(event)
if get_prev_content:
if "replaces_state" in event.unsigned:
prev = yield self.get_event(
event.unsigned["replaces_state"],
get_prev_content=False,
allow_none=True,
)
if prev:
event.unsigned = dict(event.unsigned)
event.unsigned["prev_content"] = prev.content
event.unsigned["prev_sender"] = prev.sender
defer.returnValue(events)
@defer.inlineCallbacks
def _get_events_from_cache_or_db(self, event_ids, allow_rejected=False):
"""Fetch a bunch of events from the cache or the database.
If events are pulled from the database, they will be cached for future lookups.
Args:
event_ids (Iterable[str]): The event_ids of the events to fetch
allow_rejected (bool): Whether to include rejected events
Returns:
Deferred[Dict[str, _EventCacheEntry]]:
map from event id to result
"""
event_entry_map = self._get_events_from_cache( event_entry_map = self._get_events_from_cache(
event_ids, allow_rejected=allow_rejected event_ids, allow_rejected=allow_rejected
) )
@ -243,81 +343,7 @@ class EventsWorkerStore(SQLBaseStore):
event_entry_map.update(missing_events) event_entry_map.update(missing_events)
events = [] return event_entry_map
for event_id in event_id_list:
entry = event_entry_map.get(event_id, None)
if not entry:
continue
# Starting in room version v3, some redactions need to be rechecked if we
# didn't have the redacted event at the time, so we recheck on read
# instead.
if not allow_rejected and entry.event.type == EventTypes.Redaction:
if entry.event.internal_metadata.need_to_check_redaction():
# XXX: we need to avoid calling get_event here.
#
# The problem is that we end up at this point when an event
# which has been redacted is pulled out of the database by
# _enqueue_events, because _enqueue_events needs to check
# the redaction before it can cache the redacted event. So
# obviously, calling get_event to get the redacted event out
# of the database gives us an infinite loop.
#
# For now (quick hack to fix during 0.99 release cycle), we
# just go and fetch the relevant row from the db, but it
# would be nice to think about how we can cache this rather
# than hit the db every time we access a redaction event.
#
# One thought on how to do this:
# 1. split get_events_as_list up so that it is divided into
# (a) get the rawish event from the db/cache, (b) do the
# redaction/rejection filtering
# 2. have _get_event_from_row just call the first half of
# that
orig_sender = yield self._simple_select_one_onecol(
table="events",
keyvalues={"event_id": entry.event.redacts},
retcol="sender",
allow_none=True,
)
expected_domain = get_domain_from_id(entry.event.sender)
if (
orig_sender
and get_domain_from_id(orig_sender) == expected_domain
):
# This redaction event is allowed. Mark as not needing a
# recheck.
entry.event.internal_metadata.recheck_redaction = False
else:
# We don't have the event that is being redacted, so we
# assume that the event isn't authorized for now. (If we
# later receive the event, then we will always redact
# it anyway, since we have this redaction)
continue
if allow_rejected or not entry.event.rejected_reason:
if check_redacted and entry.redacted_event:
event = entry.redacted_event
else:
event = entry.event
events.append(event)
if get_prev_content:
if "replaces_state" in event.unsigned:
prev = yield self.get_event(
event.unsigned["replaces_state"],
get_prev_content=False,
allow_none=True,
)
if prev:
event.unsigned = dict(event.unsigned)
event.unsigned["prev_content"] = prev.content
event.unsigned["prev_sender"] = prev.sender
defer.returnValue(events)
def _invalidate_get_event_cache(self, event_id): def _invalidate_get_event_cache(self, event_id):
self._get_event_cache.invalidate((event_id,)) self._get_event_cache.invalidate((event_id,))
@ -326,8 +352,8 @@ class EventsWorkerStore(SQLBaseStore):
"""Fetch events from the caches """Fetch events from the caches
Args: Args:
events (list(str)): list of event_ids to fetch events (Iterable[str]): list of event_ids to fetch
allow_rejected (bool): Whether to teturn events that were rejected allow_rejected (bool): Whether to return events that were rejected
update_metrics (bool): Whether to update the cache hit ratio metrics update_metrics (bool): Whether to update the cache hit ratio metrics
Returns: Returns:
@ -384,19 +410,16 @@ class EventsWorkerStore(SQLBaseStore):
The fetch requests. Each entry consists of a list of event The fetch requests. Each entry consists of a list of event
ids to be fetched, and a deferred to be completed once the ids to be fetched, and a deferred to be completed once the
events have been fetched. events have been fetched.
""" """
with Measure(self._clock, "_fetch_event_list"): with Measure(self._clock, "_fetch_event_list"):
try: try:
event_id_lists = list(zip(*event_list))[0] event_id_lists = list(zip(*event_list))[0]
event_ids = [item for sublist in event_id_lists for item in sublist] event_ids = [item for sublist in event_id_lists for item in sublist]
rows = self._new_transaction( row_dict = self._new_transaction(
conn, "do_fetch", [], [], self._fetch_event_rows, event_ids conn, "do_fetch", [], [], self._fetch_event_rows, event_ids
) )
row_dict = {r["event_id"]: r for r in rows}
# We only want to resolve deferreds from the main thread # We only want to resolve deferreds from the main thread
def fire(lst, res): def fire(lst, res):
for ids, d in lst: for ids, d in lst:
@ -454,7 +477,7 @@ class EventsWorkerStore(SQLBaseStore):
logger.debug("Loaded %d events (%d rows)", len(events), len(rows)) logger.debug("Loaded %d events (%d rows)", len(events), len(rows))
if not allow_rejected: if not allow_rejected:
rows[:] = [r for r in rows if not r["rejects"]] rows[:] = [r for r in rows if r["rejected_reason"] is None]
res = yield make_deferred_yieldable( res = yield make_deferred_yieldable(
defer.gatherResults( defer.gatherResults(
@ -463,8 +486,8 @@ class EventsWorkerStore(SQLBaseStore):
self._get_event_from_row, self._get_event_from_row,
row["internal_metadata"], row["internal_metadata"],
row["json"], row["json"],
row["redacts"], row["redactions"],
rejected_reason=row["rejects"], rejected_reason=row["rejected_reason"],
format_version=row["format_version"], format_version=row["format_version"],
) )
for row in rows for row in rows
@ -475,49 +498,98 @@ class EventsWorkerStore(SQLBaseStore):
defer.returnValue({e.event.event_id: e for e in res if e}) defer.returnValue({e.event.event_id: e for e in res if e})
def _fetch_event_rows(self, txn, events): def _fetch_event_rows(self, txn, event_ids):
rows = [] """Fetch event rows from the database
N = 200
for i in range(1 + len(events) // N):
evs = events[i * N : (i + 1) * N]
if not evs:
break
Events which are not found are omitted from the result.
The returned per-event dicts contain the following keys:
* event_id (str)
* json (str): json-encoded event structure
* internal_metadata (str): json-encoded internal metadata dict
* format_version (int|None): The format of the event. Hopefully one
of EventFormatVersions. 'None' means the event predates
EventFormatVersions (so the event is format V1).
* rejected_reason (str|None): if the event was rejected, the reason
why.
* redactions (List[str]): a list of event-ids which (claim to) redact
this event.
Args:
txn (twisted.enterprise.adbapi.Connection):
event_ids (Iterable[str]): event IDs to fetch
Returns:
Dict[str, Dict]: a map from event id to event info.
"""
event_dict = {}
for evs in batch_iter(event_ids, 200):
sql = ( sql = (
"SELECT " "SELECT "
" e.event_id as event_id, " " e.event_id, "
" e.internal_metadata," " e.internal_metadata,"
" e.json," " e.json,"
" e.format_version, " " e.format_version, "
" r.redacts as redacts," " rej.reason "
" rej.event_id as rejects "
" FROM event_json as e" " FROM event_json as e"
" LEFT JOIN rejections as rej USING (event_id)" " LEFT JOIN rejections as rej USING (event_id)"
" LEFT JOIN redactions as r ON e.event_id = r.redacts"
" WHERE e.event_id IN (%s)" " WHERE e.event_id IN (%s)"
) % (",".join(["?"] * len(evs)),) ) % (",".join(["?"] * len(evs)),)
txn.execute(sql, evs) txn.execute(sql, evs)
rows.extend(self.cursor_to_dict(txn))
return rows for row in txn:
event_id = row[0]
event_dict[event_id] = {
"event_id": event_id,
"internal_metadata": row[1],
"json": row[2],
"format_version": row[3],
"rejected_reason": row[4],
"redactions": [],
}
# check for redactions
redactions_sql = (
"SELECT event_id, redacts FROM redactions WHERE redacts IN (%s)"
) % (",".join(["?"] * len(evs)),)
txn.execute(redactions_sql, evs)
for (redacter, redacted) in txn:
d = event_dict.get(redacted)
if d:
d["redactions"].append(redacter)
return event_dict
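
`_fetch_event_rows` now leans on `batch_iter` (newly imported above) to cap each SELECT at 200 bound parameters. A minimal equivalent of what `batch_iter` does, for reference:

from itertools import islice

def batch_iter(iterable, size):
    # Yield successive fixed-size tuples from any iterable; the final
    # batch may be shorter. This keeps the IN (...) lists bounded.
    sourceiter = iter(iterable)
    while True:
        batch = tuple(islice(sourceiter, size))
        if not batch:
            return
        yield batch

assert list(batch_iter(range(5), 2)) == [(0, 1), (2, 3), (4,)]
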
@defer.inlineCallbacks @defer.inlineCallbacks
def _get_event_from_row( def _get_event_from_row(
self, internal_metadata, js, redacted, format_version, rejected_reason=None self, internal_metadata, js, redactions, format_version, rejected_reason=None
): ):
"""Parse an event row which has been read from the database
Args:
internal_metadata (str): json-encoded internal_metadata column
js (str): json-encoded event body from event_json
redactions (list[str]): a list of the events which claim to have redacted
this event, from the redactions table
format_version (int|None): the 'format_version' column
rejected_reason (str|None): the reason this event was rejected, if any
Returns:
_EventCacheEntry
"""
with Measure(self._clock, "_get_event_from_row"): with Measure(self._clock, "_get_event_from_row"):
d = json.loads(js) d = json.loads(js)
internal_metadata = json.loads(internal_metadata) internal_metadata = json.loads(internal_metadata)
if rejected_reason:
rejected_reason = yield self._simple_select_one_onecol(
table="rejections",
keyvalues={"event_id": rejected_reason},
retcol="reason",
desc="_get_event_from_row_rejected_reason",
)
if format_version is None: if format_version is None:
# This means that we stored the event before we had the concept # This means that we stored the event before we had the concept
# of a event format version, so it must be a V1 event. # of a event format version, so it must be a V1 event.
@ -529,41 +601,7 @@ class EventsWorkerStore(SQLBaseStore):
rejected_reason=rejected_reason, rejected_reason=rejected_reason,
) )
redacted_event = None redacted_event = yield self._maybe_redact_event_row(original_ev, redactions)
if redacted:
redacted_event = prune_event(original_ev)
redaction_id = yield self._simple_select_one_onecol(
table="redactions",
keyvalues={"redacts": redacted_event.event_id},
retcol="event_id",
desc="_get_event_from_row_redactions",
)
redacted_event.unsigned["redacted_by"] = redaction_id
# Get the redaction event.
because = yield self.get_event(
redaction_id, check_redacted=False, allow_none=True
)
if because:
# It's fine to do add the event directly, since get_pdu_json
# will serialise this field correctly
redacted_event.unsigned["redacted_because"] = because
# Starting in room version v3, some redactions need to be
# rechecked if we didn't have the redacted event at the
# time, so we recheck on read instead.
if because.internal_metadata.need_to_check_redaction():
expected_domain = get_domain_from_id(original_ev.sender)
if get_domain_from_id(because.sender) == expected_domain:
# This redaction event is allowed. Mark as not needing a
# recheck.
because.internal_metadata.recheck_redaction = False
else:
# Senders don't match, so the event isn't actually redacted
redacted_event = None
cache_entry = _EventCacheEntry( cache_entry = _EventCacheEntry(
event=original_ev, redacted_event=redacted_event event=original_ev, redacted_event=redacted_event
@ -573,6 +611,60 @@ class EventsWorkerStore(SQLBaseStore):
defer.returnValue(cache_entry) defer.returnValue(cache_entry)
@defer.inlineCallbacks
def _maybe_redact_event_row(self, original_ev, redactions):
"""Given an event object and a list of possible redacting event ids,
determine whether to honour any of those redactions and if so return a redacted
event.
Args:
original_ev (EventBase):
redactions (iterable[str]): list of event ids of potential redaction events
Returns:
Deferred[EventBase|None]: if the event should be redacted, a pruned
event object. Otherwise, None.
"""
if original_ev.type == "m.room.create":
# we choose to ignore redactions of m.room.create events.
return None
redaction_map = yield self._get_events_from_cache_or_db(redactions)
for redaction_id in redactions:
redaction_entry = redaction_map.get(redaction_id)
if not redaction_entry:
# we don't have the redaction event, or the redaction event was not
# authorized.
continue
redaction_event = redaction_entry.event
# Starting in room version v3, some redactions need to be
# rechecked if we didn't have the redacted event at the
# time, so we recheck on read instead.
if redaction_event.internal_metadata.need_to_check_redaction():
expected_domain = get_domain_from_id(original_ev.sender)
if get_domain_from_id(redaction_event.sender) == expected_domain:
# This redaction event is allowed. Mark as not needing a recheck.
redaction_event.internal_metadata.recheck_redaction = False
else:
# Senders don't match, so the event isn't actually redacted
continue
# we found a good redaction event. Redact!
redacted_event = prune_event(original_ev)
redacted_event.unsigned["redacted_by"] = redaction_id
# It's fine to add the event directly, since get_pdu_json
# will serialise this field correctly
redacted_event.unsigned["redacted_because"] = redaction_event
return redacted_event
# no valid redaction found for this event
return None
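
The sender-domain comparison above relies on `get_domain_from_id`, which extracts the server name from a Matrix ID. A simplified sketch (the real helper raises a richer Synapse-specific error):

def get_domain_from_id(string):
    # "@alice:example.com" -> "example.com"; everything after the first
    # colon is the server name.
    idx = string.find(":")
    if idx == -1:
        raise ValueError("Invalid ID: %r" % (string,))
    return string[idx + 1 :]

assert get_domain_from_id("@alice:example.com") == "example.com"
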
@defer.inlineCallbacks @defer.inlineCallbacks
def have_events_in_timeline(self, event_ids): def have_events_in_timeline(self, event_ids):
"""Given a list of event ids, check if we have already processed and """Given a list of event ids, check if we have already processed and


@ -90,7 +90,8 @@ class RegistrationWorkerStore(SQLBaseStore):
token (str): The access token of a user. token (str): The access token of a user.
Returns: Returns:
defer.Deferred: None, if the token did not match, otherwise dict defer.Deferred: None, if the token did not match, otherwise dict
including the keys `name`, `is_guest`, `device_id`, `token_id`. including the keys `name`, `is_guest`, `device_id`, `token_id`,
`valid_until_ms`.
""" """
return self.runInteraction( return self.runInteraction(
"get_user_by_access_token", self._query_for_auth, token "get_user_by_access_token", self._query_for_auth, token
@ -284,7 +285,7 @@ class RegistrationWorkerStore(SQLBaseStore):
def _query_for_auth(self, txn, token): def _query_for_auth(self, txn, token):
sql = ( sql = (
"SELECT users.name, users.is_guest, access_tokens.id as token_id," "SELECT users.name, users.is_guest, access_tokens.id as token_id,"
" access_tokens.device_id" " access_tokens.device_id, access_tokens.valid_until_ms"
" FROM users" " FROM users"
" INNER JOIN access_tokens on users.name = access_tokens.user_id" " INNER JOIN access_tokens on users.name = access_tokens.user_id"
" WHERE token = ?" " WHERE token = ?"
@ -432,19 +433,6 @@ class RegistrationWorkerStore(SQLBaseStore):
) )
) )
@defer.inlineCallbacks
def get_3pid_guest_access_token(self, medium, address):
ret = yield self._simple_select_one(
"threepid_guest_access_tokens",
{"medium": medium, "address": address},
["guest_access_token"],
True,
"get_3pid_guest_access_token",
)
if ret:
defer.returnValue(ret["guest_access_token"])
defer.returnValue(None)
@defer.inlineCallbacks @defer.inlineCallbacks
def get_user_id_by_threepid(self, medium, address, require_verified=False): def get_user_id_by_threepid(self, medium, address, require_verified=False):
"""Returns user id from threepid """Returns user id from threepid
@ -616,7 +604,7 @@ class RegistrationStore(
) )
self.register_background_update_handler( self.register_background_update_handler(
"users_set_deactivated_flag", self._backgroud_update_set_deactivated_flag "users_set_deactivated_flag", self._background_update_set_deactivated_flag
) )
# Create a background job for culling expired 3PID validity tokens # Create a background job for culling expired 3PID validity tokens
@ -631,14 +619,14 @@ class RegistrationStore(
hs.get_clock().looping_call(start_cull, THIRTY_MINUTES_IN_MS) hs.get_clock().looping_call(start_cull, THIRTY_MINUTES_IN_MS)
@defer.inlineCallbacks @defer.inlineCallbacks
def _backgroud_update_set_deactivated_flag(self, progress, batch_size): def _background_update_set_deactivated_flag(self, progress, batch_size):
"""Retrieves a list of all deactivated users and sets the 'deactivated' flag to 1 """Retrieves a list of all deactivated users and sets the 'deactivated' flag to 1
for each of them. for each of them.
""" """
last_user = progress.get("user_id", "") last_user = progress.get("user_id", "")
def _backgroud_update_set_deactivated_flag_txn(txn): def _background_update_set_deactivated_flag_txn(txn):
txn.execute( txn.execute(
""" """
SELECT SELECT
@ -683,7 +671,7 @@ class RegistrationStore(
return False return False
end = yield self.runInteraction( end = yield self.runInteraction(
"users_set_deactivated_flag", _backgroud_update_set_deactivated_flag_txn "users_set_deactivated_flag", _background_update_set_deactivated_flag_txn
) )
if end: if end:
@ -692,14 +680,16 @@ class RegistrationStore(
defer.returnValue(batch_size) defer.returnValue(batch_size)
@defer.inlineCallbacks @defer.inlineCallbacks
def add_access_token_to_user(self, user_id, token, device_id=None): def add_access_token_to_user(self, user_id, token, device_id, valid_until_ms):
"""Adds an access token for the given user. """Adds an access token for the given user.
Args: Args:
user_id (str): The user ID. user_id (str): The user ID.
token (str): The new access token to add. token (str): The new access token to add.
device_id (str): ID of the device to associate with the access device_id (str): ID of the device to associate with the access
token token
valid_until_ms (int|None): when the token is valid until. None for
no expiry.
Raises: Raises:
StoreError if there was a problem adding this. StoreError if there was a problem adding this.
""" """
@ -707,14 +697,19 @@ class RegistrationStore(
yield self._simple_insert( yield self._simple_insert(
"access_tokens", "access_tokens",
{"id": next_id, "user_id": user_id, "token": token, "device_id": device_id}, {
"id": next_id,
"user_id": user_id,
"token": token,
"device_id": device_id,
"valid_until_ms": valid_until_ms,
},
desc="add_access_token_to_user", desc="add_access_token_to_user",
) )
def register( def register_user(
self, self,
user_id, user_id,
token=None,
password_hash=None, password_hash=None,
was_guest=False, was_guest=False,
make_guest=False, make_guest=False,
@ -727,9 +722,6 @@ class RegistrationStore(
Args: Args:
user_id (str): The desired user ID to register. user_id (str): The desired user ID to register.
token (str): The desired access token to use for this user. If this
is not None, the given access token is associated with the user
id.
password_hash (str): Optional. The password hash for this user. password_hash (str): Optional. The password hash for this user.
was_guest (bool): Optional. Whether this is a guest account being was_guest (bool): Optional. Whether this is a guest account being
upgraded to a non-guest account. upgraded to a non-guest account.
@ -746,10 +738,9 @@ class RegistrationStore(
StoreError if the user_id could not be registered. StoreError if the user_id could not be registered.
""" """
return self.runInteraction( return self.runInteraction(
"register", "register_user",
self._register, self._register_user,
user_id, user_id,
token,
password_hash, password_hash,
was_guest, was_guest,
make_guest, make_guest,
@ -759,11 +750,10 @@ class RegistrationStore(
user_type, user_type,
) )
def _register( def _register_user(
self, self,
txn, txn,
user_id, user_id,
token,
password_hash, password_hash,
was_guest, was_guest,
make_guest, make_guest,
@ -776,8 +766,6 @@ class RegistrationStore(
now = int(self.clock.time()) now = int(self.clock.time())
next_id = self._access_tokens_id_gen.get_next()
try: try:
if was_guest: if was_guest:
# Ensure that the guest user actually exists # Ensure that the guest user actually exists
@ -825,14 +813,6 @@ class RegistrationStore(
if self._account_validity.enabled: if self._account_validity.enabled:
self.set_expiration_date_for_user_txn(txn, user_id) self.set_expiration_date_for_user_txn(txn, user_id)
if token:
# it's possible for this to get a conflict, but only for a single user
# since tokens are namespaced based on their user ID
txn.execute(
"INSERT INTO access_tokens(id, user_id, token)" " VALUES (?,?,?)",
(next_id, user_id, token),
)
if create_profile_with_displayname: if create_profile_with_displayname:
# set a default displayname serverside to avoid ugly race # set a default displayname serverside to avoid ugly race
# between auto-joins and clients trying to set displaynames # between auto-joins and clients trying to set displaynames
@ -979,40 +959,6 @@ class RegistrationStore(
defer.returnValue(res if res else False) defer.returnValue(res if res else False)
@defer.inlineCallbacks
def save_or_get_3pid_guest_access_token(
self, medium, address, access_token, inviter_user_id
):
"""
Gets the 3pid's guest access token if exists, else saves access_token.
Args:
medium (str): Medium of the 3pid. Must be "email".
address (str): 3pid address.
access_token (str): The access token to persist if none is
already persisted.
inviter_user_id (str): User ID of the inviter.
Returns:
deferred str: Whichever access token is persisted at the end
of this function call.
"""
def insert(txn):
txn.execute(
"INSERT INTO threepid_guest_access_tokens "
"(medium, address, guest_access_token, first_inviter) "
"VALUES (?, ?, ?, ?)",
(medium, address, access_token, inviter_user_id),
)
try:
yield self.runInteraction("save_3pid_guest_access_token", insert)
defer.returnValue(access_token)
except self.database_engine.module.IntegrityError:
ret = yield self.get_3pid_guest_access_token(medium, address)
defer.returnValue(ret)
def add_user_pending_deactivation(self, user_id): def add_user_pending_deactivation(self, user_id):
""" """
Adds a user to the table of users who need to be parted from all the rooms they're Adds a user to the table of users who need to be parted from all the rooms they're


@ -60,7 +60,7 @@ class PaginationChunk(object):
class RelationPaginationToken(object): class RelationPaginationToken(object):
"""Pagination token for relation pagination API. """Pagination token for relation pagination API.
As the results are order by topological ordering, we can use the As the results are in topological order, we can use the
`topological_ordering` and `stream_ordering` fields of the events at the `topological_ordering` and `stream_ordering` fields of the events at the
boundaries of the chunk as pagination tokens. boundaries of the chunk as pagination tokens.


@ -575,6 +575,26 @@ class RoomMemberWorkerStore(EventsWorkerStore):
count = yield self.runInteraction("did_forget_membership", f) count = yield self.runInteraction("did_forget_membership", f)
defer.returnValue(count == 0) defer.returnValue(count == 0)
@defer.inlineCallbacks
def get_rooms_user_has_been_in(self, user_id):
"""Get all rooms that the user has ever been in.
Args:
user_id (str)
Returns:
Deferred[set[str]]: Set of room IDs.
"""
room_ids = yield self._simple_select_onecol(
table="room_memberships",
keyvalues={"membership": Membership.JOIN, "user_id": user_id},
retcol="room_id",
desc="get_rooms_user_has_been_in",
)
return set(room_ids)
class RoomMemberStore(RoomMemberWorkerStore): class RoomMemberStore(RoomMemberWorkerStore):
def __init__(self, db_conn, hs): def __init__(self, db_conn, hs):


@ -0,0 +1,18 @@
/* Copyright 2019 The Matrix.org Foundation C.I.C.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- when this access token can be used until, in ms since the epoch. NULL means the token
-- never expires.
ALTER TABLE access_tokens ADD COLUMN valid_until_ms BIGINT;
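
With the new column, token validation can treat NULL as "never expires". A hedged sketch of the check a caller might apply to the row returned by get_user_by_access_token (helper name is made up):

def token_still_valid(row, now_ms):
    # row now includes "valid_until_ms"; None (SQL NULL) means the
    # token never expires.
    valid_until_ms = row.get("valid_until_ms")
    return valid_until_ms is None or now_ms < valid_until_ms

assert token_still_valid({"valid_until_ms": None}, 1000)
assert not token_still_valid({"valid_until_ms": 500}, 1000)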


@ -833,7 +833,9 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
Returns: Returns:
Deferred[tuple[list[_EventDictReturn], str]]: Returns the results Deferred[tuple[list[_EventDictReturn], str]]: Returns the results
as a list of _EventDictReturn and a token that points to the end as a list of _EventDictReturn and a token that points to the end
of the result set. of the result set. If no events are returned then the end of the
stream has been reached (i.e. there are no events between
`from_token` and `to_token`), or `limit` is zero.
""" """
assert int(limit) >= 0 assert int(limit) >= 0
@ -905,15 +907,15 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
only those before only those before
direction(char): Either 'b' or 'f' to indicate whether we are direction(char): Either 'b' or 'f' to indicate whether we are
paginating forwards or backwards from `from_key`. paginating forwards or backwards from `from_key`.
limit (int): The maximum number of events to return. Zero or less limit (int): The maximum number of events to return.
means no limit.
event_filter (Filter|None): If provided filters the events to event_filter (Filter|None): If provided filters the events to
those that match the filter. those that match the filter.
Returns: Returns:
tuple[list[dict], str]: Returns the results as a list of dicts and tuple[list[FrozenEvent], str]: Returns the results as a list of
a token that points to the end of the result set. The dicts have events and a token that points to the end of the result set. If no
the keys "event_id", "topological_ordering" and "stream_orderign". events are returned then the end of the stream has been reached
(i.e. there are no events between `from_key` and `to_key`).
""" """
from_key = RoomStreamToken.parse(from_key) from_key = RoomStreamToken.parse(from_key)


@ -133,34 +133,6 @@ class TransactionStore(SQLBaseStore):
desc="set_received_txn_response", desc="set_received_txn_response",
) )
def prep_send_transaction(self, transaction_id, destination, origin_server_ts):
"""Persists an outgoing transaction and calculates the values for the
previous transaction id list.
This should be called before sending the transaction so that it has the
correct value for the `prev_ids` key.
Args:
transaction_id (str)
destination (str)
origin_server_ts (int)
Returns:
list: A list of previous transaction ids.
"""
return defer.succeed([])
def delivered_txn(self, transaction_id, destination, code, response_dict):
"""Persists the response for an outgoing transaction.
Args:
transaction_id (str)
destination (str)
code (int)
response_json (str)
"""
pass
@defer.inlineCallbacks @defer.inlineCallbacks
def get_destination_retry_timings(self, destination): def get_destination_retry_timings(self, destination):
"""Gets the current retry timings (if any) for a given destination. """Gets the current retry timings (if any) for a given destination.


@ -36,9 +36,11 @@ class FederationRateLimiter(object):
clock (Clock) clock (Clock)
config (FederationRateLimitConfig) config (FederationRateLimitConfig)
""" """
self.clock = clock
self._config = config def new_limiter():
self.ratelimiters = {} return _PerHostRatelimiter(clock=clock, config=config)
self.ratelimiters = collections.defaultdict(new_limiter)
def ratelimit(self, host): def ratelimit(self, host):
"""Used to ratelimit an incoming request from given host """Used to ratelimit an incoming request from given host
@ -53,11 +55,9 @@ class FederationRateLimiter(object):
host (str): Origin of incoming request. host (str): Origin of incoming request.
Returns: Returns:
_PerHostRatelimiter context manager which returns a deferred.
""" """
return self.ratelimiters.setdefault( return self.ratelimiters[host].ratelimit()
host, _PerHostRatelimiter(clock=self.clock, config=self._config)
).ratelimit()
class _PerHostRatelimiter(object): class _PerHostRatelimiter(object):
@ -122,7 +122,7 @@ class _PerHostRatelimiter(object):
self.request_times.append(time_now) self.request_times.append(time_now)
def queue_request(): def queue_request():
if len(self.current_processing) > self.concurrent_requests: if len(self.current_processing) >= self.concurrent_requests:
queue_defer = defer.Deferred() queue_defer = defer.Deferred()
self.ready_request_queue[request_id] = queue_defer self.ready_request_queue[request_id] = queue_defer
logger.info( logger.info(
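
Two fixes here: `setdefault` constructed a fresh `_PerHostRatelimiter` on every request even when the host already had one, which `collections.defaultdict` avoids by constructing lazily on first access; and the queue check is now `>=`, so a host is queued once it has reached (not exceeded) the concurrency cap. A minimal demonstration of the defaultdict point:

import collections

class Limiter(object):
    instances = 0

    def __init__(self):
        Limiter.instances += 1

limiters = collections.defaultdict(Limiter)
limiters["example.com"]  # first access constructs the limiter
limiters["example.com"]  # second access reuses it
assert Limiter.instances == 1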


@ -24,10 +24,6 @@ Newly created users see their own presence in /initialSync (SYT-34)
# Blacklisted due to https://github.com/matrix-org/synapse/issues/1396 # Blacklisted due to https://github.com/matrix-org/synapse/issues/1396
Should reject keys claiming to belong to a different user Should reject keys claiming to belong to a different user
# Blacklisted due to https://github.com/matrix-org/synapse/issues/2306
Users appear/disappear from directory when join_rules are changed
Users appear/disappear from directory when history_visibility are changed
# Blacklisted due to https://github.com/matrix-org/synapse/issues/1531 # Blacklisted due to https://github.com/matrix-org/synapse/issues/1531
Enabling an unknown default rule fails with 404 Enabling an unknown default rule fails with 404


@ -21,7 +21,14 @@ from twisted.internet import defer
import synapse.handlers.auth import synapse.handlers.auth
from synapse.api.auth import Auth from synapse.api.auth import Auth
-from synapse.api.errors import AuthError, Codes, ResourceLimitError
+from synapse.api.errors import (
+    AuthError,
+    Codes,
+    InvalidClientCredentialsError,
+    InvalidClientTokenError,
+    MissingClientTokenError,
+    ResourceLimitError,
+)
 from synapse.types import UserID
 from tests import unittest
@@ -70,7 +77,9 @@ class AuthTestCase(unittest.TestCase):
         request.args[b"access_token"] = [self.test_token]
         request.requestHeaders.getRawHeaders = mock_getRawHeaders()
         d = self.auth.get_user_by_req(request)
-        self.failureResultOf(d, AuthError)
+        f = self.failureResultOf(d, InvalidClientTokenError).value
+        self.assertEqual(f.code, 401)
+        self.assertEqual(f.errcode, "M_UNKNOWN_TOKEN")

     def test_get_user_by_req_user_missing_token(self):
         user_info = {"name": self.test_user, "token_id": "ditto"}
@@ -79,7 +88,9 @@ class AuthTestCase(unittest.TestCase):
         request = Mock(args={})
         request.requestHeaders.getRawHeaders = mock_getRawHeaders()
         d = self.auth.get_user_by_req(request)
-        self.failureResultOf(d, AuthError)
+        f = self.failureResultOf(d, MissingClientTokenError).value
+        self.assertEqual(f.code, 401)
+        self.assertEqual(f.errcode, "M_MISSING_TOKEN")

     @defer.inlineCallbacks
     def test_get_user_by_req_appservice_valid_token(self):
@@ -133,7 +144,9 @@ class AuthTestCase(unittest.TestCase):
         request.args[b"access_token"] = [self.test_token]
         request.requestHeaders.getRawHeaders = mock_getRawHeaders()
         d = self.auth.get_user_by_req(request)
-        self.failureResultOf(d, AuthError)
+        f = self.failureResultOf(d, InvalidClientTokenError).value
+        self.assertEqual(f.code, 401)
+        self.assertEqual(f.errcode, "M_UNKNOWN_TOKEN")

     def test_get_user_by_req_appservice_bad_token(self):
         self.store.get_app_service_by_token = Mock(return_value=None)
@@ -143,7 +156,9 @@ class AuthTestCase(unittest.TestCase):
         request.args[b"access_token"] = [self.test_token]
         request.requestHeaders.getRawHeaders = mock_getRawHeaders()
         d = self.auth.get_user_by_req(request)
-        self.failureResultOf(d, AuthError)
+        f = self.failureResultOf(d, InvalidClientTokenError).value
+        self.assertEqual(f.code, 401)
+        self.assertEqual(f.errcode, "M_UNKNOWN_TOKEN")

     def test_get_user_by_req_appservice_missing_token(self):
         app_service = Mock(token="foobar", url="a_url", sender=self.test_user)
@@ -153,7 +168,9 @@ class AuthTestCase(unittest.TestCase):
         request = Mock(args={})
         request.requestHeaders.getRawHeaders = mock_getRawHeaders()
         d = self.auth.get_user_by_req(request)
-        self.failureResultOf(d, AuthError)
+        f = self.failureResultOf(d, MissingClientTokenError).value
+        self.assertEqual(f.code, 401)
+        self.assertEqual(f.errcode, "M_MISSING_TOKEN")

     @defer.inlineCallbacks
     def test_get_user_by_req_appservice_valid_token_valid_user_id(self):
@@ -244,10 +261,12 @@ class AuthTestCase(unittest.TestCase):
         USER_ID = "@percy:matrix.org"
         self.store.add_access_token_to_user = Mock()

-        token = yield self.hs.handlers.auth_handler.issue_access_token(
-            USER_ID, "DEVICE"
+        token = yield self.hs.handlers.auth_handler.get_access_token_for_user_id(
+            USER_ID, "DEVICE", valid_until_ms=None
+        )
+        self.store.add_access_token_to_user.assert_called_with(
+            USER_ID, token, "DEVICE", None
         )
-        self.store.add_access_token_to_user.assert_called_with(USER_ID, token, "DEVICE")

         def get_user(tok):
             if token != tok:
@@ -280,7 +299,7 @@ class AuthTestCase(unittest.TestCase):
         request.args[b"access_token"] = [guest_tok.encode("ascii")]
         request.requestHeaders.getRawHeaders = mock_getRawHeaders()

-        with self.assertRaises(AuthError) as cm:
+        with self.assertRaises(InvalidClientCredentialsError) as cm:
             yield self.auth.get_user_by_req(request, allow_guest=True)

         self.assertEqual(401, cm.exception.code)
@@ -325,7 +344,7 @@ class AuthTestCase(unittest.TestCase):
         unknown_threepid = {"medium": "email", "address": "unreserved@server.com"}
         self.hs.config.mau_limits_reserved_threepids = [threepid]

-        yield self.store.register(user_id="user1", token="123", password_hash=None)
+        yield self.store.register_user(user_id="user1", password_hash=None)

         with self.assertRaises(ResourceLimitError):
             yield self.auth.check_auth_blocking()
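
These assertions pin down the shape of the new auth error hierarchy: both token errors surface as HTTP 401 and differ only in their Matrix errcode. A minimal sketch of what the imports and assertions above imply — the real definitions live in synapse.api.errors, and the exact inheritance layout here is an assumption based on the class names:

# Sketch only: reconstructed from the imports/assertions above,
# not copied from synapse.api.errors.
class InvalidClientCredentialsError(Exception):
    """Base for 401 client-credential failures."""

    def __init__(self, msg, errcode):
        super().__init__(msg)
        self.code = 401         # HTTP status asserted in the tests
        self.errcode = errcode  # Matrix-level error code

class InvalidClientTokenError(InvalidClientCredentialsError):
    def __init__(self, msg="Invalid access token passed."):
        super().__init__(msg, "M_UNKNOWN_TOKEN")

class MissingClientTokenError(InvalidClientCredentialsError):
    def __init__(self, msg="Missing access token."):
        super().__init__(msg, "M_MISSING_TOKEN")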

View File

@@ -0,0 +1,40 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.config.homeserver import HomeServerConfig

from tests.unittest import TestCase
from tests.utils import default_config


class RatelimitConfigTestCase(TestCase):
    def test_parse_rc_federation(self):
        config_dict = default_config("test")
        config_dict["rc_federation"] = {
            "window_size": 20000,
            "sleep_limit": 693,
            "sleep_delay": 252,
            "reject_limit": 198,
            "concurrent": 7,
        }

        config = HomeServerConfig()
        config.parse_config_dict(config_dict, "", "")
        config_obj = config.rc_federation

        self.assertEqual(config_obj.window_size, 20000)
        self.assertEqual(config_obj.sleep_limit, 693)
        self.assertEqual(config_obj.sleep_delay, 252)
        self.assertEqual(config_obj.reject_limit, 198)
        self.assertEqual(config_obj.concurrent, 7)
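
For context, these five settings are the knobs of the federation rate limiter. As a rough illustration of their semantics only (a toy, not Synapse's implementation, which is per-origin and Twisted-based), a sliding-window limiter driven by the same five values might look like:

import time
from collections import deque

class ToyWindowedLimiter:
    def __init__(self, window_size, sleep_limit, sleep_delay, reject_limit, concurrent):
        self.window_size = window_size / 1000.0  # ms -> seconds
        self.sleep_limit = sleep_limit           # requests before soft back-pressure
        self.sleep_delay = sleep_delay / 1000.0  # pause length, ms -> seconds
        self.reject_limit = reject_limit         # requests before hard rejection
        self.concurrent = concurrent             # unused in this toy
        self._times = deque()                    # timestamps of recent requests

    def on_request(self):
        now = time.monotonic()
        while self._times and now - self._times[0] > self.window_size:
            self._times.popleft()  # forget requests outside the window
        if len(self._times) >= self.reject_limit:
            raise RuntimeError("rejecting: too many requests in window")
        if len(self._times) >= self.sleep_limit:
            time.sleep(self.sleep_delay)  # slow the caller down before rejecting
        self._times.append(now)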

View File

@@ -0,0 +1,210 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from collections import Counter

from mock import Mock

import synapse.api.errors
import synapse.handlers.admin
import synapse.rest.admin
import synapse.storage
from synapse.api.constants import EventTypes
from synapse.rest.client.v1 import login, room

from tests import unittest


class ExfiltrateData(unittest.HomeserverTestCase):
    servlets = [
        synapse.rest.admin.register_servlets_for_client_rest_resource,
        login.register_servlets,
        room.register_servlets,
    ]

    def prepare(self, reactor, clock, hs):
        self.admin_handler = hs.get_handlers().admin_handler

        self.user1 = self.register_user("user1", "password")
        self.token1 = self.login("user1", "password")

        self.user2 = self.register_user("user2", "password")
        self.token2 = self.login("user2", "password")

    def test_single_public_joined_room(self):
        """Test that we write *all* events for a public room"""
        room_id = self.helper.create_room_as(
            self.user1, tok=self.token1, is_public=True
        )
        self.helper.send(room_id, body="Hello!", tok=self.token1)
        self.helper.join(room_id, self.user2, tok=self.token2)
        self.helper.send(room_id, body="Hello again!", tok=self.token1)

        writer = Mock()

        self.get_success(self.admin_handler.export_user_data(self.user2, writer))

        writer.write_events.assert_called()

        # Since we can see all events there shouldn't be any extremities, so no
        # state should be written
        writer.write_state.assert_not_called()

        # Collect all events that were written
        written_events = []
        for (called_room_id, events), _ in writer.write_events.call_args_list:
            self.assertEqual(called_room_id, room_id)
            written_events.extend(events)

        # Check that the right number of events were written
        counter = Counter(
            (event.type, getattr(event, "state_key", None)) for event in written_events
        )
        self.assertEqual(counter[(EventTypes.Message, None)], 2)
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 1)

    def test_single_private_joined_room(self):
        """Tests that we correctly write state when we can't see all events in
        a room.
        """
        room_id = self.helper.create_room_as(self.user1, tok=self.token1)
        self.helper.send_state(
            room_id,
            EventTypes.RoomHistoryVisibility,
            body={"history_visibility": "joined"},
            tok=self.token1,
        )
        self.helper.send(room_id, body="Hello!", tok=self.token1)
        self.helper.join(room_id, self.user2, tok=self.token2)
        self.helper.send(room_id, body="Hello again!", tok=self.token1)

        writer = Mock()

        self.get_success(self.admin_handler.export_user_data(self.user2, writer))

        writer.write_events.assert_called()

        # Since we can't see all events there should be one extremity.
        writer.write_state.assert_called_once()

        # Collect all events that were written
        written_events = []
        for (called_room_id, events), _ in writer.write_events.call_args_list:
            self.assertEqual(called_room_id, room_id)
            written_events.extend(events)

        # Check that the right number of events were written
        counter = Counter(
            (event.type, getattr(event, "state_key", None)) for event in written_events
        )
        self.assertEqual(counter[(EventTypes.Message, None)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 1)

    def test_single_left_room(self):
        """Tests that we don't see events in the room after we leave."""
        room_id = self.helper.create_room_as(self.user1, tok=self.token1)
        self.helper.send(room_id, body="Hello!", tok=self.token1)
        self.helper.join(room_id, self.user2, tok=self.token2)
        self.helper.send(room_id, body="Hello again!", tok=self.token1)
        self.helper.leave(room_id, self.user2, tok=self.token2)
        self.helper.send(room_id, body="Helloooooo!", tok=self.token1)

        writer = Mock()

        self.get_success(self.admin_handler.export_user_data(self.user2, writer))

        writer.write_events.assert_called()

        # Since we can see all events there shouldn't be any extremities, so no
        # state should be written
        writer.write_state.assert_not_called()

        written_events = []
        for (called_room_id, events), _ in writer.write_events.call_args_list:
            self.assertEqual(called_room_id, room_id)
            written_events.extend(events)

        # Check that the right number of events were written
        counter = Counter(
            (event.type, getattr(event, "state_key", None)) for event in written_events
        )
        self.assertEqual(counter[(EventTypes.Message, None)], 2)
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 2)

    def test_single_left_rejoined_private_room(self):
        """Tests that we see the correct events in private rooms when we
        repeatedly join and leave.
        """
        room_id = self.helper.create_room_as(self.user1, tok=self.token1)
        self.helper.send_state(
            room_id,
            EventTypes.RoomHistoryVisibility,
            body={"history_visibility": "joined"},
            tok=self.token1,
        )
        self.helper.send(room_id, body="Hello!", tok=self.token1)
        self.helper.join(room_id, self.user2, tok=self.token2)
        self.helper.send(room_id, body="Hello again!", tok=self.token1)
        self.helper.leave(room_id, self.user2, tok=self.token2)
        self.helper.send(room_id, body="Helloooooo!", tok=self.token1)
        self.helper.join(room_id, self.user2, tok=self.token2)
        self.helper.send(room_id, body="Helloooooo!!", tok=self.token1)

        writer = Mock()

        self.get_success(self.admin_handler.export_user_data(self.user2, writer))

        writer.write_events.assert_called_once()

        # Since we joined/left/joined again we expect there to be two gaps.
        self.assertEqual(writer.write_state.call_count, 2)

        written_events = []
        for (called_room_id, events), _ in writer.write_events.call_args_list:
            self.assertEqual(called_room_id, room_id)
            written_events.extend(events)

        # Check that the right number of events were written
        counter = Counter(
            (event.type, getattr(event, "state_key", None)) for event in written_events
        )
        self.assertEqual(counter[(EventTypes.Message, None)], 2)
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 3)

    def test_invite(self):
        """Tests that pending invites get handled correctly."""
        room_id = self.helper.create_room_as(self.user1, tok=self.token1)
        self.helper.send(room_id, body="Hello!", tok=self.token1)
        self.helper.invite(room_id, self.user1, self.user2, tok=self.token1)

        writer = Mock()

        self.get_success(self.admin_handler.export_user_data(self.user2, writer))

        writer.write_events.assert_not_called()
        writer.write_state.assert_not_called()
        writer.write_invite.assert_called_once()

        args = writer.write_invite.call_args[0]
        self.assertEqual(args[0], room_id)
        self.assertEqual(args[1].content["membership"], "invite")
        self.assertTrue(args[2])  # Assert there is at least one bit of state
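
The Mock writer in these tests implicitly fixes the interface that export_user_data drives: write_events(room_id, events) for batches of events, write_state(...) at visibility gaps, and write_invite(room_id, event, state) for pending invites. A minimal concrete writer, sketched from those call shapes alone (the write_state signature is not fully pinned down by the assertions, so its middle argument is a guess):

class InMemoryExfiltrationWriter:
    """Collects exported data in memory.

    Method names and shapes are inferred from the Mock assertions above,
    not from a documented interface.
    """

    def __init__(self):
        self.events = {}   # room_id -> [event, ...]
        self.state = {}    # room_id -> {event_id: state} captured at gaps
        self.invites = []  # (room_id, invite_event, state) tuples

    def write_events(self, room_id, events):
        self.events.setdefault(room_id, []).extend(events)

    def write_state(self, room_id, event_id, state):
        self.state.setdefault(room_id, {})[event_id] = state

    def write_invite(self, room_id, event, state):
        self.invites.append((room_id, event, state))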

View File

@@ -117,7 +117,9 @@ class AuthTestCase(unittest.TestCase):
     def test_mau_limits_disabled(self):
         self.hs.config.limit_usage_by_mau = False
         # Ensure does not throw exception
-        yield self.auth_handler.get_access_token_for_user_id("user_a")
+        yield self.auth_handler.get_access_token_for_user_id(
+            "user_a", device_id=None, valid_until_ms=None
+        )

         yield self.auth_handler.validate_short_term_login_token_and_get_user_id(
             self._get_macaroon().serialize()
@@ -131,7 +133,9 @@ class AuthTestCase(unittest.TestCase):
         )

         with self.assertRaises(ResourceLimitError):
-            yield self.auth_handler.get_access_token_for_user_id("user_a")
+            yield self.auth_handler.get_access_token_for_user_id(
+                "user_a", device_id=None, valid_until_ms=None
+            )

         self.hs.get_datastore().get_monthly_active_count = Mock(
             return_value=defer.succeed(self.large_number_of_users)
@@ -150,7 +154,9 @@ class AuthTestCase(unittest.TestCase):
             return_value=defer.succeed(self.hs.config.max_mau_value)
         )
         with self.assertRaises(ResourceLimitError):
-            yield self.auth_handler.get_access_token_for_user_id("user_a")
+            yield self.auth_handler.get_access_token_for_user_id(
+                "user_a", device_id=None, valid_until_ms=None
+            )

         self.hs.get_datastore().get_monthly_active_count = Mock(
             return_value=defer.succeed(self.hs.config.max_mau_value)
@@ -166,7 +172,9 @@ class AuthTestCase(unittest.TestCase):
         self.hs.get_datastore().get_monthly_active_count = Mock(
             return_value=defer.succeed(self.hs.config.max_mau_value)
         )
-        yield self.auth_handler.get_access_token_for_user_id("user_a")
+        yield self.auth_handler.get_access_token_for_user_id(
+            "user_a", device_id=None, valid_until_ms=None
+        )
         self.hs.get_datastore().user_last_seen_monthly_active = Mock(
             return_value=defer.succeed(self.hs.get_clock().time_msec())
         )
@@ -185,7 +193,9 @@ class AuthTestCase(unittest.TestCase):
             return_value=defer.succeed(self.small_number_of_users)
         )
         # Ensure does not raise exception
-        yield self.auth_handler.get_access_token_for_user_id("user_a")
+        yield self.auth_handler.get_access_token_for_user_id(
+            "user_a", device_id=None, valid_until_ms=None
+        )

         self.hs.get_datastore().get_monthly_active_count = Mock(
             return_value=defer.succeed(self.small_number_of_users)
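
Every call site now spells out valid_until_ms, with None meaning a token that never expires. Under the new session_lifetime option a caller would instead pass an absolute expiry derived from the clock; a small sketch of that arithmetic (only get_access_token_for_user_id's new signature is taken from the diff, the helper below is invented for illustration):

def expiry_from_session_lifetime(now_ms, session_lifetime_ms):
    """Turn a configured session lifetime (ms) into the absolute
    valid_until_ms that get_access_token_for_user_id now expects.

    None means no lifetime configured, i.e. a non-expiring token.
    """
    if session_lifetime_ms is None:
        return None
    return now_ms + session_lifetime_ms

# e.g. a one-day lifetime:
assert expiry_from_session_lifetime(1564000000000, 86400000) == 1564086400000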

View File

@@ -18,7 +18,7 @@ from mock import Mock
 from twisted.internet import defer

 from synapse.api.constants import UserTypes
-from synapse.api.errors import ResourceLimitError, SynapseError
+from synapse.api.errors import Codes, ResourceLimitError, SynapseError
 from synapse.handlers.register import RegistrationHandler
 from synapse.types import RoomAlias, UserID, create_requester
@@ -67,7 +67,7 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
         user_id = frank.to_string()
         requester = create_requester(user_id)
         result_user_id, result_token = self.get_success(
-            self.handler.get_or_create_user(requester, frank.localpart, "Frankie")
+            self.get_or_create_user(requester, frank.localpart, "Frankie")
         )
         self.assertEquals(result_user_id, user_id)
         self.assertTrue(result_token is not None)
@@ -77,17 +77,13 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
         store = self.hs.get_datastore()
         frank = UserID.from_string("@frank:test")
         self.get_success(
-            store.register(
-                user_id=frank.to_string(),
-                token="jkv;g498752-43gj['eamb!-5",
-                password_hash=None,
-            )
+            store.register_user(user_id=frank.to_string(), password_hash=None)
         )
         local_part = frank.localpart
         user_id = frank.to_string()
         requester = create_requester(user_id)
         result_user_id, result_token = self.get_success(
-            self.handler.get_or_create_user(requester, local_part, None)
+            self.get_or_create_user(requester, local_part, None)
         )
         self.assertEquals(result_user_id, user_id)
         self.assertTrue(result_token is not None)
@@ -95,9 +91,7 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
     def test_mau_limits_when_disabled(self):
         self.hs.config.limit_usage_by_mau = False
         # Ensure does not throw exception
-        self.get_success(
-            self.handler.get_or_create_user(self.requester, "a", "display_name")
-        )
+        self.get_success(self.get_or_create_user(self.requester, "a", "display_name"))

     def test_get_or_create_user_mau_not_blocked(self):
         self.hs.config.limit_usage_by_mau = True
@@ -105,7 +99,7 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
             return_value=defer.succeed(self.hs.config.max_mau_value - 1)
         )
         # Ensure does not throw exception
-        self.get_success(self.handler.get_or_create_user(self.requester, "c", "User"))
+        self.get_success(self.get_or_create_user(self.requester, "c", "User"))

     def test_get_or_create_user_mau_blocked(self):
         self.hs.config.limit_usage_by_mau = True
@@ -113,7 +107,7 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
             return_value=defer.succeed(self.lots_of_users)
         )
         self.get_failure(
-            self.handler.get_or_create_user(self.requester, "b", "display_name"),
+            self.get_or_create_user(self.requester, "b", "display_name"),
             ResourceLimitError,
         )
@@ -121,7 +115,7 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
             return_value=defer.succeed(self.hs.config.max_mau_value)
         )
         self.get_failure(
-            self.handler.get_or_create_user(self.requester, "b", "display_name"),
+            self.get_or_create_user(self.requester, "b", "display_name"),
             ResourceLimitError,
         )
@@ -131,21 +125,21 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
             return_value=defer.succeed(self.lots_of_users)
         )
         self.get_failure(
-            self.handler.register(localpart="local_part"), ResourceLimitError
+            self.handler.register_user(localpart="local_part"), ResourceLimitError
         )

         self.store.get_monthly_active_count = Mock(
             return_value=defer.succeed(self.hs.config.max_mau_value)
         )
         self.get_failure(
-            self.handler.register(localpart="local_part"), ResourceLimitError
+            self.handler.register_user(localpart="local_part"), ResourceLimitError
         )

     def test_auto_create_auto_join_rooms(self):
         room_alias_str = "#room:test"
         self.hs.config.auto_join_rooms = [room_alias_str]
-        res = self.get_success(self.handler.register(localpart="jeff"))
-        rooms = self.get_success(self.store.get_rooms_for_user(res[0]))
+        user_id = self.get_success(self.handler.register_user(localpart="jeff"))
+        rooms = self.get_success(self.store.get_rooms_for_user(user_id))
         directory_handler = self.hs.get_handlers().directory_handler
         room_alias = RoomAlias.from_string(room_alias_str)
         room_id = self.get_success(directory_handler.get_association(room_alias))
@@ -156,25 +150,25 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
     def test_auto_create_auto_join_rooms_with_no_rooms(self):
         self.hs.config.auto_join_rooms = []
         frank = UserID.from_string("@frank:test")
-        res = self.get_success(self.handler.register(frank.localpart))
-        self.assertEqual(res[0], frank.to_string())
-        rooms = self.get_success(self.store.get_rooms_for_user(res[0]))
+        user_id = self.get_success(self.handler.register_user(frank.localpart))
+        self.assertEqual(user_id, frank.to_string())
+        rooms = self.get_success(self.store.get_rooms_for_user(user_id))
         self.assertEqual(len(rooms), 0)

     def test_auto_create_auto_join_where_room_is_another_domain(self):
         self.hs.config.auto_join_rooms = ["#room:another"]
         frank = UserID.from_string("@frank:test")
-        res = self.get_success(self.handler.register(frank.localpart))
-        self.assertEqual(res[0], frank.to_string())
-        rooms = self.get_success(self.store.get_rooms_for_user(res[0]))
+        user_id = self.get_success(self.handler.register_user(frank.localpart))
+        self.assertEqual(user_id, frank.to_string())
+        rooms = self.get_success(self.store.get_rooms_for_user(user_id))
         self.assertEqual(len(rooms), 0)

     def test_auto_create_auto_join_where_auto_create_is_false(self):
         self.hs.config.autocreate_auto_join_rooms = False
         room_alias_str = "#room:test"
         self.hs.config.auto_join_rooms = [room_alias_str]
-        res = self.get_success(self.handler.register(localpart="jeff"))
-        rooms = self.get_success(self.store.get_rooms_for_user(res[0]))
+        user_id = self.get_success(self.handler.register_user(localpart="jeff"))
+        rooms = self.get_success(self.store.get_rooms_for_user(user_id))
         self.assertEqual(len(rooms), 0)

     def test_auto_create_auto_join_rooms_when_support_user_exists(self):
@@ -182,8 +176,8 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
         self.hs.config.auto_join_rooms = [room_alias_str]
         self.store.is_support_user = Mock(return_value=True)

-        res = self.get_success(self.handler.register(localpart="support"))
-        rooms = self.get_success(self.store.get_rooms_for_user(res[0]))
+        user_id = self.get_success(self.handler.register_user(localpart="support"))
+        rooms = self.get_success(self.store.get_rooms_for_user(user_id))
         self.assertEqual(len(rooms), 0)
         directory_handler = self.hs.get_handlers().directory_handler
         room_alias = RoomAlias.from_string(room_alias_str)
@@ -211,24 +205,82 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
         # When:-
         # * the user is registered and post consent actions are called
-        res = self.get_success(self.handler.register(localpart="jeff"))
-        self.get_success(self.handler.post_consent_actions(res[0]))
+        user_id = self.get_success(self.handler.register_user(localpart="jeff"))
+        self.get_success(self.handler.post_consent_actions(user_id))

         # Then:-
         # * Ensure that they have not been joined to the room
-        rooms = self.get_success(self.store.get_rooms_for_user(res[0]))
+        rooms = self.get_success(self.store.get_rooms_for_user(user_id))
         self.assertEqual(len(rooms), 0)

     def test_register_support_user(self):
-        res = self.get_success(
-            self.handler.register(localpart="user", user_type=UserTypes.SUPPORT)
+        user_id = self.get_success(
+            self.handler.register_user(localpart="user", user_type=UserTypes.SUPPORT)
         )
-        self.assertTrue(self.store.is_support_user(res[0]))
+        d = self.store.is_support_user(user_id)
+        self.assertTrue(self.get_success(d))

     def test_register_not_support_user(self):
-        res = self.get_success(self.handler.register(localpart="user"))
-        self.assertFalse(self.store.is_support_user(res[0]))
+        user_id = self.get_success(self.handler.register_user(localpart="user"))
+        d = self.store.is_support_user(user_id)
+        self.assertFalse(self.get_success(d))

     def test_invalid_user_id_length(self):
         invalid_user_id = "x" * 256
-        self.get_failure(self.handler.register(localpart=invalid_user_id), SynapseError)
+        self.get_failure(
+            self.handler.register_user(localpart=invalid_user_id), SynapseError
+        )
+
+    @defer.inlineCallbacks
+    def get_or_create_user(self, requester, localpart, displayname, password_hash=None):
+        """Creates a new user if the user does not exist,
+        else revokes all previous access tokens and generates a new one.
+
+        XXX: this used to be in the main codebase, but was only used by this
+        file, so got moved here. TODO: get rid of it, probably
+
+        Args:
+            localpart: The local part of the user ID to register. If None,
+                one will be randomly generated.
+        Returns:
+            A tuple of (user_id, access_token).
+        Raises:
+            RegistrationError if there was a problem registering.
+        """
+        if localpart is None:
+            raise SynapseError(400, "Request must include user id")
+
+        yield self.hs.get_auth().check_auth_blocking()
+        need_register = True
+
+        try:
+            yield self.handler.check_username(localpart)
+        except SynapseError as e:
+            if e.errcode == Codes.USER_IN_USE:
+                need_register = False
+            else:
+                raise
+
+        user = UserID(localpart, self.hs.hostname)
+        user_id = user.to_string()
+        token = self.macaroon_generator.generate_access_token(user_id)
+
+        if need_register:
+            yield self.handler.register_with_store(
+                user_id=user_id,
+                password_hash=password_hash,
+                create_profile_with_displayname=user.localpart,
+            )
+        else:
+            yield self.hs.get_auth_handler().delete_access_tokens_for_user(user_id)
+
+        yield self.store.add_access_token_to_user(
+            user_id=user_id, token=token, device_id=None, valid_until_ms=None
+        )
+
+        if displayname is not None:
+            # logger.info("setting user display name: %s -> %s", user_id, displayname)
+            yield self.hs.get_profile_handler().set_displayname(
+                user, requester, displayname, by_admin=True
+            )
+
+        defer.returnValue((user_id, token))

View File

@@ -47,11 +47,8 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
     def test_handle_local_profile_change_with_support_user(self):
         support_user_id = "@support:test"
         self.get_success(
-            self.store.register(
-                user_id=support_user_id,
-                token="123",
-                password_hash=None,
-                user_type=UserTypes.SUPPORT,
+            self.store.register_user(
+                user_id=support_user_id, password_hash=None, user_type=UserTypes.SUPPORT
             )
         )
@@ -73,11 +70,8 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
     def test_handle_user_deactivated_support_user(self):
         s_user_id = "@support:test"
         self.get_success(
-            self.store.register(
-                user_id=s_user_id,
-                token="123",
-                password_hash=None,
-                user_type=UserTypes.SUPPORT,
+            self.store.register_user(
+                user_id=s_user_id, password_hash=None, user_type=UserTypes.SUPPORT
             )
         )
@@ -90,7 +84,7 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
     def test_handle_user_deactivated_regular_user(self):
         r_user_id = "@regular:test"
         self.get_success(
-            self.store.register(user_id=r_user_id, token="123", password_hash=None)
+            self.store.register_user(user_id=r_user_id, password_hash=None)
         )
         self.store.remove_from_user_dir = Mock()
         self.get_success(self.handler.handle_user_deactivated(r_user_id))
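
Taken together with the handler changes above, these call sites show the storage-API split in this release: register_user no longer accepts a token, and access tokens are written in a separate step. A sketch of the two-step flow (the helper name is invented; the register_user and add_access_token_to_user kwargs are copied from the call sites in this diff):

from twisted.internet import defer

from synapse.api.constants import UserTypes

@defer.inlineCallbacks
def register_support_user_with_token(store, macaroon_gen, user_id):
    # step 1: create the user row (no token involved any more)
    yield store.register_user(
        user_id=user_id, password_hash=None, user_type=UserTypes.SUPPORT
    )
    # step 2: mint and store an access token separately
    token = macaroon_gen.generate_access_token(user_id)
    yield store.add_access_token_to_user(
        user_id=user_id, token=token, device_id=None, valid_until_ms=None
    )
    defer.returnValue(token)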

View File

@@ -0,0 +1,179 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.rest import admin
from synapse.rest.client.v1 import login, room
from synapse.rest.client.v2_alpha import sync

from tests.unittest import HomeserverTestCase


class RedactionsTestCase(HomeserverTestCase):
    """Tests that various redaction events are handled correctly"""

    servlets = [
        admin.register_servlets,
        room.register_servlets,
        login.register_servlets,
        sync.register_servlets,
    ]

    def prepare(self, reactor, clock, hs):
        # register a couple of users
        self.mod_user_id = self.register_user("user1", "pass")
        self.mod_access_token = self.login("user1", "pass")
        self.other_user_id = self.register_user("otheruser", "pass")
        self.other_access_token = self.login("otheruser", "pass")

        # Create a room
        self.room_id = self.helper.create_room_as(
            self.mod_user_id, tok=self.mod_access_token
        )

        # Invite the other user
        self.helper.invite(
            room=self.room_id,
            src=self.mod_user_id,
            tok=self.mod_access_token,
            targ=self.other_user_id,
        )
        # The other user joins
        self.helper.join(
            room=self.room_id, user=self.other_user_id, tok=self.other_access_token
        )

    def _redact_event(self, access_token, room_id, event_id, expect_code=200):
        """Helper function to send a redaction event.

        Returns the json body.
        """
        path = "/_matrix/client/r0/rooms/%s/redact/%s" % (room_id, event_id)

        request, channel = self.make_request(
            "POST", path, content={}, access_token=access_token
        )
        self.render(request)
        self.assertEqual(int(channel.result["code"]), expect_code)
        return channel.json_body

    def _sync_room_timeline(self, access_token, room_id):
        request, channel = self.make_request("GET", "sync", access_token=access_token)
        self.render(request)
        self.assertEqual(channel.result["code"], b"200")
        room_sync = channel.json_body["rooms"]["join"][room_id]
        return room_sync["timeline"]["events"]

    def test_redact_event_as_moderator(self):
        # as a regular user, send a message to redact
        b = self.helper.send(room_id=self.room_id, tok=self.other_access_token)
        msg_id = b["event_id"]

        # as the moderator, send a redaction
        b = self._redact_event(self.mod_access_token, self.room_id, msg_id)
        redaction_id = b["event_id"]

        # now sync
        timeline = self._sync_room_timeline(self.mod_access_token, self.room_id)

        # the last event should be the redaction
        self.assertEqual(timeline[-1]["event_id"], redaction_id)
        self.assertEqual(timeline[-1]["redacts"], msg_id)

        # and the penultimate should be the redacted original
        self.assertEqual(timeline[-2]["event_id"], msg_id)
        self.assertEqual(timeline[-2]["unsigned"]["redacted_by"], redaction_id)
        self.assertEqual(timeline[-2]["content"], {})

    def test_redact_event_as_normal(self):
        # as a regular user, send a message to redact
        b = self.helper.send(room_id=self.room_id, tok=self.other_access_token)
        normal_msg_id = b["event_id"]

        # also send one as the admin
        b = self.helper.send(room_id=self.room_id, tok=self.mod_access_token)
        admin_msg_id = b["event_id"]

        # as a normal user, try to redact the admin's event
        self._redact_event(
            self.other_access_token, self.room_id, admin_msg_id, expect_code=403
        )

        # now try to redact our own event
        b = self._redact_event(self.other_access_token, self.room_id, normal_msg_id)
        redaction_id = b["event_id"]

        # now sync
        timeline = self._sync_room_timeline(self.other_access_token, self.room_id)

        # the last event should be the redaction of the normal event
        self.assertEqual(timeline[-1]["event_id"], redaction_id)
        self.assertEqual(timeline[-1]["redacts"], normal_msg_id)

        # the penultimate should be the unredacted one from the admin
        self.assertEqual(timeline[-2]["event_id"], admin_msg_id)
        self.assertNotIn("redacted_by", timeline[-2]["unsigned"])
        self.assertTrue(timeline[-2]["content"]["body"])

        # and the antepenultimate should be the redacted normal
        self.assertEqual(timeline[-3]["event_id"], normal_msg_id)
        self.assertEqual(timeline[-3]["unsigned"]["redacted_by"], redaction_id)
        self.assertEqual(timeline[-3]["content"], {})

    def test_redact_nonexistent_event(self):
        # control case: an existing event
        b = self.helper.send(room_id=self.room_id, tok=self.other_access_token)
        msg_id = b["event_id"]
        b = self._redact_event(self.other_access_token, self.room_id, msg_id)
        redaction_id = b["event_id"]

        # room moderators can send redactions for non-existent events
        self._redact_event(self.mod_access_token, self.room_id, "$zzz")

        # ... but normals cannot
        self._redact_event(
            self.other_access_token, self.room_id, "$zzz", expect_code=404
        )

        # when we sync, we should see only the valid redaction
        timeline = self._sync_room_timeline(self.other_access_token, self.room_id)
        self.assertEqual(timeline[-1]["event_id"], redaction_id)
        self.assertEqual(timeline[-1]["redacts"], msg_id)

        # and the penultimate should be the redacted original
        self.assertEqual(timeline[-2]["event_id"], msg_id)
        self.assertEqual(timeline[-2]["unsigned"]["redacted_by"], redaction_id)
        self.assertEqual(timeline[-2]["content"], {})

    def test_redact_create_event(self):
        # control case: an existing event
        b = self.helper.send(room_id=self.room_id, tok=self.mod_access_token)
        msg_id = b["event_id"]
        self._redact_event(self.mod_access_token, self.room_id, msg_id)

        # sync the room, to get the id of the create event
        timeline = self._sync_room_timeline(self.other_access_token, self.room_id)
        create_event_id = timeline[0]["event_id"]

        # room moderators cannot send redactions for create events
        self._redact_event(
            self.mod_access_token, self.room_id, create_event_id, expect_code=403
        )

        # and nor can normals
        self._redact_event(
            self.other_access_token, self.room_id, create_event_id, expect_code=403
        )
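
Read together, the sync assertions in this file describe the wire shape of a redacted message as seen in a timeline. An illustrative reconstruction — event ids are placeholders and unrelated fields are elided; only the asserted fields are taken from the tests:

# Structure inferred from the assertions above; values are invented.
redacted_original = {
    "event_id": "$original_msg",
    "content": {},  # original body stripped by the redaction
    "unsigned": {"redacted_by": "$redaction"},
    # ...other event fields elided...
}
redaction_event = {
    "event_id": "$redaction",
    "redacts": "$original_msg",
    # ...
}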

Some files were not shown because too many files have changed in this diff.