Mirror of https://git.anonymousland.org/anonymousland/synapse-product.git
Synced 2024-10-01 08:25:44 -04:00
Merge tag 'v0.33.9'

Features
--------

- Include flags to optionally add `m.login.terms` to the registration flow when consent tracking is enabled. ([\#4004](https://github.com/matrix-org/synapse/issues/4004), [\#4133](https://github.com/matrix-org/synapse/issues/4133), [\#4142](https://github.com/matrix-org/synapse/issues/4142), [\#4184](https://github.com/matrix-org/synapse/issues/4184))
- Support for replacing rooms with new ones ([\#4091](https://github.com/matrix-org/synapse/issues/4091), [\#4099](https://github.com/matrix-org/synapse/issues/4099), [\#4100](https://github.com/matrix-org/synapse/issues/4100), [\#4101](https://github.com/matrix-org/synapse/issues/4101))

Bugfixes
--------

- Fix exceptions when using the email mailer on Python 3. ([\#4095](https://github.com/matrix-org/synapse/issues/4095))
- Fix e2e key backup with more than 9 backup versions. ([\#4113](https://github.com/matrix-org/synapse/issues/4113))
- Searches that request profile info now no longer fail with a 500. ([\#4122](https://github.com/matrix-org/synapse/issues/4122))
- Fix return code of empty key backups. ([\#4123](https://github.com/matrix-org/synapse/issues/4123))
- If the typing stream ID goes backwards (as on a worker when the master restarts), the worker's typing handler will no longer erroneously report rooms containing new typing events. ([\#4127](https://github.com/matrix-org/synapse/issues/4127))
- Fix table lock of device_lists_remote_cache which could freeze the application. ([\#4132](https://github.com/matrix-org/synapse/issues/4132))
- Fix exception when using the state res v2 algorithm. ([\#4135](https://github.com/matrix-org/synapse/issues/4135))
- Generating the user consent URI no longer fails on Python 3. ([\#4140](https://github.com/matrix-org/synapse/issues/4140), [\#4163](https://github.com/matrix-org/synapse/issues/4163))
- Loading URL previews from the DB cache on Postgres will no longer cause Unicode type errors when responding to the request, and URL previews will no longer fail if the remote server returns a Content-Type header with the charset in quotes. ([\#4157](https://github.com/matrix-org/synapse/issues/4157))
- The hash_password script now works on Python 3. ([\#4161](https://github.com/matrix-org/synapse/issues/4161))
- Fix noop checks when updating device keys, reducing spurious device list update notifications. ([\#4164](https://github.com/matrix-org/synapse/issues/4164))

Deprecations and Removals
-------------------------

- The disused and un-specced identicon generator has been removed. ([\#4106](https://github.com/matrix-org/synapse/issues/4106))
- The obsolete and non-functional /pull federation endpoint has been removed. ([\#4118](https://github.com/matrix-org/synapse/issues/4118))
- The deprecated v1 key exchange endpoints have been removed. ([\#4119](https://github.com/matrix-org/synapse/issues/4119))
- Synapse will no longer fetch keys using the fallback deprecated v1 key exchange method and will now always use v2. ([\#4120](https://github.com/matrix-org/synapse/issues/4120))

Internal Changes
----------------

- Fix build of Docker image with docker-compose. ([\#3778](https://github.com/matrix-org/synapse/issues/3778))
- Delete unreferenced state groups during history purge. ([\#4006](https://github.com/matrix-org/synapse/issues/4006))
- The "Received rdata" log messages on workers are now logged at DEBUG, not INFO. ([\#4108](https://github.com/matrix-org/synapse/issues/4108))
- Reduce replication traffic for device lists. ([\#4109](https://github.com/matrix-org/synapse/issues/4109))
- Fix `synapse_replication_tcp_protocol_*_commands` metric label to be the full command name, rather than just the first character. ([\#4110](https://github.com/matrix-org/synapse/issues/4110))
- Log some bits about room creation. ([\#4121](https://github.com/matrix-org/synapse/issues/4121))
- Fix `tox` failure on old systems. ([\#4124](https://github.com/matrix-org/synapse/issues/4124))
- Add STATE_V2_TEST room version. ([\#4128](https://github.com/matrix-org/synapse/issues/4128))
- Clean up event accesses and tests. ([\#4137](https://github.com/matrix-org/synapse/issues/4137))
- The default logging config will now set an explicit log file encoding of UTF-8. ([\#4138](https://github.com/matrix-org/synapse/issues/4138))
- Add helper functions for getting prev and auth events of an event. ([\#4139](https://github.com/matrix-org/synapse/issues/4139))
- Add some tests for the HTTP pusher. ([\#4149](https://github.com/matrix-org/synapse/issues/4149))
- Add purge_history.sh and purge_remote_media.sh scripts to contrib/. ([\#4155](https://github.com/matrix-org/synapse/issues/4155))
- HTTP tests have been refactored to contain less boilerplate. ([\#4156](https://github.com/matrix-org/synapse/issues/4156))
- Drop incoming events from federation for unknown rooms. ([\#4165](https://github.com/matrix-org/synapse/issues/4165))
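For context on the headline feature: a homeserver with consent tracking enabled advertises the new `m.login.terms` stage during registration's user-interactive auth. A minimal sketch of the response shape follows; the session ID, policy name, version, and URL are made-up illustrative values, and the field layout is as described by the Matrix spec for this stage, not taken from this commit:

```python
# Illustrative User-Interactive Auth response advertising the new stage.
ui_auth_response = {
    "session": "xxxxxx",
    "flows": [
        {"stages": ["m.login.terms", "m.login.dummy"]},
    ],
    "params": {
        "m.login.terms": {
            "policies": {
                "privacy_policy": {
                    "version": "1.0",
                    "en": {
                        "name": "Privacy Policy",
                        "url": "https://example.com/_matrix/consent?v=1.0",
                    },
                },
            },
        },
    },
}
```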
Commit 678ad155a2

.travis.yml (11 lines changed)

@@ -23,6 +23,9 @@ branches:
     - develop
     - /^release-v/
 
+# When running the tox environments that call Twisted Trial, we can pass the -j
+# flag to run the tests concurrently. We set this to 2 for CPU bound tests
+# (SQLite) and 4 for I/O bound tests (PostgreSQL).
 matrix:
   fast_finish: true
   include:
@@ -33,10 +36,10 @@ matrix:
     env: TOX_ENV="pep8,check_isort"
 
   - python: 2.7
-    env: TOX_ENV=py27
+    env: TOX_ENV=py27 TRIAL_FLAGS="-j 2"
 
   - python: 2.7
-    env: TOX_ENV=py27-old
+    env: TOX_ENV=py27-old TRIAL_FLAGS="-j 2"
 
   - python: 2.7
     env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
@@ -44,10 +47,10 @@ matrix:
       - postgresql
 
   - python: 3.5
-    env: TOX_ENV=py35
+    env: TOX_ENV=py35 TRIAL_FLAGS="-j 2"
 
   - python: 3.6
-    env: TOX_ENV=py36
+    env: TOX_ENV=py36 TRIAL_FLAGS="-j 2"
 
   - python: 3.6
     env: TOX_ENV=py36-postgres TRIAL_FLAGS="-j 4"
CHANGES.md (61 lines changed)

@@ -1,3 +1,64 @@
+Synapse 0.33.9 (2018-11-19)
+===========================
+
+No significant changes.
+
+
+Synapse 0.33.9rc1 (2018-11-14)
+==============================
+
[the remainder of the added block is the 0.33.9rc1 changelog, identical to the release notes quoted at the top of this page]
+
 Synapse 0.33.8 (2018-11-01)
 ===========================
contrib/docker/docker-compose.yml (filename missing from the capture; inferred from the content)

@@ -6,9 +6,11 @@ version: '3'
 services:
 
   synapse:
-    build: ../..
+    build:
+        context: ../..
+        dockerfile: docker/Dockerfile
     image: docker.io/matrixdotorg/synapse:latest
-    # Since snyapse does not retry to connect to the database, restart upon
+    # Since synapse does not retry to connect to the database, restart upon
     # failure
     restart: unless-stopped
     # See the readme for a full documentation of the environment settings
@@ -47,4 +49,4 @@ services:
       # You may store the database tables in a local folder..
       - ./schemas:/var/lib/postgresql/data
       # .. or store them on some high performance storage for better results
-      # - /path/to/ssd/storage:/var/lib/postfesql/data
+      # - /path/to/ssd/storage:/var/lib/postgresql/data
contrib/purge_api/README.md (new file, 16 lines)

@@ -0,0 +1,16 @@
+Purge history API examples
+==========================
+
+# `purge_history.sh`
+
+A bash file, that uses the [purge history API](/docs/admin_api/README.rst) to
+purge all messages in a list of rooms up to a certain event. You can select a
+timeframe or a number of messages that you want to keep in the room.
+
+Just configure the variables DOMAIN, ADMIN, ROOMS_ARRAY and TIME at the top of
+the script.
+
+# `purge_remote_media.sh`
+
+A bash file, that uses the [purge history API](/docs/admin_api/README.rst) to
+purge all old cached remote media.
contrib/purge_api/purge_history.sh (new file, 141 lines)

@@ -0,0 +1,141 @@
+#!/bin/bash
+
+# this script will use the api:
+# https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst
+#
+# It will purge all messages in a list of rooms up to a cetrain event
+
+###################################################################################################
+# define your domain and admin user
+###################################################################################################
+# add this user as admin in your home server:
+DOMAIN=yourserver.tld
+# add this user as admin in your home server:
+ADMIN="@you_admin_username:$DOMAIN"
+
+API_URL="$DOMAIN:8008/_matrix/client/r0"
+
+###################################################################################################
+#choose the rooms to prune old messages from (add a free comment at the end)
+###################################################################################################
+# the room_id's you can get e.g. from your Riot clients "View Source" button on each message
+ROOMS_ARRAY=(
+'!DgvjtOljKujDBrxyHk:matrix.org#riot:matrix.org'
+'!QtykxKocfZaZOUrTwp:matrix.org#Matrix HQ'
+)
+
+# ALTERNATIVELY:
+# you can select all the rooms that are not encrypted and loop over the result:
+# SELECT room_id FROM rooms WHERE room_id NOT IN (SELECT DISTINCT room_id FROM events WHERE type ='m.room.encrypted')
+# or
+# select all rooms with at least 100 members:
+# SELECT q.room_id FROM (select count(*) as numberofusers, room_id FROM current_state_events WHERE type ='m.room.member'
+# GROUP BY room_id) AS q LEFT JOIN room_aliases a ON q.room_id=a.room_id WHERE q.numberofusers > 100 ORDER BY numberofusers desc
+
+###################################################################################################
+# evaluate the EVENT_ID before which should be pruned
+###################################################################################################
+# choose a time before which the messages should be pruned:
+TIME='12 months ago'
+# ALTERNATIVELY:
+# a certain time:
+# TIME='2016-08-31 23:59:59'
+
+# creates a timestamp from the given time string:
+UNIX_TIMESTAMP=$(date +%s%3N --date='TZ="UTC+2" '"$TIME")
+
+# ALTERNATIVELY:
+# prune all messages that are older than 1000 messages ago:
+# LAST_MESSAGES=1000
+# SQL_GET_EVENT="SELECT event_id from events WHERE type='m.room.message' AND room_id ='$ROOM' ORDER BY received_ts DESC LIMIT 1 offset $(($LAST_MESSAGES - 1))"
+
+# ALTERNATIVELY:
+# select the EVENT_ID manually:
+#EVENT_ID='$1471814088343495zpPNI:matrix.org' # an example event from 21st of Aug 2016 by Matthew
+
+###################################################################################################
+# make the admin user a server admin in the database with
+###################################################################################################
+# psql -A -t --dbname=synapse -c "UPDATE users SET admin=1 WHERE name LIKE '$ADMIN'"
+
+###################################################################################################
+# database function
+###################################################################################################
+sql (){
+  # for sqlite3:
+  #sqlite3 homeserver.db "pragma busy_timeout=20000;$1" | awk '{print $2}'
+  # for postgres:
+  psql -A -t --dbname=synapse -c "$1" | grep -v 'Pager'
+}
+
+###################################################################################################
+# get an access token
+###################################################################################################
+# for example externally by watching Riot in your browser's network inspector
+# or internally on the server locally, use this:
+TOKEN=$(sql "SELECT token FROM access_tokens WHERE user_id='$ADMIN' ORDER BY id DESC LIMIT 1")
+AUTH="Authorization: Bearer $TOKEN"
+
+###################################################################################################
+# check, if your TOKEN works. For example this works:
+###################################################################################################
+# $ curl --header "$AUTH" "$API_URL/rooms/$ROOM/state/m.room.power_levels"
+
+###################################################################################################
+# finally start pruning the room:
+###################################################################################################
+POSTDATA='{"delete_local_events":"true"}' # this will really delete local events, so the messages in the room really disappear unless they are restored by remote federation
+
+for ROOM in "${ROOMS_ARRAY[@]}"; do
+  echo "########################################### $(date) ################# "
+  echo "pruning room: $ROOM ..."
+  ROOM=${ROOM%#*}
+  #set -x
+  echo "check for alias in db..."
+  # for postgres:
+  sql "SELECT * FROM room_aliases WHERE room_id='$ROOM'"
+  echo "get event..."
+  # for postgres:
+  EVENT_ID=$(sql "SELECT event_id FROM events WHERE type='m.room.message' AND received_ts<'$UNIX_TIMESTAMP' AND room_id='$ROOM' ORDER BY received_ts DESC LIMIT 1;")
+  if [ "$EVENT_ID" == "" ]; then
+    echo "no event $TIME"
+  else
+    echo "event: $EVENT_ID"
+    SLEEP=2
+    set -x
+    # call purge
+    OUT=$(curl --header "$AUTH" -s -d $POSTDATA POST "$API_URL/admin/purge_history/$ROOM/$EVENT_ID")
+    PURGE_ID=$(echo "$OUT" |grep purge_id|cut -d'"' -f4 )
+    if [ "$PURGE_ID" == "" ]; then
+      # probably the history purge is already in progress for $ROOM
+      : "continuing with next room"
+    else
+      while : ; do
+        # get status of purge and sleep longer each time if still active
+        sleep $SLEEP
+        STATUS=$(curl --header "$AUTH" -s GET "$API_URL/admin/purge_history_status/$PURGE_ID" |grep status|cut -d'"' -f4)
+        : "$ROOM --> Status: $STATUS"
+        [[ "$STATUS" == "active" ]] || break
+        SLEEP=$((SLEEP + 1))
+      done
+    fi
+    set +x
+    sleep 1
+  fi
+done
+
+
+###################################################################################################
+# additionally
+###################################################################################################
+# to benefit from pruning large amounts of data, you need to call VACUUM to free the unused space.
+# This can take a very long time (hours) and the client have to be stopped while you do so:
+# $ synctl stop
+# $ sqlite3 -line homeserver.db "vacuum;"
+# $ synctl start
+
+# This could be set, so you don't need to prune every time after deleting some rows:
+# $ sqlite3 homeserver.db "PRAGMA auto_vacuum = FULL;"
+# be cautious, it could make the database somewhat slow if there are a lot of deletions
+
+exit
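For readers who prefer Python, here is a minimal sketch of the same purge-and-poll loop the script performs. It assumes the `requests` library; the base URL, admin token, and room/event IDs are placeholders you must substitute:

```python
import time

import requests

# Placeholders: point these at your own server and a real admin access token.
BASE = "https://yourserver.tld:8008/_matrix/client/r0"
HEADERS = {"Authorization": "Bearer YOUR_ADMIN_ACCESS_TOKEN"}


def purge_history(room_id, event_id):
    # Kick off the purge, deleting local events up to event_id.
    resp = requests.post(
        "%s/admin/purge_history/%s/%s" % (BASE, room_id, event_id),
        headers=HEADERS,
        json={"delete_local_events": True},
    )
    purge_id = resp.json()["purge_id"]

    # Poll the status endpoint, backing off like the shell script does.
    delay = 2
    while True:
        time.sleep(delay)
        status = requests.get(
            "%s/admin/purge_history_status/%s" % (BASE, purge_id),
            headers=HEADERS,
        ).json()["status"]
        if status != "active":
            return status
        delay += 1
```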
contrib/purge_api/purge_remote_media.sh (new file, 54 lines)

@@ -0,0 +1,54 @@
+#!/bin/bash
+
+DOMAIN=yourserver.tld
+# add this user as admin in your home server:
+ADMIN="@you_admin_username:$DOMAIN"
+
+API_URL="$DOMAIN:8008/_matrix/client/r0"
+
+# choose a time before which the messages should be pruned:
+# TIME='2016-08-31 23:59:59'
+TIME='12 months ago'
+
+# creates a timestamp from the given time string:
+UNIX_TIMESTAMP=$(date +%s%3N --date='TZ="UTC+2" '"$TIME")
+
+
+###################################################################################################
+# database function
+###################################################################################################
+sql (){
+  # for sqlite3:
+  #sqlite3 homeserver.db "pragma busy_timeout=20000;$1" | awk '{print $2}'
+  # for postgres:
+  psql -A -t --dbname=synapse -c "$1" | grep -v 'Pager'
+}
+
+###############################################################################
+# make the admin user a server admin in the database with
+###############################################################################
+# sql "UPDATE users SET admin=1 WHERE name LIKE '$ADMIN'"
+
+###############################################################################
+# get an access token
+###############################################################################
+# for example externally by watching Riot in your browser's network inspector
+# or internally on the server locally, use this:
+TOKEN=$(sql "SELECT token FROM access_tokens WHERE user_id='$ADMIN' ORDER BY id DESC LIMIT 1")
+
+###############################################################################
+# check, if your TOKEN works. For example this works:
+###############################################################################
+# curl --header "Authorization: Bearer $TOKEN" "$API_URL/rooms/$ROOM/state/m.room.power_levels"
+
+###############################################################################
+# optional check size before
+###############################################################################
+# echo calculate used storage before ...
+# du -shc ../.synapse/media_store/*
+
+###############################################################################
+# finally start pruning media:
+###############################################################################
+set -x # for debugging the generated string
+curl --header "Authorization: Bearer $TOKEN" -v POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"
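The media purge boils down to the single authenticated POST the script ends with. A minimal Python sketch of the same call, with a placeholder base URL and token:

```python
import time

import requests

# Placeholders: substitute your own server and admin access token. The
# endpoint and before_ts parameter are the ones the shell script above calls.
BASE = "https://yourserver.tld:8008/_matrix/client/r0"
before_ts = int((time.time() - 365 * 24 * 3600) * 1000)  # ~12 months ago, in ms

resp = requests.post(
    "%s/admin/purge_media_cache" % BASE,
    params={"before_ts": before_ts},
    headers={"Authorization": "Bearer YOUR_ADMIN_ACCESS_TOKEN"},
)
print(resp.json())
```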
docs/consent_tracking.md (filename missing from the capture; inferred from the content)

@@ -85,6 +85,37 @@ Once this is complete, and the server has been restarted, try visiting
 an error "Missing string query parameter 'u'". It is now possible to manually
 construct URIs where users can give their consent.
 
+### Enabling consent tracking at registration
+
+1. Add the following to your configuration:
+
+   ```yaml
+   user_consent:
+     require_at_registration: true
+     policy_name: "Privacy Policy" # or whatever you'd like to call the policy
+   ```
+
+2. In your consent templates, make use of the `public_version` variable to
+   see if an unauthenticated user is viewing the page. This is typically
+   wrapped around the form that would be used to actually agree to the document:
+
+   ```
+   {% if not public_version %}
+     <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
+     <form method="post" action="consent">
+       <input type="hidden" name="v" value="{{version}}"/>
+       <input type="hidden" name="u" value="{{user}}"/>
+       <input type="hidden" name="h" value="{{userhmac}}"/>
+       <input type="submit" value="Sure thing!"/>
+     </form>
+   {% endif %}
+   ```
+
+3. Restart Synapse to apply the changes.
+
+Visiting `https://<server>/_matrix/consent` should now give you a view of the privacy
+document. This is what users will be able to see when registering for accounts.
+
 ### Constructing the consent URI
 
 It may be useful to manually construct the "consent URI" for a given user - for
@@ -106,6 +137,12 @@ query parameters:
 `https://<server>/_matrix/consent?u=<user>&h=68a152465a4d...`.
 
+
+Note that not providing a `u` parameter will be interpreted as wanting to view
+the document from an unauthenticated perspective, such as prior to registration.
+Therefore, the `h` parameter is not required in this scenario. To enable this
+behaviour, set `require_at_registration` to `true` in your `user_consent` config.
+
 Sending users a server notice asking them to agree to the policy
 ----------------------------------------------------------------
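A sketch of constructing that consent URI programmatically: the `h` parameter is an HMAC-SHA256 of the user identifier, keyed with the server's `form_secret`; the secret and user below are placeholders:

```python
import hashlib
import hmac

# Placeholders: form_secret must match `form_secret` in homeserver.yaml, and
# `user` is the same identifier that goes in the `u` query parameter.
form_secret = "<form_secret from homeserver.yaml>"
user = "@user:example.com"

h = hmac.new(
    key=form_secret.encode("utf8"),
    msg=user.encode("utf8"),
    digestmod=hashlib.sha256,
).hexdigest()

print("https://<server>/_matrix/consent?u=%s&h=%s" % (user, h))
```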
docs/privacy_policy_templates/en/1.0.html (filename missing from the capture; inferred from the content)

@@ -12,6 +12,8 @@
     <p>
       All your base are belong to us.
     </p>
+    {% if not public_version %}
+      <!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
     <form method="post" action="consent">
       <input type="hidden" name="v" value="{{version}}"/>
       <input type="hidden" name="u" value="{{user}}"/>
@@ -19,5 +21,6 @@
       <input type="submit" value="Sure thing!"/>
     </form>
+    {% endif %}
 {% endif %}
 </body>
 </html>
jenkins/prepare_synapse.sh (filename missing from the capture; inferred from the content)

@@ -14,22 +14,3 @@ fi
 
 # set up the virtualenv
 tox -e py27 --notest -v
-
-TOX_BIN=$TOX_DIR/py27/bin
-
-# cryptography 2.2 requires setuptools >= 18.5.
-#
-# older versions of virtualenv (?) give us a virtualenv with the same version
-# of setuptools as is installed on the system python (and tox runs virtualenv
-# under python3, so we get the version of setuptools that is installed on that).
-#
-# anyway, make sure that we have a recent enough setuptools.
-$TOX_BIN/pip install 'setuptools>=18.5'
-
-# we also need a semi-recent version of pip, because old ones fail to install
-# the "enum34" dependency of cryptography.
-$TOX_BIN/pip install 'pip>=10'
-
-{ python synapse/python_dependencies.py
-  echo lxml
-} | xargs $TOX_BIN/pip install
scripts-dev/federation_client.py (filename missing from the capture; inferred from the content)

@@ -154,10 +154,15 @@ def request_json(method, origin_name, origin_key, destination, path, content):
     s = requests.Session()
     s.mount("matrix://", MatrixConnectionAdapter())
 
+    headers = {"Host": destination, "Authorization": authorization_headers[0]}
+
+    if method == "POST":
+        headers["Content-Type"] = "application/json"
+
     result = s.request(
         method=method,
         url=dest,
-        headers={"Host": destination, "Authorization": authorization_headers[0]},
+        headers=headers,
         verify=False,
         data=content,
     )
@@ -203,7 +208,7 @@ def main():
     parser.add_argument(
         "-X",
         "--method",
-        help="HTTP method to use for the request. Defaults to GET if --data is"
+        help="HTTP method to use for the request. Defaults to GET if --body is"
         "unspecified, POST if it is.",
     )
scripts-dev/make-identicons.pl (file deleted; the name comes from the script's own usage string)

@@ -1,39 +0,0 @@
-#!/usr/bin/env perl
-
-use strict;
-use warnings;
-
-use DBI;
-use DBD::SQLite;
-use JSON;
-use Getopt::Long;
-
-my $db; # = "homeserver.db";
-my $server = "http://localhost:8008";
-my $size = 320;
-
-GetOptions("db|d=s", \$db,
-           "server|s=s", \$server,
-           "width|w=i", \$size) or usage();
-
-usage() unless $db;
-
-my $dbh = DBI->connect("dbi:SQLite:dbname=$db","","") || die $DBI::errstr;
-
-my $res = $dbh->selectall_arrayref("select token, name from access_tokens, users where access_tokens.user_id = users.id group by user_id") || die $DBI::errstr;
-
-foreach (@$res) {
-    my ($token, $mxid) = ($_->[0], $_->[1]);
-    my ($user_id) = ($mxid =~ m/@(.*):/);
-    my ($url) = $dbh->selectrow_array("select avatar_url from profiles where user_id=?", undef, $user_id);
-    if (!$url || $url =~ /#auto$/) {
-        `curl -s -o tmp.png "$server/_matrix/media/v1/identicon?name=${mxid}&width=$size&height=$size"`;
-        my $json = `curl -s -X POST -H "Content-Type: image/png" -T "tmp.png" $server/_matrix/media/v1/upload?access_token=$token`;
-        my $content_uri = from_json($json)->{content_uri};
-        `curl -X PUT -H "Content-Type: application/json" --data '{ "avatar_url": "${content_uri}#auto"}' $server/_matrix/client/api/v1/profile/${mxid}/avatar_url?access_token=$token`;
-    }
-}
-
-sub usage {
-    die "usage: ./make-identicons.pl\n\t-d database [e.g. homeserver.db]\n\t-s homeserver (default: http://localhost:8008)\n\t-w identicon size in pixels (default 320)";
-}
scripts/hash_password (filename missing from the capture; inferred from the content)

@@ -3,6 +3,7 @@
 import argparse
 import getpass
 import sys
+import unicodedata
 
 import bcrypt
 import yaml
@@ -10,6 +11,7 @@ import yaml
 bcrypt_rounds = 12
 password_pepper = ""
 
+
 def prompt_for_pass():
     password = getpass.getpass("Password: ")
 
@@ -23,19 +25,27 @@ def prompt_for_pass():
 
     return password
 
+
 if __name__ == "__main__":
     parser = argparse.ArgumentParser(
-        description="Calculate the hash of a new password, so that passwords"
-        " can be reset")
+        description=(
+            "Calculate the hash of a new password, so that passwords can be reset"
+        )
+    )
     parser.add_argument(
-        "-p", "--password",
+        "-p",
+        "--password",
         default=None,
         help="New password for user. Will prompt if omitted.",
     )
     parser.add_argument(
-        "-c", "--config",
+        "-c",
+        "--config",
         type=argparse.FileType('r'),
-        help="Path to server config file. Used to read in bcrypt_rounds and password_pepper.",
+        help=(
+            "Path to server config file. "
+            "Used to read in bcrypt_rounds and password_pepper."
+        ),
     )
 
     args = parser.parse_args()
@@ -49,4 +59,21 @@ if __name__ == "__main__":
     if not password:
         password = prompt_for_pass()
 
-    print bcrypt.hashpw(password + password_pepper, bcrypt.gensalt(bcrypt_rounds))
+    # On Python 2, make sure we decode it to Unicode before we normalise it
+    if isinstance(password, bytes):
+        try:
+            password = password.decode(sys.stdin.encoding)
+        except UnicodeDecodeError:
+            print(
+                "ERROR! Your password is not decodable using your terminal encoding (%s)."
+                % (sys.stdin.encoding,)
+            )
+
+    pw = unicodedata.normalize("NFKC", password)
+
+    hashed = bcrypt.hashpw(
+        pw.encode('utf8') + password_pepper.encode("utf8"),
+        bcrypt.gensalt(bcrypt_rounds),
+    ).decode('ascii')
+
+    print(hashed)
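A quick way to sanity-check a hash produced by the updated script is to mirror its NFKC-plus-pepper handling with `bcrypt.checkpw`. A sketch with placeholder values; paste the script's real output as `stored_hash`:

```python
import unicodedata

import bcrypt

password_pepper = ""          # must match the server's password_pepper
password = u"s3cret"          # placeholder password
stored_hash = b"$2b$12$..."   # placeholder: output of scripts/hash_password

# Normalise and pepper the candidate exactly as the script does, then verify.
pw = unicodedata.normalize("NFKC", password)
ok = bcrypt.checkpw(
    pw.encode("utf8") + password_pepper.encode("utf8"),
    stored_hash,
)
print(ok)
```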
synapse/__init__.py (filename missing from the capture; inferred from the content)

@@ -27,4 +27,4 @@ try:
 except ImportError:
     pass
 
-__version__ = "0.33.8"
+__version__ = "0.33.9"
synapse/api/constants.py (filename missing from the capture; inferred from the content)

@@ -51,6 +51,7 @@ class LoginType(object):
     EMAIL_IDENTITY = u"m.login.email.identity"
     MSISDN = u"m.login.msisdn"
     RECAPTCHA = u"m.login.recaptcha"
+    TERMS = u"m.login.terms"
     DUMMY = u"m.login.dummy"
 
     # Only for C/S API v1
@@ -61,6 +62,7 @@ class LoginType(object):
 class EventTypes(object):
     Member = "m.room.member"
     Create = "m.room.create"
+    Tombstone = "m.room.tombstone"
     JoinRules = "m.room.join_rules"
     PowerLevels = "m.room.power_levels"
     Aliases = "m.room.aliases"
@@ -101,6 +103,7 @@ class ThirdPartyEntityKind(object):
 class RoomVersions(object):
     V1 = "1"
     VDH_TEST = "vdh-test-version"
+    STATE_V2_TEST = "state-v2-test"
 
 
 # the version we will give rooms which are created on this server
@@ -108,7 +111,11 @@ DEFAULT_ROOM_VERSION = RoomVersions.V1
 
 # vdh-test-version is a placeholder to get room versioning support working and tested
 # until we have a working v2.
-KNOWN_ROOM_VERSIONS = {RoomVersions.V1, RoomVersions.VDH_TEST}
+KNOWN_ROOM_VERSIONS = {
+    RoomVersions.V1,
+    RoomVersions.VDH_TEST,
+    RoomVersions.STATE_V2_TEST,
+}
 
 ServerNoticeMsgType = "m.server_notice"
 ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"
synapse/api/urls.py (filename missing from the capture; inferred from the content)

@@ -28,7 +28,6 @@ FEDERATION_PREFIX = "/_matrix/federation/v1"
 STATIC_PREFIX = "/_matrix/static"
 WEB_CLIENT_PREFIX = "/_matrix/client"
 CONTENT_REPO_PREFIX = "/_matrix/content"
-SERVER_KEY_PREFIX = "/_matrix/key/v1"
 SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
 MEDIA_PREFIX = "/_matrix/media/r0"
 LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
synapse/app/homeserver.py (filename missing from the capture; inferred from the content)

@@ -37,7 +37,6 @@ from synapse.api.urls import (
     FEDERATION_PREFIX,
     LEGACY_MEDIA_PREFIX,
     MEDIA_PREFIX,
-    SERVER_KEY_PREFIX,
     SERVER_KEY_V2_PREFIX,
     STATIC_PREFIX,
     WEB_CLIENT_PREFIX,
@@ -59,7 +58,6 @@ from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, check_requirements
 from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
 from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
 from synapse.rest import ClientRestResource
-from synapse.rest.key.v1.server_key_resource import LocalKey
 from synapse.rest.key.v2 import KeyApiV2Resource
 from synapse.rest.media.v0.content_repository import ContentRepoResource
 from synapse.server import HomeServer
@@ -236,10 +234,7 @@ class SynapseHomeServer(HomeServer):
             )
 
         if name in ["keys", "federation"]:
-            resources.update({
-                SERVER_KEY_PREFIX: LocalKey(self),
-                SERVER_KEY_V2_PREFIX: KeyApiV2Resource(self),
-            })
+            resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
 
         if name == "webclient":
             resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
@ -226,7 +226,15 @@ class SynchrotronPresence(object):
|
|||||||
class SynchrotronTyping(object):
|
class SynchrotronTyping(object):
|
||||||
def __init__(self, hs):
|
def __init__(self, hs):
|
||||||
self._latest_room_serial = 0
|
self._latest_room_serial = 0
|
||||||
|
self._reset()
|
||||||
|
|
||||||
|
def _reset(self):
|
||||||
|
"""
|
||||||
|
Reset the typing handler's data caches.
|
||||||
|
"""
|
||||||
|
# map room IDs to serial numbers
|
||||||
self._room_serials = {}
|
self._room_serials = {}
|
||||||
|
# map room IDs to sets of users currently typing
|
||||||
self._room_typing = {}
|
self._room_typing = {}
|
||||||
|
|
||||||
def stream_positions(self):
|
def stream_positions(self):
|
||||||
@ -236,6 +244,12 @@ class SynchrotronTyping(object):
|
|||||||
return {"typing": self._latest_room_serial}
|
return {"typing": self._latest_room_serial}
|
||||||
|
|
||||||
def process_replication_rows(self, token, rows):
|
def process_replication_rows(self, token, rows):
|
||||||
|
if self._latest_room_serial > token:
|
||||||
|
# The master has gone backwards. To prevent inconsistent data, just
|
||||||
|
# clear everything.
|
||||||
|
self._reset()
|
||||||
|
|
||||||
|
# Set the latest serial token to whatever the server gave us.
|
||||||
self._latest_room_serial = token
|
self._latest_room_serial = token
|
||||||
|
|
||||||
for row in rows:
|
for row in rows:
|
||||||
|
synapse/config/consent_config.py (filename missing from the capture; inferred from the content)

@@ -42,6 +42,14 @@ DEFAULT_CONFIG = """\
 # until the user consents to the privacy policy. The value of the setting is
 # used as the text of the error.
 #
+# 'require_at_registration', if enabled, will add a step to the registration
+# process, similar to how captcha works. Users will be required to accept the
+# policy before their account is created.
+#
+# 'policy_name' is the display name of the policy users will see when registering
+# for an account. Has no effect unless `require_at_registration` is enabled.
+# Defaults to "Privacy Policy".
+#
 # user_consent:
 #   template_dir: res/templates/privacy
 #   version: 1.0
@@ -54,6 +62,8 @@ DEFAULT_CONFIG = """\
 #   block_events_error: >-
 #     To continue using this homeserver you must review and agree to the
 #     terms and conditions at %(consent_uri)s
+#   require_at_registration: False
+#   policy_name: Privacy Policy
 #
 """
 
@@ -67,6 +77,8 @@ class ConsentConfig(Config):
         self.user_consent_server_notice_content = None
         self.user_consent_server_notice_to_guests = False
         self.block_events_without_consent_error = None
+        self.user_consent_at_registration = False
+        self.user_consent_policy_name = "Privacy Policy"
 
     def read_config(self, config):
         consent_config = config.get("user_consent")
@@ -83,6 +95,12 @@ class ConsentConfig(Config):
         self.user_consent_server_notice_to_guests = bool(consent_config.get(
             "send_server_notice_to_guests", False,
         ))
+        self.user_consent_at_registration = bool(consent_config.get(
+            "require_at_registration", False,
+        ))
+        self.user_consent_policy_name = consent_config.get(
+            "policy_name", "Privacy Policy",
+        )
 
     def default_config(self, **kwargs):
         return DEFAULT_CONFIG
synapse/config/logger.py (filename missing from the capture; inferred from the content)

@@ -50,6 +50,7 @@ handlers:
         maxBytes: 104857600
         backupCount: 10
         filters: [context]
+        encoding: utf8
     console:
         class: logging.StreamHandler
         formatter: precise
synapse/crypto/keyclient.py (filename missing from the capture; inferred from the content)

@@ -15,6 +15,8 @@
 
 import logging
 
+from six.moves import urllib
+
 from canonicaljson import json
 
 from twisted.internet import defer, reactor
@@ -28,15 +30,15 @@ from synapse.util import logcontext
 
 logger = logging.getLogger(__name__)
 
-KEY_API_V1 = b"/_matrix/key/v1/"
+KEY_API_V2 = "/_matrix/key/v2/server/%s"
 
 
 @defer.inlineCallbacks
-def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
+def fetch_server_key(server_name, tls_client_options_factory, key_id):
     """Fetch the keys for a remote server."""
 
     factory = SynapseKeyClientFactory()
-    factory.path = path
+    factory.path = KEY_API_V2 % (urllib.parse.quote(key_id), )
     factory.host = server_name
     endpoint = matrix_federation_endpoint(
         reactor, server_name, tls_client_options_factory, timeout=30
synapse/crypto/keyring.py (filename missing from the capture; inferred from the content)

@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
-# Copyright 2017 New Vector Ltd.
+# Copyright 2017, 2018 New Vector Ltd.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -18,8 +18,6 @@ import hashlib
 import logging
 from collections import namedtuple
 
-from six.moves import urllib
-
 from signedjson.key import (
     decode_verify_key_bytes,
     encode_verify_key_base64,
@@ -395,32 +393,13 @@ class Keyring(object):
 
     @defer.inlineCallbacks
     def get_keys_from_server(self, server_name_and_key_ids):
-        @defer.inlineCallbacks
-        def get_key(server_name, key_ids):
-            keys = None
-            try:
-                keys = yield self.get_server_verify_key_v2_direct(
-                    server_name, key_ids
-                )
-            except Exception as e:
-                logger.info(
-                    "Unable to get key %r for %r directly: %s %s",
-                    key_ids, server_name,
-                    type(e).__name__, str(e),
-                )
-
-            if not keys:
-                keys = yield self.get_server_verify_key_v1_direct(
-                    server_name, key_ids
-                )
-
-            keys = {server_name: keys}
-
-            defer.returnValue(keys)
-
         results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
             [
-                run_in_background(get_key, server_name, key_ids)
+                run_in_background(
+                    self.get_server_verify_key_v2_direct,
+                    server_name,
+                    key_ids,
+                )
                 for server_name, key_ids in server_name_and_key_ids
             ],
             consumeErrors=True,
@@ -525,10 +504,7 @@ class Keyring(object):
                 continue
 
             (response, tls_certificate) = yield fetch_server_key(
-                server_name, self.hs.tls_client_options_factory,
-                path=("/_matrix/key/v2/server/%s" % (
-                    urllib.parse.quote(requested_key_id),
-                )).encode("ascii"),
+                server_name, self.hs.tls_client_options_factory, requested_key_id
             )
 
             if (u"signatures" not in response
@@ -657,78 +633,6 @@ class Keyring(object):
 
         defer.returnValue(results)
 
-    @defer.inlineCallbacks
-    def get_server_verify_key_v1_direct(self, server_name, key_ids):
-        """Finds a verification key for the server with one of the key ids.
-        Args:
-            server_name (str): The name of the server to fetch a key for.
-            keys_ids (list of str): The key_ids to check for.
-        """
-
-        # Try to fetch the key from the remote server.
-
-        (response, tls_certificate) = yield fetch_server_key(
-            server_name, self.hs.tls_client_options_factory
-        )
-
-        # Check the response.
-
-        x509_certificate_bytes = crypto.dump_certificate(
-            crypto.FILETYPE_ASN1, tls_certificate
-        )
-
-        if ("signatures" not in response
-                or server_name not in response["signatures"]):
-            raise KeyLookupError("Key response not signed by remote server")
-
-        if "tls_certificate" not in response:
-            raise KeyLookupError("Key response missing TLS certificate")
-
-        tls_certificate_b64 = response["tls_certificate"]
-
-        if encode_base64(x509_certificate_bytes) != tls_certificate_b64:
-            raise KeyLookupError("TLS certificate doesn't match")
-
-        # Cache the result in the datastore.
-
-        time_now_ms = self.clock.time_msec()
-
-        verify_keys = {}
-        for key_id, key_base64 in response["verify_keys"].items():
-            if is_signing_algorithm_supported(key_id):
-                key_bytes = decode_base64(key_base64)
-                verify_key = decode_verify_key_bytes(key_id, key_bytes)
-                verify_key.time_added = time_now_ms
-                verify_keys[key_id] = verify_key
-
-        for key_id in response["signatures"][server_name]:
-            if key_id not in response["verify_keys"]:
-                raise KeyLookupError(
-                    "Key response must include verification keys for all"
-                    " signatures"
-                )
-            if key_id in verify_keys:
-                verify_signed_json(
-                    response,
-                    server_name,
-                    verify_keys[key_id]
-                )
-
-        yield self.store.store_server_certificate(
-            server_name,
-            server_name,
-            time_now_ms,
-            tls_certificate,
-        )
-
-        yield self.store_keys(
-            server_name=server_name,
-            from_server=server_name,
-            verify_keys=verify_keys,
-        )
-
-        defer.returnValue(verify_keys)
-
     def store_keys(self, server_name, from_server, verify_keys):
         """Store a collection of verify keys for a given server
         Args:
|
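The hunks above (what appears to be `synapse/crypto/keyring.py`) drop the v1 fallback entirely: each requested server is now fetched concurrently over the v2 key API alone. Below is a minimal sketch of that fan-out pattern in plain Twisted; `fetch_v2_keys` is a hypothetical stand-in for `get_server_verify_key_v2_direct`, and the `logcontext`/`run_in_background` plumbing of the real code is omitted.

```python
from twisted.internet import defer


@defer.inlineCallbacks
def fetch_v2_keys(server_name, key_ids):
    # Hypothetical stand-in for Keyring.get_server_verify_key_v2_direct;
    # in Synapse this queries /_matrix/key/v2/server on the remote host.
    yield defer.succeed(None)  # placeholder for the network round-trip
    defer.returnValue({server_name: {key_id: "<verify key>" for key_id in key_ids}})


@defer.inlineCallbacks
def fetch_all(server_name_and_key_ids):
    # One deferred per server, gathered together; consumeErrors=True stops a
    # single failed fetch from swallowing the rest, as in the diff above.
    results = yield defer.gatherResults(
        [
            fetch_v2_keys(server_name, key_ids)
            for server_name, key_ids in server_name_and_key_ids
        ],
        consumeErrors=True,
    )
    defer.returnValue(results)
```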
@@ -200,11 +200,11 @@ def _is_membership_change_allowed(event, auth_events):
     membership = event.content["membership"]

     # Check if this is the room creator joining:
-    if len(event.prev_events) == 1 and Membership.JOIN == membership:
+    if len(event.prev_event_ids()) == 1 and Membership.JOIN == membership:
         # Get room creation event:
         key = (EventTypes.Create, "", )
         create = auth_events.get(key)
-        if create and event.prev_events[0][0] == create.event_id:
+        if create and event.prev_event_ids()[0] == create.event_id:
             if create.content["creator"] == event.state_key:
                 return

@@ -159,6 +159,24 @@ class EventBase(object):
     def keys(self):
         return six.iterkeys(self._event_dict)

+    def prev_event_ids(self):
+        """Returns the list of prev event IDs. The order matches the order
+        specified in the event, though there is no meaning to it.
+
+        Returns:
+            list[str]: The list of event IDs of this event's prev_events
+        """
+        return [e for e, _ in self.prev_events]
+
+    def auth_event_ids(self):
+        """Returns the list of auth event IDs. The order matches the order
+        specified in the event, though there is no meaning to it.
+
+        Returns:
+            list[str]: The list of event IDs of this event's auth_events
+        """
+        return [e for e, _ in self.auth_events]
+

 class FrozenEvent(EventBase):
     def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):
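The two new accessors centralise what was previously open-coded tuple unpacking at every call site: in these room versions `prev_events` and `auth_events` are stored as `(event_id, hashes)` pairs. A toy before/after, using a minimal stand-in rather than the real `EventBase`:

```python
class FakeEvent(object):
    """Stand-in: prev_events/auth_events hold (event_id, hashes) pairs."""

    def __init__(self, prev_events, auth_events):
        self.prev_events = prev_events
        self.auth_events = auth_events

    def prev_event_ids(self):
        return [e for e, _ in self.prev_events]

    def auth_event_ids(self):
        return [e for e, _ in self.auth_events]


ev = FakeEvent(
    prev_events=[("$prev1:example.com", {"sha256": "..."})],
    auth_events=[("$create:example.com", {"sha256": "..."})],
)

# Old style, repeated at dozens of call sites:
assert [e_id for e_id, _ in ev.prev_events] == ["$prev1:example.com"]
# New style, via the helper:
assert ev.prev_event_ids() == ["$prev1:example.com"]
```

The bulk of the federation-handler hunks later in this patch are exactly this mechanical substitution.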
@@ -162,8 +162,30 @@ class FederationServer(FederationBase):
                     p["age_ts"] = request_time - int(p["age"])
                     del p["age"]

+                # We try and pull out an event ID so that if later checks fail we
+                # can log something sensible. We don't mandate an event ID here in
+                # case future event formats get rid of the key.
+                possible_event_id = p.get("event_id", "<Unknown>")
+
+                # Now we get the room ID so that we can check that we know the
+                # version of the room.
+                room_id = p.get("room_id")
+                if not room_id:
+                    logger.info(
+                        "Ignoring PDU as does not have a room_id. Event ID: %s",
+                        possible_event_id,
+                    )
+                    continue
+
+                try:
+                    # In future we will actually use the room version to parse the
+                    # PDU into an event.
+                    yield self.store.get_room_version(room_id)
+                except NotFoundError:
+                    logger.info("Ignoring PDU for unknown room_id: %s", room_id)
+                    continue
+
                 event = event_from_pdu_json(p)
-                room_id = event.room_id
                 pdus_by_room.setdefault(room_id, []).append(event)

         pdu_results = {}
@@ -323,11 +345,6 @@ class FederationServer(FederationBase):
         else:
             defer.returnValue((404, ""))

-    @defer.inlineCallbacks
-    @log_function
-    def on_pull_request(self, origin, versions):
-        raise NotImplementedError("Pull transactions not implemented")
-
     @defer.inlineCallbacks
     def on_query_request(self, query_type, args):
         received_queries_counter.labels(query_type).inc()
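Incoming PDUs are now vetted before being parsed: no `room_id`, or a room we have no version for, means the PDU is logged and skipped. A rough standalone sketch of that guard, with a dict standing in for the datastore's `get_room_version` lookup:

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical in-memory stand-in for the datastore's room-version table.
KNOWN_ROOM_VERSIONS = {"!known:example.com": "1"}


def filter_pdus(pdus):
    """Keep only PDUs for rooms we know about, mirroring the guard above."""
    kept = []
    for p in pdus:
        possible_event_id = p.get("event_id", "<Unknown>")
        room_id = p.get("room_id")
        if not room_id:
            logger.info(
                "Ignoring PDU as does not have a room_id. Event ID: %s",
                possible_event_id,
            )
            continue
        if room_id not in KNOWN_ROOM_VERSIONS:
            logger.info("Ignoring PDU for unknown room_id: %s", room_id)
            continue
        kept.append(p)
    return kept


assert filter_pdus([
    {"event_id": "$a", "room_id": "!known:example.com"},
    {"event_id": "$b", "room_id": "!unknown:example.com"},
    {"event_id": "$c"},
]) == [{"event_id": "$a", "room_id": "!known:example.com"}]
```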
@@ -183,9 +183,7 @@ class TransactionQueue(object):
                 # banned then it won't receive the event because it won't
                 # be in the room after the ban.
                 destinations = yield self.state.get_current_hosts_in_room(
-                    event.room_id, latest_event_ids=[
-                        prev_id for prev_id, _ in event.prev_events
-                    ],
+                    event.room_id, latest_event_ids=event.prev_event_ids(),
                 )
             except Exception:
                 logger.exception(
@@ -362,14 +362,6 @@ class FederationSendServlet(BaseFederationServlet):
         defer.returnValue((code, response))


-class FederationPullServlet(BaseFederationServlet):
-    PATH = "/pull/"
-
-    # This is for when someone asks us for everything since version X
-    def on_GET(self, origin, content, query):
-        return self.handler.on_pull_request(query["origin"][0], query["v"])
-
-
 class FederationEventServlet(BaseFederationServlet):
     PATH = "/event/(?P<event_id>[^/]*)/"

@@ -1261,7 +1253,6 @@ class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):

 FEDERATION_SERVLET_CLASSES = (
     FederationSendServlet,
-    FederationPullServlet,
     FederationEventServlet,
     FederationStateServlet,
     FederationStateIdsServlet,
@@ -117,9 +117,6 @@ class Transaction(JsonEncodedObject):
                 "Require 'transaction_id' to construct a Transaction"
             )

-        for p in pdus:
-            p.transaction_id = kwargs["transaction_id"]
-
         kwargs["pdus"] = [p.get_pdu_json() for p in pdus]

         return Transaction(**kwargs)
@@ -59,6 +59,7 @@ class AuthHandler(BaseHandler):
             LoginType.EMAIL_IDENTITY: self._check_email_identity,
             LoginType.MSISDN: self._check_msisdn,
             LoginType.DUMMY: self._check_dummy_auth,
+            LoginType.TERMS: self._check_terms_auth,
         }
         self.bcrypt_rounds = hs.config.bcrypt_rounds

@@ -431,6 +432,9 @@ class AuthHandler(BaseHandler):
     def _check_dummy_auth(self, authdict, _):
         return defer.succeed(True)

+    def _check_terms_auth(self, authdict, _):
+        return defer.succeed(True)
+
     @defer.inlineCallbacks
     def _check_threepid(self, medium, authdict):
         if 'threepid_creds' not in authdict:
@@ -462,6 +466,22 @@ class AuthHandler(BaseHandler):
     def _get_params_recaptcha(self):
         return {"public_key": self.hs.config.recaptcha_public_key}

+    def _get_params_terms(self):
+        return {
+            "policies": {
+                "privacy_policy": {
+                    "version": self.hs.config.user_consent_version,
+                    "en": {
+                        "name": self.hs.config.user_consent_policy_name,
+                        "url": "%s/_matrix/consent?v=%s" % (
+                            self.hs.config.public_baseurl,
+                            self.hs.config.user_consent_version,
+                        ),
+                    },
+                },
+            },
+        }
+
     def _auth_dict_for_flows(self, flows, session):
         public_flows = []
         for f in flows:
@@ -469,6 +489,7 @@ class AuthHandler(BaseHandler):

         get_params = {
             LoginType.RECAPTCHA: self._get_params_recaptcha,
+            LoginType.TERMS: self._get_params_terms,
         }

         params = {}
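With `_get_params_terms` wired into `get_params`, a client that reaches an `m.login.terms` stage during user-interactive auth is shown the policy document described by the server's consent config. A sketch of the same structure built outside the handler (all config values here are hypothetical):

```python
def build_terms_params(public_baseurl, consent_version, policy_name):
    # Mirrors AuthHandler._get_params_terms: one policy document, keyed by
    # language, pointing at the consent resource for the configured version.
    return {
        "policies": {
            "privacy_policy": {
                "version": consent_version,
                "en": {
                    "name": policy_name,
                    "url": "%s/_matrix/consent?v=%s" % (
                        public_baseurl, consent_version,
                    ),
                },
            },
        },
    }


params = build_terms_params("https://example.com", "1.0", "Example Privacy Policy")
assert params["policies"]["privacy_policy"]["en"]["url"] == (
    "https://example.com/_matrix/consent?v=1.0"
)
```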
@@ -138,9 +138,30 @@ class DirectoryHandler(BaseHandler):
         )

     @defer.inlineCallbacks
-    def delete_association(self, requester, room_alias):
-        # association deletion for human users
+    def delete_association(self, requester, room_alias, send_event=True):
+        """Remove an alias from the directory

+        (this is only meant for human users; AS users should call
+        delete_appservice_association)
+
+        Args:
+            requester (Requester):
+            room_alias (RoomAlias):
+            send_event (bool): Whether to send an updated m.room.aliases event.
+                Note that, if we delete the canonical alias, we will always attempt
+                to send an m.room.canonical_alias event
+
+        Returns:
+            Deferred[unicode]: room id that the alias used to point to
+
+        Raises:
+            NotFoundError: if the alias doesn't exist
+
+            AuthError: if the user doesn't have perms to delete the alias (ie, the user
+                is neither the creator of the alias, nor a server admin.
+
+            SynapseError: if the alias belongs to an AS
+        """
         user_id = requester.user.to_string()

         try:
@@ -168,6 +189,7 @@ class DirectoryHandler(BaseHandler):
         room_id = yield self._delete_association(room_alias)

         try:
+            if send_event:
                 yield self.send_room_alias_update_event(
                     requester,
                     room_id
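The new `send_event` flag lets a caller that removes several aliases in one operation (the room-upgrade path later in this patch) suppress the per-alias `m.room.aliases` update and send a single event at the end. A toy stub showing only the calling convention, not the real handler:

```python
class StubDirectoryHandler(object):
    """Toy stand-in for DirectoryHandler, just to show the calling pattern."""

    def __init__(self):
        self.alias_events_sent = 0

    def delete_association(self, requester, room_alias, send_event=True):
        if send_event:
            self.alias_events_sent += 1

    def send_room_alias_update_event(self, requester, room_id):
        self.alias_events_sent += 1


handler = StubDirectoryHandler()
for alias in ["#a:example.com", "#b:example.com", "#c:example.com"]:
    # Suppress the per-alias m.room.aliases event...
    handler.delete_association("@admin:example.com", alias, send_event=False)
# ...and send one update once all the aliases are gone.
handler.send_room_alias_update_event("@admin:example.com", "!old:example.com")
assert handler.alias_events_sent == 1
```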
@@ -19,7 +19,7 @@ from six import iteritems

 from twisted.internet import defer

-from synapse.api.errors import RoomKeysVersionError, StoreError, SynapseError
+from synapse.api.errors import NotFoundError, RoomKeysVersionError, StoreError
 from synapse.util.async_helpers import Linearizer

 logger = logging.getLogger(__name__)
@@ -55,6 +55,8 @@ class E2eRoomKeysHandler(object):
             room_id(string): room ID to get keys for, for None to get keys for all rooms
             session_id(string): session ID to get keys for, for None to get keys for all
                 sessions
+        Raises:
+            NotFoundError: if the backup version does not exist
         Returns:
             A deferred list of dicts giving the session_data and message metadata for
             these room keys.
@@ -63,13 +65,19 @@ class E2eRoomKeysHandler(object):
         # we deliberately take the lock to get keys so that changing the version
         # works atomically
         with (yield self._upload_linearizer.queue(user_id)):
+            # make sure the backup version exists
+            try:
+                yield self.store.get_e2e_room_keys_version_info(user_id, version)
+            except StoreError as e:
+                if e.code == 404:
+                    raise NotFoundError("Unknown backup version")
+                else:
+                    raise
+
             results = yield self.store.get_e2e_room_keys(
                 user_id, version, room_id, session_id
             )

-            if results['rooms'] == {}:
-                raise SynapseError(404, "No room_keys found")
-
             defer.returnValue(results)

     @defer.inlineCallbacks
@@ -120,7 +128,7 @@ class E2eRoomKeysHandler(object):
         }

         Raises:
-            SynapseError: with code 404 if there are no versions defined
+            NotFoundError: if there are no versions defined
             RoomKeysVersionError: if the uploaded version is not the current version
         """

@@ -134,7 +142,7 @@ class E2eRoomKeysHandler(object):
                 version_info = yield self.store.get_e2e_room_keys_version_info(user_id)
             except StoreError as e:
                 if e.code == 404:
-                    raise SynapseError(404, "Version '%s' not found" % (version,))
+                    raise NotFoundError("Version '%s' not found" % (version,))
                 else:
                     raise

@@ -148,7 +156,7 @@ class E2eRoomKeysHandler(object):
                     raise RoomKeysVersionError(current_version=version_info['version'])
             except StoreError as e:
                 if e.code == 404:
-                    raise SynapseError(404, "Version '%s' not found" % (version,))
+                    raise NotFoundError("Version '%s' not found" % (version,))
                 else:
                     raise

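The pattern these hunks repeat, translating a storage-layer `StoreError` carrying code 404 into the API-level `NotFoundError` (which maps to a proper `M_NOT_FOUND` response), can be shown in isolation. The exception classes below are simplified stand-ins for the ones in `synapse.api.errors`:

```python
class StoreError(Exception):
    """Simplified stand-in for synapse.api.errors.StoreError."""

    def __init__(self, code, msg):
        super(StoreError, self).__init__(msg)
        self.code = code


class NotFoundError(Exception):
    """Simplified stand-in: an API error fixed to 404 / M_NOT_FOUND."""


def get_version_info(store_lookup, version):
    try:
        return store_lookup(version)
    except StoreError as e:
        if e.code == 404:
            # Re-raise as the API-level error so callers get a clean
            # M_NOT_FOUND response rather than a generic 500.
            raise NotFoundError("Version '%s' not found" % (version,))
        else:
            raise


def failing_lookup(version):
    raise StoreError(404, "no such version")


try:
    get_version_info(failing_lookup, "1")
except NotFoundError as e:
    assert "Version '1' not found" in str(e)
```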
@@ -202,24 +202,19 @@ class FederationHandler(BaseHandler):
             self.room_queues[room_id].append((pdu, origin))
             return

-        # If we're no longer in the room just ditch the event entirely. This
-        # is probably an old server that has come back and thinks we're still
-        # in the room (or we've been rejoined to the room by a state reset).
+        # If we're not in the room just ditch the event entirely. This is
+        # probably an old server that has come back and thinks we're still in
+        # the room (or we've been rejoined to the room by a state reset).
         #
-        # If we were never in the room then maybe our database got vaped and
-        # we should check if we *are* in fact in the room. If we are then we
-        # can magically rejoin the room.
+        # Note that if we were never in the room then we would have already
+        # dropped the event, since we wouldn't know the room version.
         is_in_room = yield self.auth.check_host_in_room(
             room_id,
             self.server_name
         )
         if not is_in_room:
-            was_in_room = yield self.store.was_host_joined(
-                pdu.room_id, self.server_name,
-            )
-            if was_in_room:
             logger.info(
-                "[%s %s] Ignoring PDU from %s as we've left the room",
+                "[%s %s] Ignoring PDU from %s as we're not in the room",
                 room_id, event_id, origin,
             )
             defer.returnValue(None)
@@ -239,7 +234,7 @@ class FederationHandler(BaseHandler):
                 room_id, event_id, min_depth,
             )

-            prevs = {e_id for e_id, _ in pdu.prev_events}
+            prevs = set(pdu.prev_event_ids())
             seen = yield self.store.have_seen_events(prevs)

             if min_depth and pdu.depth < min_depth:
@@ -557,38 +552,6 @@ class FederationHandler(BaseHandler):
                 room_id, event_id, event,
             )

-        # FIXME (erikj): Awful hack to make the case where we are not currently
-        # in the room work
-        # If state and auth_chain are None, then we don't need to do this check
-        # as we already know we have enough state in the DB to handle this
-        # event.
-        if state and auth_chain and not event.internal_metadata.is_outlier():
-            is_in_room = yield self.auth.check_host_in_room(
-                room_id,
-                self.server_name
-            )
-        else:
-            is_in_room = True
-
-        if not is_in_room:
-            logger.info(
-                "[%s %s] Got event for room we're not in",
-                room_id, event_id,
-            )
-
-            try:
-                yield self._persist_auth_tree(
-                    origin, auth_chain, state, event
-                )
-            except AuthError as e:
-                raise FederationError(
-                    "ERROR",
-                    e.code,
-                    e.msg,
-                    affected=event_id,
-                )
-
-        else:
         event_ids = set()
         if state:
             event_ids |= {e.event_id for e in state}
@@ -607,7 +570,7 @@ class FederationHandler(BaseHandler):
                 if e.event_id in seen_ids:
                     continue
                 e.internal_metadata.outlier = True
-                auth_ids = [e_id for e_id, _ in e.auth_events]
+                auth_ids = e.auth_event_ids()
                 auth = {
                     (e.type, e.state_key): e for e in auth_chain
                     if e.event_id in auth_ids or e.type == EventTypes.Create
@@ -726,7 +689,7 @@ class FederationHandler(BaseHandler):
         edges = [
             ev.event_id
             for ev in events
-            if set(e_id for e_id, _ in ev.prev_events) - event_ids
+            if set(ev.prev_event_ids()) - event_ids
         ]

         logger.info(
@@ -753,7 +716,7 @@ class FederationHandler(BaseHandler):
         required_auth = set(
             a_id
             for event in events + list(state_events.values()) + list(auth_events.values())
-            for a_id, _ in event.auth_events
+            for a_id in event.auth_event_ids()
         )
         auth_events.update({
             e_id: event_map[e_id] for e_id in required_auth if e_id in event_map
@@ -769,7 +732,7 @@ class FederationHandler(BaseHandler):
             auth_events.update(ret_events)

             required_auth.update(
-                a_id for event in ret_events.values() for a_id, _ in event.auth_events
+                a_id for event in ret_events.values() for a_id in event.auth_event_ids()
             )
             missing_auth = required_auth - set(auth_events)

@@ -796,7 +759,7 @@ class FederationHandler(BaseHandler):
                 required_auth.update(
                     a_id
                     for event in results if event
-                    for a_id, _ in event.auth_events
+                    for a_id in event.auth_event_ids()
                 )
                 missing_auth = required_auth - set(auth_events)

@@ -816,7 +779,7 @@ class FederationHandler(BaseHandler):
                     "auth_events": {
                         (auth_events[a_id].type, auth_events[a_id].state_key):
                             auth_events[a_id]
-                        for a_id, _ in a.auth_events
+                        for a_id in a.auth_event_ids()
                         if a_id in auth_events
                     }
                 })
@@ -828,7 +791,7 @@ class FederationHandler(BaseHandler):
                     "auth_events": {
                         (auth_events[a_id].type, auth_events[a_id].state_key):
                             auth_events[a_id]
-                        for a_id, _ in event_map[e_id].auth_events
+                        for a_id in event_map[e_id].auth_event_ids()
                         if a_id in auth_events
                     }
                 })
@@ -1041,17 +1004,17 @@ class FederationHandler(BaseHandler):
     Raises:
         SynapseError if the event does not pass muster
     """
-    if len(ev.prev_events) > 20:
+    if len(ev.prev_event_ids()) > 20:
         logger.warn("Rejecting event %s which has %i prev_events",
-                    ev.event_id, len(ev.prev_events))
+                    ev.event_id, len(ev.prev_event_ids()))
         raise SynapseError(
             http_client.BAD_REQUEST,
             "Too many prev_events",
         )

-    if len(ev.auth_events) > 10:
+    if len(ev.auth_event_ids()) > 10:
         logger.warn("Rejecting event %s which has %i auth_events",
-                    ev.event_id, len(ev.auth_events))
+                    ev.event_id, len(ev.auth_event_ids()))
         raise SynapseError(
             http_client.BAD_REQUEST,
             "Too many auth_events",
@@ -1076,7 +1039,7 @@ class FederationHandler(BaseHandler):
     def on_event_auth(self, event_id):
         event = yield self.store.get_event(event_id)
         auth = yield self.store.get_auth_chain(
-            [auth_id for auth_id, _ in event.auth_events],
+            [auth_id for auth_id in event.auth_event_ids()],
             include_given=True
         )
         defer.returnValue([e for e in auth])
@@ -1698,7 +1661,7 @@ class FederationHandler(BaseHandler):

         missing_auth_events = set()
         for e in itertools.chain(auth_events, state, [event]):
-            for e_id, _ in e.auth_events:
+            for e_id in e.auth_event_ids():
                 if e_id not in event_map:
                     missing_auth_events.add(e_id)

@@ -1717,7 +1680,7 @@ class FederationHandler(BaseHandler):
         for e in itertools.chain(auth_events, state, [event]):
             auth_for_e = {
                 (event_map[e_id].type, event_map[e_id].state_key): event_map[e_id]
-                for e_id, _ in e.auth_events
+                for e_id in e.auth_event_ids()
                 if e_id in event_map
             }
             if create_event:
@@ -1785,10 +1748,10 @@ class FederationHandler(BaseHandler):

         # This is a hack to fix some old rooms where the initial join event
         # didn't reference the create event in its auth events.
-        if event.type == EventTypes.Member and not event.auth_events:
-            if len(event.prev_events) == 1 and event.depth < 5:
+        if event.type == EventTypes.Member and not event.auth_event_ids():
+            if len(event.prev_event_ids()) == 1 and event.depth < 5:
                 c = yield self.store.get_event(
-                    event.prev_events[0][0],
+                    event.prev_event_ids()[0],
                     allow_none=True,
                 )
                 if c and c.type == EventTypes.Create:
@@ -1835,7 +1798,7 @@ class FederationHandler(BaseHandler):

         # Now get the current auth_chain for the event.
         local_auth_chain = yield self.store.get_auth_chain(
-            [auth_id for auth_id, _ in event.auth_events],
+            [auth_id for auth_id in event.auth_event_ids()],
            include_given=True
         )

@@ -1891,7 +1854,7 @@ class FederationHandler(BaseHandler):
         """
         # Check if we have all the auth events.
         current_state = set(e.event_id for e in auth_events.values())
-        event_auth_events = set(e_id for e_id, _ in event.auth_events)
+        event_auth_events = set(event.auth_event_ids())

         if event.is_state():
             event_key = (event.type, event.state_key)
@@ -1935,7 +1898,7 @@ class FederationHandler(BaseHandler):
                     continue

                 try:
-                    auth_ids = [e_id for e_id, _ in e.auth_events]
+                    auth_ids = e.auth_event_ids()
                     auth = {
                         (e.type, e.state_key): e for e in remote_auth_chain
                         if e.event_id in auth_ids or e.type == EventTypes.Create
@@ -1956,7 +1919,7 @@ class FederationHandler(BaseHandler):
                     pass

             have_events = yield self.store.get_seen_events_with_rejections(
-                [e_id for e_id, _ in event.auth_events]
+                event.auth_event_ids()
             )
             seen_events = set(have_events.keys())
         except Exception:
@@ -2058,7 +2021,7 @@ class FederationHandler(BaseHandler):
                         continue

                     try:
-                        auth_ids = [e_id for e_id, _ in ev.auth_events]
+                        auth_ids = ev.auth_event_ids()
                         auth = {
                             (e.type, e.state_key): e
                             for e in result["auth_chain"]
@@ -2250,7 +2213,7 @@ class FederationHandler(BaseHandler):
             missing_remote_ids = [e.event_id for e in missing_remotes]
             base_remote_rejected = list(missing_remotes)
             for e in missing_remotes:
-                for e_id, _ in e.auth_events:
+                for e_id in e.auth_event_ids():
                     if e_id in missing_remote_ids:
                         try:
                             base_remote_rejected.remove(e)
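Almost all of the federation-handler hunks above are the mechanical `auth_events`/`prev_events` substitution; the sanity check on event size is a representative one, since the limits stay the same and only the accessor changes. A standalone sketch of that check, with `SynapseError` simplified to `ValueError`:

```python
class SketchEvent(object):
    """Minimal stand-in for EventBase, enough to drive the check."""

    def __init__(self, prev_ids, auth_ids):
        self._prev_ids = prev_ids
        self._auth_ids = auth_ids

    def prev_event_ids(self):
        return self._prev_ids

    def auth_event_ids(self):
        return self._auth_ids


def sanity_check_event(ev, max_prev=20, max_auth=10):
    # Same shape as the check in the hunk above: reject events with
    # implausibly many prev_events or auth_events.
    if len(ev.prev_event_ids()) > max_prev:
        raise ValueError("Too many prev_events")
    if len(ev.auth_event_ids()) > max_auth:
        raise ValueError("Too many auth_events")


sanity_check_event(SketchEvent(prev_ids=["$a"], auth_ids=["$b"]))  # passes
```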
@@ -427,6 +427,9 @@ class EventCreationHandler(object):

         if event.is_state():
             prev_state = yield self.deduplicate_state_event(event, context)
+            logger.info(
+                "Not bothering to persist duplicate state event %s", event.event_id,
+            )
             if prev_state is not None:
                 defer.returnValue(prev_state)

@@ -50,7 +50,6 @@ class RegistrationHandler(BaseHandler):
         self._auth_handler = hs.get_auth_handler()
         self.profile_handler = hs.get_profile_handler()
         self.user_directory_handler = hs.get_user_directory_handler()
-        self.room_creation_handler = self.hs.get_room_creation_handler()
         self.captcha_client = CaptchaServerHttpClient(hs)

         self._next_generated_user_id = None
@@ -241,7 +240,10 @@ class RegistrationHandler(BaseHandler):
                 else:
                     # create room expects the localpart of the room alias
                     room_alias_localpart = room_alias.localpart
-                    yield self.room_creation_handler.create_room(
+
+                    # getting the RoomCreationHandler during init gives a dependency
+                    # loop
+                    yield self.hs.get_room_creation_handler().create_room(
                         fake_requester,
                         config={
                             "preset": "public_chat",
@@ -254,9 +256,6 @@ class RegistrationHandler(BaseHandler):
             except Exception as e:
                 logger.error("Failed to join new user to %r: %r", r, e)

-        # We used to generate default identicons here, but nowadays
-        # we want clients to generate their own as part of their branding
-        # rather than there being consistent matrix-wide ones, so we don't.
         defer.returnValue((user_id, token))

     @defer.inlineCallbacks
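The registration change trades an eagerly cached `RoomCreationHandler` for a lookup at call time, because resolving it during `__init__` creates a construction cycle between the two handlers. A toy reproduction of the fix; the `StubHomeServer` is a hypothetical stand-in for Synapse's dependency container:

```python
class StubHomeServer(object):
    """Hypothetical stand-in for the HomeServer dependency container."""

    def __init__(self):
        self._room_creation_handler = None

    def get_room_creation_handler(self):
        # In the real codebase this constructs RoomCreationHandler, which
        # itself needs other handlers; resolving it while those handlers
        # are still being constructed would recurse.
        if self._room_creation_handler is None:
            self._room_creation_handler = object()
        return self._room_creation_handler


class StubRegistrationHandler(object):
    def __init__(self, hs):
        # Fix: keep only the container, don't resolve the handler here.
        self.hs = hs

    def auto_create_room(self):
        # Resolve lazily, once construction has finished.
        return self.hs.get_room_creation_handler()


hs = StubHomeServer()
handler = StubRegistrationHandler(hs)
assert handler.auto_create_room() is hs.get_room_creation_handler()
```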
@@ -21,7 +21,7 @@ import math
 import string
 from collections import OrderedDict

-from six import string_types
+from six import iteritems, string_types

 from twisted.internet import defer

@@ -32,10 +32,11 @@ from synapse.api.constants import (
     JoinRules,
     RoomCreationPreset,
 )
-from synapse.api.errors import AuthError, Codes, StoreError, SynapseError
+from synapse.api.errors import AuthError, Codes, NotFoundError, StoreError, SynapseError
 from synapse.storage.state import StateFilter
 from synapse.types import RoomAlias, RoomID, RoomStreamToken, StreamToken, UserID
 from synapse.util import stringutils
+from synapse.util.async_helpers import Linearizer
 from synapse.visibility import filter_events_for_client

 from ._base import BaseHandler
@@ -73,6 +74,334 @@ class RoomCreationHandler(BaseHandler):

         self.spam_checker = hs.get_spam_checker()
         self.event_creation_handler = hs.get_event_creation_handler()
+        self.room_member_handler = hs.get_room_member_handler()
+
+        # linearizer to stop two upgrades happening at once
+        self._upgrade_linearizer = Linearizer("room_upgrade_linearizer")
+
+    @defer.inlineCallbacks
+    def upgrade_room(self, requester, old_room_id, new_version):
+        """Replace a room with a new room with a different version
+
+        Args:
+            requester (synapse.types.Requester): the user requesting the upgrade
+            old_room_id (unicode): the id of the room to be replaced
+            new_version (unicode): the new room version to use
+
+        Returns:
+            Deferred[unicode]: the new room id
+        """
+        yield self.ratelimit(requester)
+
+        user_id = requester.user.to_string()
+
+        with (yield self._upgrade_linearizer.queue(old_room_id)):
+            # start by allocating a new room id
+            r = yield self.store.get_room(old_room_id)
+            if r is None:
+                raise NotFoundError("Unknown room id %s" % (old_room_id,))
+            new_room_id = yield self._generate_room_id(
+                creator_id=user_id, is_public=r["is_public"],
+            )
+
+            logger.info("Creating new room %s to replace %s", new_room_id, old_room_id)
+
+            # we create and auth the tombstone event before properly creating the new
+            # room, to check our user has perms in the old room.
+            tombstone_event, tombstone_context = (
+                yield self.event_creation_handler.create_event(
+                    requester, {
+                        "type": EventTypes.Tombstone,
+                        "state_key": "",
+                        "room_id": old_room_id,
+                        "sender": user_id,
+                        "content": {
+                            "body": "This room has been replaced",
+                            "replacement_room": new_room_id,
+                        }
+                    },
+                    token_id=requester.access_token_id,
+                )
+            )
+            yield self.auth.check_from_context(tombstone_event, tombstone_context)
+
+            yield self.clone_exiting_room(
+                requester,
+                old_room_id=old_room_id,
+                new_room_id=new_room_id,
+                new_room_version=new_version,
+                tombstone_event_id=tombstone_event.event_id,
+            )
+
+            # now send the tombstone
+            yield self.event_creation_handler.send_nonmember_event(
+                requester, tombstone_event, tombstone_context,
+            )
+
+            old_room_state = yield tombstone_context.get_current_state_ids(self.store)
+
+            # update any aliases
+            yield self._move_aliases_to_new_room(
+                requester, old_room_id, new_room_id, old_room_state,
+            )
+
+            # and finally, shut down the PLs in the old room, and update them in the new
+            # room.
+            yield self._update_upgraded_room_pls(
+                requester, old_room_id, new_room_id, old_room_state,
+            )
+
+            defer.returnValue(new_room_id)
+
+    @defer.inlineCallbacks
+    def _update_upgraded_room_pls(
+            self, requester, old_room_id, new_room_id, old_room_state,
+    ):
+        """Send updated power levels in both rooms after an upgrade
+
+        Args:
+            requester (synapse.types.Requester): the user requesting the upgrade
+            old_room_id (unicode): the id of the room to be replaced
+            new_room_id (unicode): the id of the replacement room
+            old_room_state (dict[tuple[str, str], str]): the state map for the old room
+
+        Returns:
+            Deferred
+        """
+        old_room_pl_event_id = old_room_state.get((EventTypes.PowerLevels, ""))
+
+        if old_room_pl_event_id is None:
+            logger.warning(
+                "Not supported: upgrading a room with no PL event. Not setting PLs "
+                "in old room.",
+            )
+            return
+
+        old_room_pl_state = yield self.store.get_event(old_room_pl_event_id)
+
+        # we try to stop regular users from speaking by setting the PL required
+        # to send regular events and invites to 'Moderator' level. That's normally
+        # 50, but if the default PL in a room is 50 or more, then we set the
+        # required PL above that.
+
+        pl_content = dict(old_room_pl_state.content)
+        users_default = int(pl_content.get("users_default", 0))
+        restricted_level = max(users_default + 1, 50)
+
+        updated = False
+        for v in ("invite", "events_default"):
+            current = int(pl_content.get(v, 0))
+            if current < restricted_level:
+                logger.info(
+                    "Setting level for %s in %s to %i (was %i)",
+                    v, old_room_id, restricted_level, current,
+                )
+                pl_content[v] = restricted_level
+                updated = True
+            else:
+                logger.info(
+                    "Not setting level for %s (already %i)",
+                    v, current,
+                )
+
+        if updated:
+            try:
+                yield self.event_creation_handler.create_and_send_nonmember_event(
+                    requester, {
+                        "type": EventTypes.PowerLevels,
+                        "state_key": '',
+                        "room_id": old_room_id,
+                        "sender": requester.user.to_string(),
+                        "content": pl_content,
+                    }, ratelimit=False,
+                )
+            except AuthError as e:
+                logger.warning("Unable to update PLs in old room: %s", e)
+
+        logger.info("Setting correct PLs in new room")
+        yield self.event_creation_handler.create_and_send_nonmember_event(
+            requester, {
+                "type": EventTypes.PowerLevels,
+                "state_key": '',
+                "room_id": new_room_id,
+                "sender": requester.user.to_string(),
+                "content": old_room_pl_state.content,
+            }, ratelimit=False,
+        )
+
+    @defer.inlineCallbacks
+    def clone_exiting_room(
+            self, requester, old_room_id, new_room_id, new_room_version,
+            tombstone_event_id,
+    ):
+        """Populate a new room based on an old room
+
+        Args:
+            requester (synapse.types.Requester): the user requesting the upgrade
+            old_room_id (unicode): the id of the room to be replaced
+            new_room_id (unicode): the id to give the new room (should already have been
+                created with _gemerate_room_id())
+            new_room_version (unicode): the new room version to use
+            tombstone_event_id (unicode|str): the ID of the tombstone event in the old
+                room.
+        Returns:
+            Deferred[None]
+        """
+        user_id = requester.user.to_string()
+
+        if not self.spam_checker.user_may_create_room(user_id):
+            raise SynapseError(403, "You are not permitted to create rooms")
+
+        creation_content = {
+            "room_version": new_room_version,
+            "predecessor": {
+                "room_id": old_room_id,
+                "event_id": tombstone_event_id,
+            }
+        }
+
+        initial_state = dict()
+
+        types_to_copy = (
+            (EventTypes.JoinRules, ""),
+            (EventTypes.Name, ""),
+            (EventTypes.Topic, ""),
+            (EventTypes.RoomHistoryVisibility, ""),
+            (EventTypes.GuestAccess, ""),
+            (EventTypes.RoomAvatar, ""),
+        )
+
+        old_room_state_ids = yield self.store.get_filtered_current_state_ids(
+            old_room_id, StateFilter.from_types(types_to_copy),
+        )
+        # map from event_id to BaseEvent
+        old_room_state_events = yield self.store.get_events(old_room_state_ids.values())
+
+        for k, old_event_id in iteritems(old_room_state_ids):
+            old_event = old_room_state_events.get(old_event_id)
+            if old_event:
+                initial_state[k] = old_event.content
+
+        yield self._send_events_for_new_room(
+            requester,
+            new_room_id,
+
+            # we expect to override all the presets with initial_state, so this is
+            # somewhat arbitrary.
+            preset_config=RoomCreationPreset.PRIVATE_CHAT,
+
+            invite_list=[],
+            initial_state=initial_state,
+            creation_content=creation_content,
+        )
+
+        # XXX invites/joins
+        # XXX 3pid invites
+
+    @defer.inlineCallbacks
+    def _move_aliases_to_new_room(
+            self, requester, old_room_id, new_room_id, old_room_state,
+    ):
+        directory_handler = self.hs.get_handlers().directory_handler
+
+        aliases = yield self.store.get_aliases_for_room(old_room_id)
+
+        # check to see if we have a canonical alias.
+        canonical_alias = None
+        canonical_alias_event_id = old_room_state.get((EventTypes.CanonicalAlias, ""))
+        if canonical_alias_event_id:
+            canonical_alias_event = yield self.store.get_event(canonical_alias_event_id)
+            if canonical_alias_event:
+                canonical_alias = canonical_alias_event.content.get("alias", "")
+
+        # first we try to remove the aliases from the old room (we suppress sending
+        # the room_aliases event until the end).
+        #
+        # Note that we'll only be able to remove aliases that (a) aren't owned by an AS,
+        # and (b) unless the user is a server admin, which the user created.
+        #
+        # This is probably correct - given we don't allow such aliases to be deleted
+        # normally, it would be odd to allow it in the case of doing a room upgrade -
+        # but it makes the upgrade less effective, and you have to wonder why a room
+        # admin can't remove aliases that point to that room anyway.
+        # (cf https://github.com/matrix-org/synapse/issues/2360)
+        #
+        removed_aliases = []
+        for alias_str in aliases:
+            alias = RoomAlias.from_string(alias_str)
+            try:
+                yield directory_handler.delete_association(
+                    requester, alias, send_event=False,
+                )
+                removed_aliases.append(alias_str)
+            except SynapseError as e:
+                logger.warning(
+                    "Unable to remove alias %s from old room: %s",
+                    alias, e,
+                )
+
+        # if we didn't find any aliases, or couldn't remove anyway, we can skip the rest
+        # of this.
+        if not removed_aliases:
+            return
+
+        try:
+            # this can fail if, for some reason, our user doesn't have perms to send
+            # m.room.aliases events in the old room (note that we've already checked that
+            # they have perms to send a tombstone event, so that's not terribly likely).
+            #
+            # If that happens, it's regrettable, but we should carry on: it's the same
+            # as when you remove an alias from the directory normally - it just means that
+            # the aliases event gets out of sync with the directory
+            # (cf https://github.com/vector-im/riot-web/issues/2369)
+            yield directory_handler.send_room_alias_update_event(
+                requester, old_room_id,
+            )
+        except AuthError as e:
+            logger.warning(
+                "Failed to send updated alias event on old room: %s", e,
+            )
+
+        # we can now add any aliases we successfully removed to the new room.
+        for alias in removed_aliases:
+            try:
+                yield directory_handler.create_association(
+                    requester, RoomAlias.from_string(alias),
+                    new_room_id, servers=(self.hs.hostname, ),
+                    send_event=False,
+                )
+                logger.info("Moved alias %s to new room", alias)
+            except SynapseError as e:
+                # I'm not really expecting this to happen, but it could if the spam
+                # checking module decides it shouldn't, or similar.
+                logger.error(
+                    "Error adding alias %s to new room: %s",
+                    alias, e,
+                )
+
+        try:
+            if canonical_alias and (canonical_alias in removed_aliases):
+                yield self.event_creation_handler.create_and_send_nonmember_event(
+                    requester,
+                    {
+                        "type": EventTypes.CanonicalAlias,
+                        "state_key": "",
+                        "room_id": new_room_id,
+                        "sender": requester.user.to_string(),
+                        "content": {"alias": canonical_alias, },
+                    },
+                    ratelimit=False
+                )
+
+            yield directory_handler.send_room_alias_update_event(
+                requester, new_room_id,
+            )
+        except SynapseError as e:
+            # again I'm not really expecting this to fail, but if it does, I'd rather
+            # we returned the new room to the client at this point.
+            logger.error(
+                "Unable to send updated alias events in new room: %s", e,
+            )
+
     @defer.inlineCallbacks
     def create_room(self, requester, config, ratelimit=True,
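The tombstone and the replacement room's create event point at each other: the tombstone's content names the new room, and the new `m.room.create` carries a `predecessor` block naming the old room and the tombstone event. A small sketch of the two payloads, taken directly from the code above (the concrete ids and version are hypothetical):

```python
def make_tombstone_content(new_room_id):
    # m.room.tombstone content sent into the old room, as in upgrade_room.
    return {
        "body": "This room has been replaced",
        "replacement_room": new_room_id,
    }


def make_creation_content(new_room_version, old_room_id, tombstone_event_id):
    # m.room.create content for the replacement room, as in clone_exiting_room:
    # the new version plus a "predecessor" pointer back at the old room.
    return {
        "room_version": new_room_version,
        "predecessor": {
            "room_id": old_room_id,
            "event_id": tombstone_event_id,
        },
    }


content = make_creation_content("4", "!old:example.com", "$tombstone")
assert content["predecessor"]["room_id"] == "!old:example.com"
```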
@@ -165,28 +494,7 @@ class RoomCreationHandler(BaseHandler):
         visibility = config.get("visibility", None)
         is_public = visibility == "public"

-        # autogen room IDs and try to create it. We may clash, so just
-        # try a few times till one goes through, giving up eventually.
-        attempts = 0
-        room_id = None
-        while attempts < 5:
-            try:
-                random_string = stringutils.random_string(18)
-                gen_room_id = RoomID(
-                    random_string,
-                    self.hs.hostname,
-                )
-                yield self.store.store_room(
-                    room_id=gen_room_id.to_string(),
-                    room_creator_user_id=user_id,
-                    is_public=is_public
-                )
-                room_id = gen_room_id.to_string()
-                break
-            except StoreError:
-                attempts += 1
-        if not room_id:
-            raise StoreError(500, "Couldn't generate a room ID.")
+        room_id = yield self._generate_room_id(creator_id=user_id, is_public=is_public)

         if room_alias:
             directory_handler = self.hs.get_handlers().directory_handler
@@ -216,18 +524,15 @@ class RoomCreationHandler(BaseHandler):
         # override any attempt to set room versions via the creation_content
         creation_content["room_version"] = room_version

-        room_member_handler = self.hs.get_room_member_handler()
-
         yield self._send_events_for_new_room(
             requester,
             room_id,
-            room_member_handler,
             preset_config=preset_config,
             invite_list=invite_list,
             initial_state=initial_state,
             creation_content=creation_content,
             room_alias=room_alias,
-            power_level_content_override=config.get("power_level_content_override", {}),
+            power_level_content_override=config.get("power_level_content_override"),
             creator_join_profile=creator_join_profile,
         )

@@ -263,7 +568,7 @@ class RoomCreationHandler(BaseHandler):
             if is_direct:
                 content["is_direct"] = is_direct

-            yield room_member_handler.update_membership(
+            yield self.room_member_handler.update_membership(
                 requester,
                 UserID.from_string(invitee),
                 room_id,
@@ -301,14 +606,13 @@ class RoomCreationHandler(BaseHandler):
         self,
         creator,  # A Requester object.
         room_id,
-        room_member_handler,
         preset_config,
         invite_list,
         initial_state,
         creation_content,
-        room_alias,
-        power_level_content_override,
-        creator_join_profile,
+        room_alias=None,
+        power_level_content_override=None,
+        creator_join_profile=None,
     ):
         def create(etype, content, **kwargs):
             e = {
@@ -324,6 +628,7 @@ class RoomCreationHandler(BaseHandler):
         @defer.inlineCallbacks
         def send(etype, content, **kwargs):
             event = create(etype, content, **kwargs)
+            logger.info("Sending %s in new room", etype)
             yield self.event_creation_handler.create_and_send_nonmember_event(
                 creator,
                 event,
@@ -346,7 +651,8 @@ class RoomCreationHandler(BaseHandler):
             content=creation_content,
         )

-        yield room_member_handler.update_membership(
+        logger.info("Sending %s in new room", EventTypes.Member)
+        yield self.room_member_handler.update_membership(
             creator,
             creator.user,
             room_id,
@@ -388,6 +694,7 @@ class RoomCreationHandler(BaseHandler):
         for invitee in invite_list:
             power_level_content["users"][invitee] = 100

+        if power_level_content_override:
             power_level_content.update(power_level_content_override)

         yield send(
@@ -427,6 +734,30 @@ class RoomCreationHandler(BaseHandler):
             content=content,
         )

+    @defer.inlineCallbacks
+    def _generate_room_id(self, creator_id, is_public):
+        # autogen room IDs and try to create it. We may clash, so just
+        # try a few times till one goes through, giving up eventually.
+        attempts = 0
+        while attempts < 5:
+            try:
+                random_string = stringutils.random_string(18)
+                gen_room_id = RoomID(
+                    random_string,
+                    self.hs.hostname,
+                ).to_string()
+                if isinstance(gen_room_id, bytes):
+                    gen_room_id = gen_room_id.decode('utf-8')
+                yield self.store.store_room(
+                    room_id=gen_room_id,
+                    room_creator_user_id=creator_id,
+                    is_public=is_public,
+                )
+                defer.returnValue(gen_room_id)
+            except StoreError:
+                attempts += 1
+        raise StoreError(500, "Couldn't generate a room ID.")
+

 class RoomContextHandler(object):
     def __init__(self, hs):
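The extracted `_generate_room_id` keeps the old retry-on-clash behaviour: pick a random localpart, try to reserve it in the store, and retry up to five times if the insert collides. The same loop sketched with an in-memory set standing in for `store_room` (and a stand-in for `stringutils.random_string`):

```python
import random
import string


def random_string(length):
    # Stand-in for synapse.util.stringutils.random_string.
    return "".join(random.choice(string.ascii_letters) for _ in range(length))


def generate_room_id(existing_rooms, hostname, attempts=5):
    """Retry-on-clash allocation, shaped like _generate_room_id above."""
    for _ in range(attempts):
        room_id = "!%s:%s" % (random_string(18), hostname)
        if room_id not in existing_rooms:  # store_room raises StoreError on clash
            existing_rooms.add(room_id)
            return room_id
    raise RuntimeError("Couldn't generate a room ID.")


rooms = set()
room_id = generate_room_id(rooms, "example.com")
assert room_id in rooms and room_id.startswith("!")
```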
@@ -63,11 +63,8 @@ class TypingHandler(object):
         self._member_typing_until = {}  # clock time we expect to stop
         self._member_last_federation_poke = {}

-        # map room IDs to serial numbers
-        self._room_serials = {}
         self._latest_room_serial = 0
-        # map room IDs to sets of users currently typing
-        self._room_typing = {}
+        self._reset()

         # caches which room_ids changed at which serials
         self._typing_stream_change_cache = StreamChangeCache(
@@ -79,6 +76,15 @@ class TypingHandler(object):
             5000,
         )

+    def _reset(self):
+        """
+        Reset the typing handler's data caches.
+        """
+        # map room IDs to serial numbers
+        self._room_serials = {}
+        # map room IDs to sets of users currently typing
+        self._room_typing = {}
+
     def _handle_timeouts(self):
         logger.info("Checking for typing timeouts")

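Pulling the two caches into `_reset` gives the rest of the code a single hook for wiping typing state; the call site (on the worker replication path, when the incoming stream id is lower than what we have already seen) is not part of this hunk, so the driver below is a hedged guess at how it is used:

```python
class SketchTypingHandler(object):
    """Toy version of TypingHandler's serial/typing caches."""

    def __init__(self):
        self._latest_room_serial = 0
        self._reset()

    def _reset(self):
        # map room IDs to serial numbers
        self._room_serials = {}
        # map room IDs to sets of users currently typing
        self._room_typing = {}

    def process_replication(self, token):
        if token < self._latest_room_serial:
            # The stream has gone backwards (e.g. the master restarted), so
            # our cached serials are meaningless: start over.
            self._reset()
        self._latest_room_serial = token


h = SketchTypingHandler()
h.process_replication(10)
h._room_serials["!room:example.com"] = 10
h.process_replication(3)  # stream went backwards, caches wiped
assert h._room_serials == {}
```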
@@ -468,13 +468,13 @@ def set_cors_headers(request):
     Args:
         request (twisted.web.http.Request): The http request to add CORs to.
     """
-    request.setHeader("Access-Control-Allow-Origin", "*")
+    request.setHeader(b"Access-Control-Allow-Origin", b"*")
     request.setHeader(
-        "Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS"
+        b"Access-Control-Allow-Methods", b"GET, POST, PUT, DELETE, OPTIONS"
     )
     request.setHeader(
-        "Access-Control-Allow-Headers",
-        "Origin, X-Requested-With, Content-Type, Accept, Authorization"
+        b"Access-Control-Allow-Headers",
+        b"Origin, X-Requested-With, Content-Type, Accept, Authorization"
     )

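On Python 3, Twisted expects header names and values as `bytes`; passing `str` can mis-serialize or fail outright, hence the `b` prefixes above. A minimal check against Twisted's own header class:

```python
from twisted.web.http_headers import Headers

# bytes is the safe common ground for the Python 2 + 3 codebase this
# diff targets (recent Twisted will encode str for you, but not all
# the versions this code had to support).
headers = Headers()
headers.addRawHeader(b"Access-Control-Allow-Origin", b"*")
headers.addRawHeader(
    b"Access-Control-Allow-Methods", b"GET, POST, PUT, DELETE, OPTIONS"
)
assert headers.getRawHeaders(b"Access-Control-Allow-Origin") == [b"*"]
```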
@@ -121,16 +121,15 @@ def parse_string(request, name, default=None, required=False,

     Args:
         request: the twisted HTTP request.
-        name (bytes/unicode): the name of the query parameter.
-        default (bytes/unicode|None): value to use if the parameter is absent,
+        name (bytes|unicode): the name of the query parameter.
+        default (bytes|unicode|None): value to use if the parameter is absent,
             defaults to None. Must be bytes if encoding is None.
         required (bool): whether to raise a 400 SynapseError if the
             parameter is absent, defaults to False.
-        allowed_values (list[bytes/unicode]): List of allowed values for the
+        allowed_values (list[bytes|unicode]): List of allowed values for the
             string, or None if any value is allowed, defaults to None. Must be
             the same type as name, if given.
-        encoding: The encoding to decode the name to, and decode the string
-            content with.
+        encoding (str|None): The encoding to decode the string content with.

     Returns:
         bytes/unicode|None: A string value or the default. Unicode if encoding
@@ -85,7 +85,10 @@ class EmailPusher(object):
         self.timed_call = None
 
     def on_new_notifications(self, min_stream_ordering, max_stream_ordering):
-        self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
+        if self.max_stream_ordering:
+            self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
+        else:
+            self.max_stream_ordering = max_stream_ordering
         self._start_processing()
 
     def on_new_receipts(self, min_stream_id, max_stream_id):
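`self.max_stream_ordering` starts out as None, and Python 3's `max()` refuses to order None against an int, so the unguarded call crashed the email pusher. The failure in miniature:

    # Why the new guard is needed: Python 2 ordered None before any int,
    # Python 3 raises instead.
    max_stream_ordering = None
    try:
        max_stream_ordering = max(42, max_stream_ordering)
    except TypeError as e:
        print(e)  # '>' not supported between instances of 'NoneType' and 'int'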
@@ -311,10 +311,10 @@ class HttpPusher(object):
                 ]
             }
         }
-        if event.type == 'm.room.member':
+        if event.type == 'm.room.member' and event.is_state():
             d['notification']['membership'] = event.content['membership']
             d['notification']['user_is_target'] = event.state_key == self.user_id
-        if self.hs.config.push_include_content and 'content' in event:
+        if self.hs.config.push_include_content and event.content:
             d['notification']['content'] = event.content
 
         # We no longer send aliases separately, instead, we send the human
@@ -26,7 +26,6 @@ import bleach
 import jinja2
 
 from twisted.internet import defer
-from twisted.mail.smtp import sendmail
 
 from synapse.api.constants import EventTypes
 from synapse.api.errors import StoreError
@@ -85,6 +84,7 @@ class Mailer(object):
         self.notif_template_html = notif_template_html
         self.notif_template_text = notif_template_text
 
+        self.sendmail = self.hs.get_sendmail()
         self.store = self.hs.get_datastore()
         self.macaroon_gen = self.hs.get_macaroon_generator()
         self.state_handler = self.hs.get_state_handler()
@@ -191,11 +191,11 @@ class Mailer(object):
         multipart_msg.attach(html_part)
 
         logger.info("Sending email push notification to %s" % email_address)
-        # logger.debug(html_text)
 
-        yield sendmail(
+        yield self.sendmail(
             self.hs.config.email_smtp_host,
-            raw_from, raw_to, multipart_msg.as_string(),
+            raw_from, raw_to, multipart_msg.as_string().encode('utf8'),
+            reactor=self.hs.get_reactor(),
             port=self.hs.config.email_smtp_port,
             requireAuthentication=self.hs.config.email_smtp_user is not None,
             username=self.hs.config.email_smtp_user,
@@ -333,7 +333,7 @@ class Mailer(object):
                            notif_events, user_id, reason):
         if len(notifs_by_room) == 1:
             # Only one room has new stuff
-            room_id = notifs_by_room.keys()[0]
+            room_id = list(notifs_by_room.keys())[0]
 
             # If the room has some kind of name, use it, but we don't
             # want the generated-from-names one here otherwise we'll
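On Python 2 `dict.keys()` returned a list, so indexing worked; on Python 3 it returns a view object, which is not subscriptable. Wrapping it in `list(...)` is the minimal fix; `next(iter(d))` would be an equivalent, allocation-free alternative:

    notifs_by_room = {"!room:example.org": ["notif"]}
    try:
        notifs_by_room.keys()[0]
    except TypeError as e:
        print(e)  # py3: 'dict_keys' object is not subscriptable
    room_id = list(notifs_by_room.keys())[0]   # the fix used above
    room_id = next(iter(notifs_by_room))       # equivalent alternative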
@@ -124,7 +124,7 @@ class PushRuleEvaluatorForEvent(object):
 
         # XXX: optimisation: cache our pattern regexps
         if condition['key'] == 'content.body':
-            body = self._event["content"].get("body", None)
+            body = self._event.content.get("body", None)
             if not body:
                 return False
 
@@ -140,7 +140,7 @@ class PushRuleEvaluatorForEvent(object):
         if not display_name:
             return False
 
-        body = self._event["content"].get("body", None)
+        body = self._event.content.get("body", None)
         if not body:
             return False
 
@@ -51,7 +51,6 @@ REQUIREMENTS = {
     "daemonize>=2.3.1": ["daemonize"],
     "bcrypt>=3.1.0": ["bcrypt>=3.1.0"],
     "pillow>=3.1.2": ["PIL"],
-    "pydenticon>=0.2": ["pydenticon"],
     "sortedcontainers>=1.4.4": ["sortedcontainers"],
     "psutil>=2.0.0": ["psutil>=2.0.0"],
     "pysaml2>=3.0.0": ["saml2"],
@@ -106,7 +106,7 @@ class ReplicationClientHandler(object):
 
         Can be overriden in subclasses to handle more.
         """
-        logger.info("Received rdata %s -> %s", stream_name, token)
+        logger.debug("Received rdata %s -> %s", stream_name, token)
         return self.store.process_replication_rows(stream_name, token, rows)
 
     def on_position(self, stream_name, token):
@@ -656,7 +656,7 @@ tcp_inbound_commands = LaterGauge(
     "",
     ["command", "name"],
     lambda: {
-        (k[0], p.name,): count
+        (k, p.name,): count
         for p in connected_connections
         for k, count in iteritems(p.inbound_commands_counter)
     },
@@ -667,7 +667,7 @@ tcp_outbound_commands = LaterGauge(
     "",
     ["command", "name"],
     lambda: {
-        (k[0], p.name,): count
+        (k, p.name,): count
         for p in connected_connections
         for k, count in iteritems(p.outbound_commands_counter)
     },
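The counters here are keyed by the command name string, so indexing the key with `k[0]` produced a one-character metric label. A toy reproduction with made-up counts:

    # Toy reproduction of the truncated-label bug fixed above.
    inbound_commands_counter = {"RDATA": 7, "PING": 3}
    bad = {(k[0],): count for k, count in inbound_commands_counter.items()}
    good = {(k,): count for k, count in inbound_commands_counter.items()}
    print(bad)   # {('R',): 7, ('P',): 3}  -- label is just the first char
    print(good)  # {('RDATA',): 7, ('PING',): 3}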
@@ -47,6 +47,7 @@ from synapse.rest.client.v2_alpha import (
     register,
     report_event,
     room_keys,
+    room_upgrade_rest_servlet,
     sendtodevice,
     sync,
     tags,
@@ -116,3 +117,4 @@ class ClientRestResource(JsonResource):
         sendtodevice.register_servlets(hs, client_resource)
         user_directory.register_servlets(hs, client_resource)
         groups.register_servlets(hs, client_resource)
+        room_upgrade_rest_servlet.register_servlets(hs, client_resource)
@@ -68,6 +68,29 @@ function captchaDone() {
 </html>
 """
 
+TERMS_TEMPLATE = """
+<html>
+<head>
+<title>Authentication</title>
+<meta name='viewport' content='width=device-width, initial-scale=1,
+    user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
+<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
+</head>
+<body>
+<form id="registrationForm" method="post" action="%(myurl)s">
+    <div>
+        <p>
+            Please click the button below if you agree to the
+            <a href="%(terms_url)s">privacy policy of this homeserver.</a>
+        </p>
+        <input type="hidden" name="session" value="%(session)s" />
+        <input type="submit" value="Agree" />
+    </div>
+</form>
+</body>
+</html>
+"""
+
 SUCCESS_TEMPLATE = """
 <html>
 <head>
@@ -130,6 +153,27 @@ class AuthRestServlet(RestServlet):
             request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
             request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
 
+            request.write(html_bytes)
+            finish_request(request)
+            defer.returnValue(None)
+        elif stagetype == LoginType.TERMS:
+            session = request.args['session'][0]
+
+            html = TERMS_TEMPLATE % {
+                'session': session,
+                'terms_url': "%s/_matrix/consent?v=%s" % (
+                    self.hs.config.public_baseurl,
+                    self.hs.config.user_consent_version,
+                ),
+                'myurl': "%s/auth/%s/fallback/web" % (
+                    CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
+                ),
+            }
+            html_bytes = html.encode("utf8")
+            request.setResponseCode(200)
+            request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
+            request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
+
             request.write(html_bytes)
             finish_request(request)
             defer.returnValue(None)
@@ -139,7 +183,7 @@ class AuthRestServlet(RestServlet):
     @defer.inlineCallbacks
     def on_POST(self, request, stagetype):
         yield
-        if stagetype == "m.login.recaptcha":
+        if stagetype == LoginType.RECAPTCHA:
             if ('g-recaptcha-response' not in request.args or
                     len(request.args['g-recaptcha-response'])) == 0:
                 raise SynapseError(400, "No captcha response supplied")
@@ -178,6 +222,41 @@ class AuthRestServlet(RestServlet):
             request.write(html_bytes)
             finish_request(request)
 
+            defer.returnValue(None)
+        elif stagetype == LoginType.TERMS:
+            if ('session' not in request.args or
+                    len(request.args['session'])) == 0:
+                raise SynapseError(400, "No session supplied")
+
+            session = request.args['session'][0]
+            authdict = {'session': session}
+
+            success = yield self.auth_handler.add_oob_auth(
+                LoginType.TERMS,
+                authdict,
+                self.hs.get_ip_from_request(request)
+            )
+
+            if success:
+                html = SUCCESS_TEMPLATE
+            else:
+                html = TERMS_TEMPLATE % {
+                    'session': session,
+                    'terms_url': "%s/_matrix/consent?v=%s" % (
+                        self.hs.config.public_baseurl,
+                        self.hs.config.user_consent_version,
+                    ),
+                    'myurl': "%s/auth/%s/fallback/web" % (
+                        CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
+                    ),
+                }
+            html_bytes = html.encode("utf8")
+            request.setResponseCode(200)
+            request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
+            request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
+
+            request.write(html_bytes)
+            finish_request(request)
             defer.returnValue(None)
         else:
             raise SynapseError(404, "Unknown auth stage type")
@@ -359,6 +359,13 @@ class RegisterRestServlet(RestServlet):
                 [LoginType.MSISDN, LoginType.EMAIL_IDENTITY]
             ])
 
+        # Append m.login.terms to all flows if we're requiring consent
+        if self.hs.config.user_consent_at_registration:
+            new_flows = []
+            for flow in flows:
+                flow.append(LoginType.TERMS)
+            flows.extend(new_flows)
+
         auth_result, params, session_id = yield self.auth_handler.check_auth(
             flows, body, self.hs.get_ip_from_request(request)
         )
@@ -445,6 +452,12 @@ class RegisterRestServlet(RestServlet):
                 params.get("bind_msisdn")
             )
 
+        if auth_result and LoginType.TERMS in auth_result:
+            logger.info("%s has consented to the privacy policy" % registered_user_id)
+            yield self.store.user_set_consent_version(
+                registered_user_id, self.hs.config.user_consent_version,
+            )
+
         defer.returnValue((200, return_dict))
 
     def on_OPTIONS(self, _):
@@ -17,7 +17,7 @@ import logging
 
 from twisted.internet import defer
 
-from synapse.api.errors import Codes, SynapseError
+from synapse.api.errors import Codes, NotFoundError, SynapseError
 from synapse.http.servlet import (
     RestServlet,
     parse_json_object_from_request,
@@ -208,9 +208,24 @@ class RoomKeysServlet(RestServlet):
             user_id, version, room_id, session_id
         )
 
+        # Convert room_keys to the right format to return.
         if session_id:
-            room_keys = room_keys['rooms'][room_id]['sessions'][session_id]
+            # If the client requests a specific session, but that session was
+            # not backed up, then return an M_NOT_FOUND.
+            if room_keys['rooms'] == {}:
+                raise NotFoundError("No room_keys found")
+            else:
+                room_keys = room_keys['rooms'][room_id]['sessions'][session_id]
         elif room_id:
-            room_keys = room_keys['rooms'][room_id]
+            # If the client requests all sessions from a room, but no sessions
+            # are found, then return an empty result rather than an error, so
+            # that clients don't have to handle an error condition, and an
+            # empty result is valid. (Similarly if the client requests all
+            # sessions from the backup, but in that case, room_keys is already
+            # in the right format, so we don't need to do anything about it.)
+            if room_keys['rooms'] == {}:
+                room_keys = {'sessions': {}}
+            else:
+                room_keys = room_keys['rooms'][room_id]
 
         defer.returnValue((200, room_keys))
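The branch above gives the endpoint three distinct outcomes depending on how specific the request was. A sketch of the resulting response shapes, with illustrative data rather than real session payloads:

    # Whole backup requested -> already in the right shape, returned as-is:
    full_backup = {"rooms": {"!r:hs": {"sessions": {"s1": {"ciphertext": "..."}}}}}

    # session_id given but absent from the backup -> 404 M_NOT_FOUND
    # room_id given, nothing backed up -> 200 with an empty, valid result:
    empty_room = {"sessions": {}}
    # room_id given, sessions found -> 200 with just that room's entry:
    one_room = full_backup["rooms"]["!r:hs"]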
synapse/rest/client/v2_alpha/room_upgrade_rest_servlet.py (new file, 89 lines)
@@ -0,0 +1,89 @@
+# -*- coding: utf-8 -*-
+# Copyright 2016 OpenMarket Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+
+from twisted.internet import defer
+
+from synapse.api.constants import KNOWN_ROOM_VERSIONS
+from synapse.api.errors import Codes, SynapseError
+from synapse.http.servlet import (
+    RestServlet,
+    assert_params_in_dict,
+    parse_json_object_from_request,
+)
+
+from ._base import client_v2_patterns
+
+logger = logging.getLogger(__name__)
+
+
+class RoomUpgradeRestServlet(RestServlet):
+    """Handler for room uprade requests.
+
+    Handles requests of the form:
+
+        POST /_matrix/client/r0/rooms/$roomid/upgrade HTTP/1.1
+        Content-Type: application/json
+
+        {
+            "new_version": "2",
+        }
+
+    Creates a new room and shuts down the old one. Returns the ID of the new room.
+
+    Args:
+        hs (synapse.server.HomeServer):
+    """
+    PATTERNS = client_v2_patterns(
+        # /rooms/$roomid/upgrade
+        "/rooms/(?P<room_id>[^/]*)/upgrade$",
+        v2_alpha=False,
+    )
+
+    def __init__(self, hs):
+        super(RoomUpgradeRestServlet, self).__init__()
+        self._hs = hs
+        self._room_creation_handler = hs.get_room_creation_handler()
+        self._auth = hs.get_auth()
+
+    @defer.inlineCallbacks
+    def on_POST(self, request, room_id):
+        requester = yield self._auth.get_user_by_req(request)
+
+        content = parse_json_object_from_request(request)
+        assert_params_in_dict(content, ("new_version", ))
+        new_version = content["new_version"]
+
+        if new_version not in KNOWN_ROOM_VERSIONS:
+            raise SynapseError(
+                400,
+                "Your homeserver does not support this room version",
+                Codes.UNSUPPORTED_ROOM_VERSION,
+            )
+
+        new_room_id = yield self._room_creation_handler.upgrade_room(
+            requester, room_id, new_version
+        )
+
+        ret = {
+            "replacement_room": new_room_id,
+        }
+
+        defer.returnValue((200, ret))
+
+
+def register_servlets(hs, http_server):
+    RoomUpgradeRestServlet(hs).register(http_server)
@@ -137,12 +137,15 @@ class ConsentResource(Resource):
             request (twisted.web.http.Request):
         """
-        version = parse_string(request, "v",
-                               default=self._default_consent_version)
-        username = parse_string(request, "u", required=True)
-        userhmac = parse_string(request, "h", required=True, encoding=None)
+        version = parse_string(request, "v", default=self._default_consent_version)
+        username = parse_string(request, "u", required=False, default="")
+        userhmac = None
+        has_consented = False
+        public_version = username == ""
+        if not public_version:
+            userhmac_bytes = parse_string(request, "h", required=True, encoding=None)
 
-        self._check_hash(username, userhmac)
+            self._check_hash(username, userhmac_bytes)
 
         if username.startswith('@'):
             qualified_user_id = username
@@ -153,11 +156,17 @@ class ConsentResource(Resource):
             if u is None:
                 raise NotFoundError("Unknown user")
 
+            has_consented = u["consent_version"] == version
+            userhmac = userhmac_bytes.decode("ascii")
+
         try:
             self._render_template(
                 request, "%s.html" % (version,),
-                user=username, userhmac=userhmac, version=version,
-                has_consented=(u["consent_version"] == version),
+                user=username,
+                userhmac=userhmac,
+                version=version,
+                has_consented=has_consented,
+                public_version=public_version,
             )
         except TemplateNotFound:
             raise NotFoundError("Unknown policy version")
@@ -223,7 +232,7 @@ class ConsentResource(Resource):
             key=self._hmac_secret,
             msg=userid.encode('utf-8'),
             digestmod=sha256,
-        ).hexdigest()
+        ).hexdigest().encode('ascii')
 
         if not compare_digest(want_mac, userhmac):
             raise SynapseError(http_client.FORBIDDEN, "HMAC incorrect")
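The trailing `.encode('ascii')` is needed because `hexdigest()` returns text, while the `h` parameter, parsed with `encoding=None`, arrives as bytes; Python 3's `compare_digest` refuses to mix the two. In miniature:

    from hmac import compare_digest

    want_mac = "0a1b2c"       # hexdigest() returns str
    userhmac = b"0a1b2c"      # parse_string(..., encoding=None) yields bytes
    try:
        compare_digest(want_mac, userhmac)
    except TypeError as e:
        print(e)              # py3: both arguments must be str, or both bytes
    compare_digest(want_mac.encode("ascii"), userhmac)  # the fix above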
@@ -1,14 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2015, 2016 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
@@ -1,92 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2014-2016 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import logging
-
-from canonicaljson import encode_canonical_json
-from signedjson.sign import sign_json
-from unpaddedbase64 import encode_base64
-
-from OpenSSL import crypto
-from twisted.web.resource import Resource
-
-from synapse.http.server import respond_with_json_bytes
-
-logger = logging.getLogger(__name__)
-
-
-class LocalKey(Resource):
-    """HTTP resource containing encoding the TLS X.509 certificate and NACL
-    signature verification keys for this server::
-
-        GET /key HTTP/1.1
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-        {
-            "server_name": "this.server.example.com"
-            "verify_keys": {
-                "algorithm:version": # base64 encoded NACL verification key.
-            },
-            "tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
-            "signatures": {
-                "this.server.example.com": {
-                    "algorithm:version": # NACL signature for this server.
-                }
-            }
-        }
-    """
-
-    def __init__(self, hs):
-        self.response_body = encode_canonical_json(
-            self.response_json_object(hs.config)
-        )
-        Resource.__init__(self)
-
-    @staticmethod
-    def response_json_object(server_config):
-        verify_keys = {}
-        for key in server_config.signing_key:
-            verify_key_bytes = key.verify_key.encode()
-            key_id = "%s:%s" % (key.alg, key.version)
-            verify_keys[key_id] = encode_base64(verify_key_bytes)
-
-        x509_certificate_bytes = crypto.dump_certificate(
-            crypto.FILETYPE_ASN1,
-            server_config.tls_certificate
-        )
-        json_object = {
-            u"server_name": server_config.server_name,
-            u"verify_keys": verify_keys,
-            u"tls_certificate": encode_base64(x509_certificate_bytes)
-        }
-        for key in server_config.signing_key:
-            json_object = sign_json(
-                json_object,
-                server_config.server_name,
-                key,
-            )
-
-        return json_object
-
-    def render_GET(self, request):
-        return respond_with_json_bytes(
-            request, 200, self.response_body,
-        )
-
-    def getChild(self, name, request):
-        if name == b'':
-            return self
@@ -1,68 +0,0 @@
-# Copyright 2015, 2016 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from pydenticon import Generator
-
-from twisted.web.resource import Resource
-
-from synapse.http.servlet import parse_integer
-
-FOREGROUND = [
-    "rgb(45,79,255)",
-    "rgb(254,180,44)",
-    "rgb(226,121,234)",
-    "rgb(30,179,253)",
-    "rgb(232,77,65)",
-    "rgb(49,203,115)",
-    "rgb(141,69,170)"
-]
-
-BACKGROUND = "rgb(224,224,224)"
-SIZE = 5
-
-
-class IdenticonResource(Resource):
-    isLeaf = True
-
-    def __init__(self):
-        Resource.__init__(self)
-        self.generator = Generator(
-            SIZE, SIZE, foreground=FOREGROUND, background=BACKGROUND,
-        )
-
-    def generate_identicon(self, name, width, height):
-        v_padding = width % SIZE
-        h_padding = height % SIZE
-        top_padding = v_padding // 2
-        left_padding = h_padding // 2
-        bottom_padding = v_padding - top_padding
-        right_padding = h_padding - left_padding
-        width -= v_padding
-        height -= h_padding
-        padding = (top_padding, bottom_padding, left_padding, right_padding)
-        identicon = self.generator.generate(
-            name, width, height, padding=padding
-        )
-        return identicon
-
-    def render_GET(self, request):
-        name = "/".join(request.postpath)
-        width = parse_integer(request, "width", default=96)
-        height = parse_integer(request, "height", default=96)
-        identicon_bytes = self.generate_identicon(name, width, height)
-        request.setHeader(b"Content-Type", b"image/png")
-        request.setHeader(
-            b"Cache-Control", b"public,max-age=86400,s-maxage=86400"
-        )
-        return identicon_bytes
@@ -45,7 +45,6 @@ from ._base import FileInfo, respond_404, respond_with_responder
 from .config_resource import MediaConfigResource
 from .download_resource import DownloadResource
 from .filepath import MediaFilePaths
-from .identicon_resource import IdenticonResource
 from .media_storage import MediaStorage
 from .preview_url_resource import PreviewUrlResource
 from .storage_provider import StorageProviderWrapper
@@ -769,7 +768,6 @@ class MediaRepositoryResource(Resource):
         self.putChild(b"thumbnail", ThumbnailResource(
             hs, media_repo, media_repo.media_storage,
         ))
-        self.putChild(b"identicon", IdenticonResource())
         if hs.config.url_preview_enabled:
             self.putChild(b"preview_url", PreviewUrlResource(
                 hs, media_repo, media_repo.media_storage,
@@ -12,6 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+
 import cgi
 import datetime
 import errno
@@ -24,6 +25,7 @@ import shutil
 import sys
 import traceback
 
+import six
 from six import string_types
 from six.moves import urllib_parse as urlparse
 
@@ -98,7 +100,7 @@ class PreviewUrlResource(Resource):
         # XXX: if get_user_by_req fails, what should we do in an async render?
         requester = yield self.auth.get_user_by_req(request)
         url = parse_string(request, "url")
-        if "ts" in request.args:
+        if b"ts" in request.args:
             ts = parse_integer(request, "ts")
         else:
             ts = self.clock.time_msec()
@@ -180,7 +182,12 @@ class PreviewUrlResource(Resource):
             cache_result["expires_ts"] > ts and
             cache_result["response_code"] / 100 == 2
         ):
-            defer.returnValue(cache_result["og"])
+            # It may be stored as text in the database, not as bytes (such as
+            # PostgreSQL). If so, encode it back before handing it on.
+            og = cache_result["og"]
+            if isinstance(og, six.text_type):
+                og = og.encode('utf8')
+            defer.returnValue(og)
             return
 
         media_info = yield self._download_url(url, user)
@@ -213,14 +220,17 @@ class PreviewUrlResource(Resource):
         elif _is_html(media_info['media_type']):
             # TODO: somehow stop a big HTML tree from exploding synapse's RAM
 
-            file = open(media_info['filename'])
-            body = file.read()
-            file.close()
+            with open(media_info['filename'], 'rb') as file:
+                body = file.read()
 
             # clobber the encoding from the content-type, or default to utf-8
             # XXX: this overrides any <meta/> or XML charset headers in the body
             # which may pose problems, but so far seems to work okay.
-            match = re.match(r'.*; *charset=(.*?)(;|$)', media_info['media_type'], re.I)
+            match = re.match(
+                r'.*; *charset="?(.*?)"?(;|$)',
+                media_info['media_type'],
+                re.I
+            )
             encoding = match.group(1) if match else "utf-8"
 
             og = decode_and_calc_og(body, media_info['uri'], encoding)
|
|||||||
import logging
|
import logging
|
||||||
|
|
||||||
from twisted.enterprise import adbapi
|
from twisted.enterprise import adbapi
|
||||||
|
from twisted.mail.smtp import sendmail
|
||||||
from twisted.web.client import BrowserLikePolicyForHTTPS
|
from twisted.web.client import BrowserLikePolicyForHTTPS
|
||||||
|
|
||||||
from synapse.api.auth import Auth
|
from synapse.api.auth import Auth
|
||||||
@ -174,6 +175,7 @@ class HomeServer(object):
|
|||||||
'message_handler',
|
'message_handler',
|
||||||
'pagination_handler',
|
'pagination_handler',
|
||||||
'room_context_handler',
|
'room_context_handler',
|
||||||
|
'sendmail',
|
||||||
]
|
]
|
||||||
|
|
||||||
# This is overridden in derived application classes
|
# This is overridden in derived application classes
|
||||||
@ -269,6 +271,9 @@ class HomeServer(object):
|
|||||||
def build_room_creation_handler(self):
|
def build_room_creation_handler(self):
|
||||||
return RoomCreationHandler(self)
|
return RoomCreationHandler(self)
|
||||||
|
|
||||||
|
def build_sendmail(self):
|
||||||
|
return sendmail
|
||||||
|
|
||||||
def build_state_handler(self):
|
def build_state_handler(self):
|
||||||
return StateHandler(self)
|
return StateHandler(self)
|
||||||
|
|
||||||
|
@ -7,6 +7,9 @@ import synapse.handlers.auth
|
|||||||
import synapse.handlers.deactivate_account
|
import synapse.handlers.deactivate_account
|
||||||
import synapse.handlers.device
|
import synapse.handlers.device
|
||||||
import synapse.handlers.e2e_keys
|
import synapse.handlers.e2e_keys
|
||||||
|
import synapse.handlers.room
|
||||||
|
import synapse.handlers.room_member
|
||||||
|
import synapse.handlers.message
|
||||||
import synapse.handlers.set_password
|
import synapse.handlers.set_password
|
||||||
import synapse.rest.media.v1.media_repository
|
import synapse.rest.media.v1.media_repository
|
||||||
import synapse.server_notices.server_notices_manager
|
import synapse.server_notices.server_notices_manager
|
||||||
@ -50,6 +53,9 @@ class HomeServer(object):
|
|||||||
def get_room_creation_handler(self) -> synapse.handlers.room.RoomCreationHandler:
|
def get_room_creation_handler(self) -> synapse.handlers.room.RoomCreationHandler:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
def get_room_member_handler(self) -> synapse.handlers.room_member.RoomMemberHandler:
|
||||||
|
pass
|
||||||
|
|
||||||
def get_event_creation_handler(self) -> synapse.handlers.message.EventCreationHandler:
|
def get_event_creation_handler(self) -> synapse.handlers.message.EventCreationHandler:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
@@ -261,7 +261,7 @@ class StateHandler(object):
         logger.debug("calling resolve_state_groups from compute_event_context")
 
         entry = yield self.resolve_state_groups_for_events(
-            event.room_id, [e for e, _ in event.prev_events],
+            event.room_id, event.prev_event_ids(),
         )
 
         prev_state_ids = entry.state
@@ -607,7 +607,7 @@ def resolve_events_with_store(room_version, state_sets, event_map, state_res_store):
         return v1.resolve_events_with_store(
             state_sets, event_map, state_res_store.get_events,
         )
-    elif room_version == RoomVersions.VDH_TEST:
+    elif room_version in (RoomVersions.VDH_TEST, RoomVersions.STATE_V2_TEST):
         return v2.resolve_events_with_store(
             state_sets, event_map, state_res_store,
         )
@@ -53,6 +53,10 @@ def resolve_events_with_store(state_sets, event_map, state_res_store):
 
     logger.debug("Computing conflicted state")
 
+    # We use event_map as a cache, so if its None we need to initialize it
+    if event_map is None:
+        event_map = {}
+
     # First split up the un/conflicted state
     unconflicted_state, conflicted_state = _seperate(state_sets)
 
@@ -155,7 +159,7 @@ def _get_power_level_for_sender(event_id, event_map, state_res_store):
     event = yield _get_event(event_id, event_map, state_res_store)
 
     pl = None
-    for aid, _ in event.auth_events:
+    for aid in event.auth_event_ids():
         aev = yield _get_event(aid, event_map, state_res_store)
         if (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
             pl = aev
@@ -163,7 +167,7 @@ def _get_power_level_for_sender(event_id, event_map, state_res_store):
 
     if pl is None:
         # Couldn't find power level. Check if they're the creator of the room
-        for aid, _ in event.auth_events:
+        for aid in event.auth_event_ids():
             aev = yield _get_event(aid, event_map, state_res_store)
             if (aev.type, aev.state_key) == (EventTypes.Create, ""):
                 if aev.content.get("creator") == event.sender:
@@ -295,7 +299,7 @@ def _add_event_and_auth_chain_to_graph(graph, event_id, event_map,
         graph.setdefault(eid, set())
 
         event = yield _get_event(eid, event_map, state_res_store)
-        for aid, _ in event.auth_events:
+        for aid in event.auth_event_ids():
             if aid in auth_diff:
                 if aid not in graph:
                     state.append(aid)
@@ -365,7 +369,7 @@ def _iterative_auth_checks(event_ids, base_state, event_map, state_res_store):
         event = event_map[event_id]
 
         auth_events = {}
-        for aid, _ in event.auth_events:
+        for aid in event.auth_event_ids():
             ev = yield _get_event(aid, event_map, state_res_store)
 
             if ev.rejected_reason is None:
@@ -413,9 +417,9 @@ def _mainline_sort(event_ids, resolved_power_event_id, event_map,
     while pl:
         mainline.append(pl)
         pl_ev = yield _get_event(pl, event_map, state_res_store)
-        auth_events = pl_ev.auth_events
+        auth_events = pl_ev.auth_event_ids()
         pl = None
-        for aid, _ in auth_events:
+        for aid in auth_events:
             ev = yield _get_event(aid, event_map, state_res_store)
             if (ev.type, ev.state_key) == (EventTypes.PowerLevels, ""):
                 pl = aid
@@ -460,10 +464,10 @@ def _get_mainline_depth_for_event(event, mainline_map, event_map, state_res_store):
     if depth is not None:
         defer.returnValue(depth)
 
-    auth_events = event.auth_events
+    auth_events = event.auth_event_ids()
     event = None
 
-    for aid, _ in auth_events:
+    for aid in auth_events:
         aev = yield _get_event(aid, event_map, state_res_store)
         if (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
             event = aev
|
|||||||
|
|
||||||
from synapse.api.errors import StoreError
|
from synapse.api.errors import StoreError
|
||||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||||
|
from synapse.storage.background_updates import BackgroundUpdateStore
|
||||||
from synapse.util.caches.descriptors import cached, cachedInlineCallbacks, cachedList
|
from synapse.util.caches.descriptors import cached, cachedInlineCallbacks, cachedList
|
||||||
|
|
||||||
from ._base import Cache, SQLBaseStore, db_to_json
|
from ._base import Cache, db_to_json
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES = (
|
||||||
|
"drop_device_list_streams_non_unique_indexes"
|
||||||
|
)
|
||||||
|
|
||||||
class DeviceStore(SQLBaseStore):
|
|
||||||
|
class DeviceStore(BackgroundUpdateStore):
|
||||||
def __init__(self, db_conn, hs):
|
def __init__(self, db_conn, hs):
|
||||||
super(DeviceStore, self).__init__(db_conn, hs)
|
super(DeviceStore, self).__init__(db_conn, hs)
|
||||||
|
|
||||||
@ -52,6 +57,30 @@ class DeviceStore(SQLBaseStore):
|
|||||||
columns=["user_id", "device_id"],
|
columns=["user_id", "device_id"],
|
||||||
)
|
)
|
||||||
|
|
||||||
|
# create a unique index on device_lists_remote_cache
|
||||||
|
self.register_background_index_update(
|
||||||
|
"device_lists_remote_cache_unique_idx",
|
||||||
|
index_name="device_lists_remote_cache_unique_id",
|
||||||
|
table="device_lists_remote_cache",
|
||||||
|
columns=["user_id", "device_id"],
|
||||||
|
unique=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
# And one on device_lists_remote_extremeties
|
||||||
|
self.register_background_index_update(
|
||||||
|
"device_lists_remote_extremeties_unique_idx",
|
||||||
|
index_name="device_lists_remote_extremeties_unique_idx",
|
||||||
|
table="device_lists_remote_extremeties",
|
||||||
|
columns=["user_id"],
|
||||||
|
unique=True,
|
||||||
|
)
|
||||||
|
|
||||||
|
# once they complete, we can remove the old non-unique indexes.
|
||||||
|
self.register_background_update_handler(
|
||||||
|
DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES,
|
||||||
|
self._drop_device_list_streams_non_unique_indexes,
|
||||||
|
)
|
||||||
|
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
def store_device(self, user_id, device_id,
|
def store_device(self, user_id, device_id,
|
||||||
initial_device_display_name):
|
initial_device_display_name):
|
||||||
@ -239,7 +268,19 @@ class DeviceStore(SQLBaseStore):
|
|||||||
|
|
||||||
def update_remote_device_list_cache_entry(self, user_id, device_id, content,
|
def update_remote_device_list_cache_entry(self, user_id, device_id, content,
|
||||||
stream_id):
|
stream_id):
|
||||||
"""Updates a single user's device in the cache.
|
"""Updates a single device in the cache of a remote user's devicelist.
|
||||||
|
|
||||||
|
Note: assumes that we are the only thread that can be updating this user's
|
||||||
|
device list.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
user_id (str): User to update device list for
|
||||||
|
device_id (str): ID of decivice being updated
|
||||||
|
content (dict): new data on this device
|
||||||
|
stream_id (int): the version of the device list
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Deferred[None]
|
||||||
"""
|
"""
|
||||||
return self.runInteraction(
|
return self.runInteraction(
|
||||||
"update_remote_device_list_cache_entry",
|
"update_remote_device_list_cache_entry",
|
||||||
@ -272,7 +313,11 @@ class DeviceStore(SQLBaseStore):
|
|||||||
},
|
},
|
||||||
values={
|
values={
|
||||||
"content": json.dumps(content),
|
"content": json.dumps(content),
|
||||||
}
|
},
|
||||||
|
|
||||||
|
# we don't need to lock, because we assume we are the only thread
|
||||||
|
# updating this user's devices.
|
||||||
|
lock=False,
|
||||||
)
|
)
|
||||||
|
|
||||||
txn.call_after(self._get_cached_user_device.invalidate, (user_id, device_id,))
|
txn.call_after(self._get_cached_user_device.invalidate, (user_id, device_id,))
|
||||||
@@ -289,11 +334,26 @@ class DeviceStore(SQLBaseStore):
             },
             values={
                 "stream_id": stream_id,
-            }
+            },
+
+            # again, we can assume we are the only thread updating this user's
+            # extremity.
+            lock=False,
         )
 
     def update_remote_device_list_cache(self, user_id, devices, stream_id):
-        """Replace the cache of the remote user's devices.
+        """Replace the entire cache of the remote user's devices.
+
+        Note: assumes that we are the only thread that can be updating this user's
+        device list.
+
+        Args:
+            user_id (str): User to update device list for
+            devices (list[dict]): list of device objects supplied over federation
+            stream_id (int): the version of the device list
+
+        Returns:
+            Deferred[None]
         """
         return self.runInteraction(
             "update_remote_device_list_cache",
|
|||||||
},
|
},
|
||||||
values={
|
values={
|
||||||
"stream_id": stream_id,
|
"stream_id": stream_id,
|
||||||
}
|
},
|
||||||
|
|
||||||
|
# we don't need to lock, because we can assume we are the only thread
|
||||||
|
# updating this user's extremity.
|
||||||
|
lock=False,
|
||||||
)
|
)
|
||||||
|
|
||||||
def get_devices_by_remote(self, destination, from_stream_id):
|
def get_devices_by_remote(self, destination, from_stream_id):
|
||||||
@ -589,10 +653,14 @@ class DeviceStore(SQLBaseStore):
|
|||||||
combined list of changes to devices, and which destinations need to be
|
combined list of changes to devices, and which destinations need to be
|
||||||
poked. `destination` may be None if no destinations need to be poked.
|
poked. `destination` may be None if no destinations need to be poked.
|
||||||
"""
|
"""
|
||||||
|
# We do a group by here as there can be a large number of duplicate
|
||||||
|
# entries, since we throw away device IDs.
|
||||||
sql = """
|
sql = """
|
||||||
SELECT stream_id, user_id, destination FROM device_lists_stream
|
SELECT MAX(stream_id) AS stream_id, user_id, destination
|
||||||
|
FROM device_lists_stream
|
||||||
LEFT JOIN device_lists_outbound_pokes USING (stream_id, user_id, device_id)
|
LEFT JOIN device_lists_outbound_pokes USING (stream_id, user_id, device_id)
|
||||||
WHERE ? < stream_id AND stream_id <= ?
|
WHERE ? < stream_id AND stream_id <= ?
|
||||||
|
GROUP BY user_id, destination
|
||||||
"""
|
"""
|
||||||
return self._execute(
|
return self._execute(
|
||||||
"get_all_device_list_changes_for_remotes", None,
|
"get_all_device_list_changes_for_remotes", None,
|
||||||
@ -718,3 +786,19 @@ class DeviceStore(SQLBaseStore):
|
|||||||
"_prune_old_outbound_device_pokes",
|
"_prune_old_outbound_device_pokes",
|
||||||
_prune_txn,
|
_prune_txn,
|
||||||
)
|
)
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def _drop_device_list_streams_non_unique_indexes(self, progress, batch_size):
|
||||||
|
def f(conn):
|
||||||
|
txn = conn.cursor()
|
||||||
|
txn.execute(
|
||||||
|
"DROP INDEX IF EXISTS device_lists_remote_cache_id"
|
||||||
|
)
|
||||||
|
txn.execute(
|
||||||
|
"DROP INDEX IF EXISTS device_lists_remote_extremeties_id"
|
||||||
|
)
|
||||||
|
txn.close()
|
||||||
|
|
||||||
|
yield self.runWithConnection(f)
|
||||||
|
yield self._end_background_update(DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES)
|
||||||
|
defer.returnValue(1)
|
||||||
|
@ -118,6 +118,11 @@ class EndToEndRoomKeyStore(SQLBaseStore):
|
|||||||
these room keys.
|
these room keys.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
try:
|
||||||
|
version = int(version)
|
||||||
|
except ValueError:
|
||||||
|
defer.returnValue({'rooms': {}})
|
||||||
|
|
||||||
keyvalues = {
|
keyvalues = {
|
||||||
"user_id": user_id,
|
"user_id": user_id,
|
||||||
"version": version,
|
"version": version,
|
||||||
@ -212,14 +217,23 @@ class EndToEndRoomKeyStore(SQLBaseStore):
|
|||||||
Raises:
|
Raises:
|
||||||
StoreError: with code 404 if there are no e2e_room_keys_versions present
|
StoreError: with code 404 if there are no e2e_room_keys_versions present
|
||||||
Returns:
|
Returns:
|
||||||
A deferred dict giving the info metadata for this backup version
|
A deferred dict giving the info metadata for this backup version, with
|
||||||
|
fields including:
|
||||||
|
version(str)
|
||||||
|
algorithm(str)
|
||||||
|
auth_data(object): opaque dict supplied by the client
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def _get_e2e_room_keys_version_info_txn(txn):
|
def _get_e2e_room_keys_version_info_txn(txn):
|
||||||
if version is None:
|
if version is None:
|
||||||
this_version = self._get_current_version(txn, user_id)
|
this_version = self._get_current_version(txn, user_id)
|
||||||
else:
|
else:
|
||||||
this_version = version
|
try:
|
||||||
|
this_version = int(version)
|
||||||
|
except ValueError:
|
||||||
|
# Our versions are all ints so if we can't convert it to an integer,
|
||||||
|
# it isn't there.
|
||||||
|
raise StoreError(404, "No row found")
|
||||||
|
|
||||||
result = self._simple_select_one_txn(
|
result = self._simple_select_one_txn(
|
||||||
txn,
|
txn,
|
||||||
@ -236,6 +250,7 @@ class EndToEndRoomKeyStore(SQLBaseStore):
|
|||||||
),
|
),
|
||||||
)
|
)
|
||||||
result["auth_data"] = json.loads(result["auth_data"])
|
result["auth_data"] = json.loads(result["auth_data"])
|
||||||
|
result["version"] = str(result["version"])
|
||||||
return result
|
return result
|
||||||
|
|
||||||
return self.runInteraction(
|
return self.runInteraction(
|
||||||
|
@@ -40,7 +40,10 @@ class EndToEndKeyStore(SQLBaseStore):
             allow_none=True,
         )
 
-        new_key_json = encode_canonical_json(device_keys)
+        # In py3 we need old_key_json to match new_key_json type. The DB
+        # returns unicode while encode_canonical_json returns bytes.
+        new_key_json = encode_canonical_json(device_keys).decode("utf-8")
+
         if old_key_json == new_key_json:
             return False
 
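The noop check compares the stored key JSON against a freshly encoded copy. The database hands back text while `encode_canonical_json` returns bytes, and on Python 3 bytes never equal str, so every upload looked like a change and triggered spurious device-list update notifications. In miniature:

    from canonicaljson import encode_canonical_json

    device_keys = {"algorithms": ["m.olm.v1"]}
    old_key_json = '{"algorithms":["m.olm.v1"]}'   # the DB returns text
    print(encode_canonical_json(device_keys) == old_key_json)  # False on py3
    print(encode_canonical_json(device_keys).decode("utf-8")
          == old_key_json)                         # True -- the fix above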
@@ -477,7 +477,7 @@ class EventFederationStore(EventFederationWorkerStore):
                 "is_state": False,
             }
             for ev in events
-            for e_id, _ in ev.prev_events
+            for e_id in ev.prev_event_ids()
         ],
     )
 
@@ -510,7 +510,7 @@ class EventFederationStore(EventFederationWorkerStore):
 
         txn.executemany(query, [
             (e_id, ev.room_id, e_id, ev.room_id, e_id, ev.room_id, False)
-            for ev in events for e_id, _ in ev.prev_events
+            for ev in events for e_id in ev.prev_event_ids()
             if not ev.internal_metadata.is_outlier()
         ])
 
@ -38,6 +38,7 @@ from synapse.state import StateResolutionStore
|
|||||||
from synapse.storage.background_updates import BackgroundUpdateStore
|
from synapse.storage.background_updates import BackgroundUpdateStore
|
||||||
from synapse.storage.event_federation import EventFederationStore
|
from synapse.storage.event_federation import EventFederationStore
|
||||||
from synapse.storage.events_worker import EventsWorkerStore
|
from synapse.storage.events_worker import EventsWorkerStore
|
||||||
|
from synapse.storage.state import StateGroupWorkerStore
|
||||||
from synapse.types import RoomStreamToken, get_domain_from_id
|
from synapse.types import RoomStreamToken, get_domain_from_id
|
||||||
from synapse.util import batch_iter
|
from synapse.util import batch_iter
|
||||||
from synapse.util.async_helpers import ObservableDeferred
|
from synapse.util.async_helpers import ObservableDeferred
|
||||||
@ -205,7 +206,8 @@ def _retry_on_integrity_error(func):
|
|||||||
|
|
||||||
# inherits from EventFederationStore so that we can call _update_backward_extremities
|
# inherits from EventFederationStore so that we can call _update_backward_extremities
|
||||||
# and _handle_mult_prev_events (though arguably those could both be moved in here)
|
# and _handle_mult_prev_events (though arguably those could both be moved in here)
|
||||||
class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore):
|
class EventsStore(StateGroupWorkerStore, EventFederationStore, EventsWorkerStore,
|
||||||
|
BackgroundUpdateStore):
|
||||||
EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
|
EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
|
||||||
EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
|
EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
|
||||||
|
|
||||||
@@ -414,7 +416,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
                 )
                 if len_1:
                     all_single_prev_not_state = all(
-                        len(event.prev_events) == 1
+                        len(event.prev_event_ids()) == 1
                         and not event.is_state()
                         for event, ctx in ev_ctx_rm
                     )
@@ -438,7 +440,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
                     # guess this by looking at the prev_events and checking
                     # if they match the current forward extremities.
                     for ev, _ in ev_ctx_rm:
-                        prev_event_ids = set(e for e, _ in ev.prev_events)
+                        prev_event_ids = set(ev.prev_event_ids())
                         if latest_event_ids == prev_event_ids:
                             state_delta_reuse_delta_counter.inc()
                             break
@@ -549,7 +551,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
         result.difference_update(
             e_id
             for event in new_events
-            for e_id, _ in event.prev_events
+            for e_id in event.prev_event_ids()
         )
 
         # Finally, remove any events which are prev_events of any existing events.
@@ -867,7 +869,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
                     "auth_id": auth_id,
                 }
                 for event, _ in events_and_contexts
-                for auth_id, _ in event.auth_events
+                for auth_id in event.auth_event_ids()
                 if event.is_state()
             ],
         )
@@ -2034,54 +2036,36 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
 
         logger.info("[purge] finding redundant state groups")
 
-        # Get all state groups that are only referenced by events that are
-        # to be deleted.
-        # This works by first getting state groups that we may want to delete,
-        # joining against event_to_state_groups to get events that use that
-        # state group, then left joining against events_to_purge again. Any
-        # state group where the left join produce *no nulls* are referenced
-        # only by events that are going to be purged.
+        # Get all state groups that are referenced by events that are to be
+        # deleted. We then go and check if they are referenced by other events
+        # or state groups, and if not we delete them.
         txn.execute("""
-            SELECT state_group FROM
-            (
-                SELECT DISTINCT state_group FROM events_to_purge
-                INNER JOIN event_to_state_groups USING (event_id)
-            ) AS sp
-            INNER JOIN event_to_state_groups USING (state_group)
-            LEFT JOIN events_to_purge AS ep USING (event_id)
-            GROUP BY state_group
-            HAVING SUM(CASE WHEN ep.event_id IS NULL THEN 1 ELSE 0 END) = 0
+            SELECT DISTINCT state_group FROM events_to_purge
+            INNER JOIN event_to_state_groups USING (event_id)
         """)
 
-        state_rows = txn.fetchall()
-        logger.info("[purge] found %i redundant state groups", len(state_rows))
-
-        # make a set of the redundant state groups, so that we can look them up
-        # efficiently
-        state_groups_to_delete = set([sg for sg, in state_rows])
-
-        # Now we get all the state groups that rely on these state groups
-        logger.info("[purge] finding state groups which depend on redundant"
-                    " state groups")
-        remaining_state_groups = []
-        for i in range(0, len(state_rows), 100):
-            chunk = [sg for sg, in state_rows[i:i + 100]]
-            # look for state groups whose prev_state_group is one we are about
-            # to delete
-            rows = self._simple_select_many_txn(
-                txn,
-                table="state_group_edges",
-                column="prev_state_group",
-                iterable=chunk,
-                retcols=["state_group"],
-                keyvalues={},
-            )
-            remaining_state_groups.extend(
-                row["state_group"] for row in rows
-
-                # exclude state groups we are about to delete: no point in
-                # updating them
-                if row["state_group"] not in state_groups_to_delete
-            )
+        referenced_state_groups = set(sg for sg, in txn)
+        logger.info(
+            "[purge] found %i referenced state groups",
+            len(referenced_state_groups),
+        )
+
+        logger.info("[purge] finding state groups that can be deleted")
+
+        state_groups_to_delete, remaining_state_groups = (
+            self._find_unreferenced_groups_during_purge(
+                txn, referenced_state_groups,
+            )
+        )
+
+        logger.info(
+            "[purge] found %i state groups to delete",
+            len(state_groups_to_delete),
+        )
+
+        logger.info(
+            "[purge] de-delta-ing %i remaining state groups",
+            len(remaining_state_groups),
+        )
 
         # Now we turn the state groups that reference to-be-deleted state
@@ -2127,11 +2111,11 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
         logger.info("[purge] removing redundant state groups")
         txn.executemany(
             "DELETE FROM state_groups_state WHERE state_group = ?",
-            state_rows
+            ((sg,) for sg in state_groups_to_delete),
         )
         txn.executemany(
             "DELETE FROM state_groups WHERE id = ?",
-            state_rows
+            ((sg,) for sg in state_groups_to_delete),
        )
 
        logger.info("[purge] removing events from event_to_state_groups")
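executemany consumes an iterable of parameter tuples, so the bare state-group IDs are wrapped as one-element tuples above. A self-contained illustration of the same shape with the stdlib sqlite3 module:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE state_groups (id INTEGER)")
    conn.executemany(
        "DELETE FROM state_groups WHERE id = ?",
        ((sg,) for sg in {101, 102, 103}),  # each parameter set is a 1-tuple
    )
    conn.close()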
@@ -2227,6 +2211,85 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
 
         logger.info("[purge] done")
 
+    def _find_unreferenced_groups_during_purge(self, txn, state_groups):
+        """Used when purging history to figure out which state groups can be
+        deleted and which need to be de-delta'ed (due to one of its prev groups
+        being scheduled for deletion).
+
+        Args:
+            txn
+            state_groups (set[int]): Set of state groups referenced by events
+                that are going to be deleted.
+
+        Returns:
+            tuple[set[int], set[int]]: The set of state groups that can be
+            deleted and the set of state groups that need to be de-delta'ed
+        """
+        # Graph of state group -> previous group
+        graph = {}
+
+        # Set of state groups that we have found to be referenced by events
+        referenced_groups = set()
+
+        # Set of state groups we've already seen
+        state_groups_seen = set(state_groups)
+
+        # Set of state groups to handle next.
+        next_to_search = set(state_groups)
+        while next_to_search:
+            # We bound size of groups we're looking up at once, to stop the
+            # SQL query getting too big
+            if len(next_to_search) < 100:
+                current_search = next_to_search
+                next_to_search = set()
+            else:
+                current_search = set(itertools.islice(next_to_search, 100))
+                next_to_search -= current_search
+
+            # Check if state groups are referenced
+            sql = """
+                SELECT DISTINCT state_group FROM event_to_state_groups
+                LEFT JOIN events_to_purge AS ep USING (event_id)
+                WHERE state_group IN (%s) AND ep.event_id IS NULL
+            """ % (",".join("?" for _ in current_search),)
+            txn.execute(sql, list(current_search))
+
+            referenced = set(sg for sg, in txn)
+            referenced_groups |= referenced
+
+            # We don't continue iterating up the state group graphs for state
+            # groups that are referenced.
+            current_search -= referenced
+
+            rows = self._simple_select_many_txn(
+                txn,
+                table="state_group_edges",
+                column="prev_state_group",
+                iterable=current_search,
+                keyvalues={},
+                retcols=("prev_state_group", "state_group",),
+            )
+
+            prevs = set(row["state_group"] for row in rows)
+            # We don't bother re-handling groups we've already seen
+            prevs -= state_groups_seen
+            next_to_search |= prevs
+            state_groups_seen |= prevs
+
+            for row in rows:
+                # Note: Each state group can have at most one prev group
+                graph[row["state_group"]] = row["prev_state_group"]
+
+        to_delete = state_groups_seen - referenced_groups
+
+        to_dedelta = set()
+        for sg in referenced_groups:
+            prev_sg = graph.get(sg)
+            if prev_sg and prev_sg in to_delete:
+                to_dedelta.add(sg)
+
+        return to_delete, to_dedelta
+
     @defer.inlineCallbacks
     def is_event_after(self, event_id1, event_id2):
         """Returns True if event_id1 is after event_id2 in the stream
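The helper added above is a bounded breadth-first walk up the state_group_edges graph: groups still referenced by surviving events stop the walk, everything visited but unreferenced is deletable, and referenced groups sitting directly on a deletable prev group must be de-delta'ed. The same classification, sketched standalone over an in-memory graph (the function and argument names are illustrative only):

    def classify_groups(start, edges, is_referenced):
        # start: set of group IDs referenced by the events being purged
        # edges: dict of state_group -> prev_state_group (at most one each)
        # is_referenced(sg): True if a surviving event still uses the group
        seen = set(start)
        referenced = set()
        frontier = set(start)
        while frontier:
            referenced |= {sg for sg in frontier if is_referenced(sg)}
            # only keep walking above groups that nothing else references
            frontier = {
                edges[sg] for sg in frontier - referenced if sg in edges
            } - seen
            seen |= frontier
        to_delete = seen - referenced
        # referenced groups whose prev group is going away need de-delta-ing
        to_dedelta = {sg for sg in referenced if edges.get(sg) in to_delete}
        return to_delete, to_dedelta

    # e.g. group 2 deltas against group 3; only group 2 is still referenced:
    print(classify_groups({2, 3}, {2: 3}, lambda sg: sg == 2))  # ({3}, {2})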
@@ -25,7 +25,7 @@ logger = logging.getLogger(__name__)
 
 # Remember to update this number every time a change is made to database
 # schema files, so the users will be informed on server restarts.
-SCHEMA_VERSION = 51
+SCHEMA_VERSION = 52
 
 dir_path = os.path.abspath(os.path.dirname(__file__))
 
@@ -47,7 +47,7 @@ class RoomWorkerStore(SQLBaseStore):
         Args:
             room_id (str): The ID of the room to retrieve.
         Returns:
-            A namedtuple containing the room information, or an empty list.
+            A dict containing the room information, or None if the room is unknown.
         """
         return self._simple_select_one(
             table="rooms",
@@ -20,9 +20,6 @@ CREATE TABLE device_lists_remote_cache (
     content TEXT NOT NULL
 );
 
-CREATE INDEX device_lists_remote_cache_id ON device_lists_remote_cache(user_id, device_id);
-
-
 -- The last update we got for a user. Empty if we're not receiving updates for
 -- that user.
 CREATE TABLE device_lists_remote_extremeties (
@@ -30,7 +27,11 @@ CREATE TABLE device_lists_remote_extremeties (
     stream_id TEXT NOT NULL
 );
 
-CREATE INDEX device_lists_remote_extremeties_id ON device_lists_remote_extremeties(user_id, stream_id);
+-- we used to create non-unique indexes on these tables, but as of update 52 we create
+-- unique indexes concurrently:
+--
+-- CREATE INDEX device_lists_remote_cache_id ON device_lists_remote_cache(user_id, device_id);
+-- CREATE INDEX device_lists_remote_extremeties_id ON device_lists_remote_extremeties(user_id, stream_id);
 
 
 -- Stream of device lists updates. Includes both local and remotes
@@ -0,0 +1,19 @@
+/* Copyright 2018 New Vector Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+-- This is needed to efficiently check for unreferenced state groups during
+-- purge. Added events_to_state_group(state_group) index
+INSERT into background_updates (update_name, progress_json)
+    VALUES ('event_to_state_groups_sg_index', '{}');
@@ -0,0 +1,36 @@
+/* Copyright 2018 New Vector Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+-- register a background update which will create a unique index on
+-- device_lists_remote_cache
+INSERT into background_updates (update_name, progress_json)
+    VALUES ('device_lists_remote_cache_unique_idx', '{}');
+
+-- and one on device_lists_remote_extremeties
+INSERT into background_updates (update_name, progress_json, depends_on)
+    VALUES (
+        'device_lists_remote_extremeties_unique_idx', '{}',
+
+        -- doesn't really depend on this, but we need to make sure both happen
+        -- before we drop the old indexes.
+        'device_lists_remote_cache_unique_idx'
+    );
+
+-- once they complete, we can drop the old indexes.
+INSERT into background_updates (update_name, progress_json, depends_on)
+    VALUES (
+        'drop_device_list_streams_non_unique_indexes', '{}',
+        'device_lists_remote_extremeties_unique_idx'
+    );
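These rows chain three background updates through the depends_on column so that both unique indexes exist before the old non-unique ones are dropped. Conceptually, an update is only eligible to run once its dependency has completed; a toy sketch of that ordering (this is a model of the guarantee, not synapse's actual background-update scheduler):

    def run_in_dependency_order(updates):
        # updates: dict of update_name -> depends_on (None if independent)
        done, order = set(), []
        while len(order) < len(updates):
            progressed = False
            for name, dep in sorted(updates.items()):
                if name not in done and (dep is None or dep in done):
                    order.append(name)
                    done.add(name)
                    progressed = True
            assert progressed, "dependency cycle"
        return order

    print(run_in_dependency_order({
        "device_lists_remote_cache_unique_idx": None,
        "device_lists_remote_extremeties_unique_idx":
            "device_lists_remote_cache_unique_idx",
        "drop_device_list_streams_non_unique_indexes":
            "device_lists_remote_extremeties_unique_idx",
    }))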
synapse/storage/schema/delta/52/e2e_room_keys.sql (new file, 53 lines)
@@ -0,0 +1,53 @@
+/* Copyright 2018 New Vector Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/* Change version column to an integer so we can do MAX() sensibly
+ */
+CREATE TABLE e2e_room_keys_versions_new (
+    user_id TEXT NOT NULL,
+    version BIGINT NOT NULL,
+    algorithm TEXT NOT NULL,
+    auth_data TEXT NOT NULL,
+    deleted SMALLINT DEFAULT 0 NOT NULL
+);
+
+INSERT INTO e2e_room_keys_versions_new
+    SELECT user_id, CAST(version as BIGINT), algorithm, auth_data, deleted FROM e2e_room_keys_versions;
+
+DROP TABLE e2e_room_keys_versions;
+ALTER TABLE e2e_room_keys_versions_new RENAME TO e2e_room_keys_versions;
+
+CREATE UNIQUE INDEX e2e_room_keys_versions_idx ON e2e_room_keys_versions(user_id, version);
+
+/* Change e2e_rooms_keys to match
+ */
+CREATE TABLE e2e_room_keys_new (
+    user_id TEXT NOT NULL,
+    room_id TEXT NOT NULL,
+    session_id TEXT NOT NULL,
+    version BIGINT NOT NULL,
+    first_message_index INT,
+    forwarded_count INT,
+    is_verified BOOLEAN,
+    session_data TEXT NOT NULL
+);
+
+INSERT INTO e2e_room_keys_new
+    SELECT user_id, room_id, session_id, CAST(version as BIGINT), first_message_index, forwarded_count, is_verified, session_data FROM e2e_room_keys;
+
+DROP TABLE e2e_room_keys;
+ALTER TABLE e2e_room_keys_new RENAME TO e2e_room_keys;
+
+CREATE UNIQUE INDEX e2e_room_keys_idx ON e2e_room_keys(user_id, room_id, session_id);
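As the comment in the delta says, storing version as a BIGINT is what makes MAX() behave sensibly: on a TEXT column the comparison is lexicographic, so '9' sorts above '10'. A small self-contained illustration of the difference:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE v_text (version TEXT)")
    conn.execute("CREATE TABLE v_int (version BIGINT)")
    for v in ("1", "9", "10"):
        conn.execute("INSERT INTO v_text VALUES (?)", (v,))
        conn.execute("INSERT INTO v_int VALUES (?)", (int(v),))

    print(conn.execute("SELECT MAX(version) FROM v_text").fetchone())  # ('9',)
    print(conn.execute("SELECT MAX(version) FROM v_int").fetchone())   # (10,)
    conn.close()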
@@ -1257,6 +1257,7 @@ class StateStore(StateGroupWorkerStore, BackgroundUpdateStore):
     STATE_GROUP_DEDUPLICATION_UPDATE_NAME = "state_group_state_deduplication"
     STATE_GROUP_INDEX_UPDATE_NAME = "state_group_state_type_index"
     CURRENT_STATE_INDEX_UPDATE_NAME = "current_state_members_idx"
+    EVENT_STATE_GROUP_INDEX_UPDATE_NAME = "event_to_state_groups_sg_index"
 
     def __init__(self, db_conn, hs):
         super(StateStore, self).__init__(db_conn, hs)
@@ -1275,6 +1276,12 @@ class StateStore(StateGroupWorkerStore, BackgroundUpdateStore):
             columns=["state_key"],
             where_clause="type='m.room.member'",
         )
+        self.register_background_index_update(
+            self.EVENT_STATE_GROUP_INDEX_UPDATE_NAME,
+            index_name="event_to_state_groups_sg_index",
+            table="event_to_state_groups",
+            columns=["state_group"],
+        )
 
     def _store_event_state_mappings_txn(self, txn, events_and_contexts):
         state_groups = {}
@@ -169,8 +169,8 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         self.assertEqual(res, 404)
 
     @defer.inlineCallbacks
-    def test_get_missing_room_keys(self):
-        """Check that we get a 404 on querying missing room_keys
+    def test_get_missing_backup(self):
+        """Check that we get a 404 on querying missing backup
         """
         res = None
         try:
@@ -179,19 +179,20 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
             res = e.code
         self.assertEqual(res, 404)
 
-        # check we also get a 404 even if the version is valid
+    @defer.inlineCallbacks
+    def test_get_missing_room_keys(self):
+        """Check we get an empty response from an empty backup
+        """
         version = yield self.handler.create_version(self.local_user, {
             "algorithm": "m.megolm_backup.v1",
             "auth_data": "first_version_auth_data",
         })
         self.assertEqual(version, "1")
 
-        res = None
-        try:
-            yield self.handler.get_room_keys(self.local_user, version)
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        res = yield self.handler.get_room_keys(self.local_user, version)
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })
 
     # TODO: test the locking semantics when uploading room_keys,
     # although this is probably best done in sytest
@@ -345,17 +346,15 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         # check for bulk-delete
         yield self.handler.upload_room_keys(self.local_user, version, room_keys)
         yield self.handler.delete_room_keys(self.local_user, version)
-        res = None
-        try:
-            yield self.handler.get_room_keys(
+        res = yield self.handler.get_room_keys(
             self.local_user,
             version,
            room_id="!abc:matrix.org",
            session_id="c0ff33",
        )
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })
 
         # check for bulk-delete per room
         yield self.handler.upload_room_keys(self.local_user, version, room_keys)
@@ -364,17 +363,15 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
             version,
             room_id="!abc:matrix.org",
         )
-        res = None
-        try:
-            yield self.handler.get_room_keys(
+        res = yield self.handler.get_room_keys(
             self.local_user,
             version,
            room_id="!abc:matrix.org",
            session_id="c0ff33",
        )
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })
 
         # check for bulk-delete per session
         yield self.handler.upload_room_keys(self.local_user, version, room_keys)
@@ -384,14 +381,12 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
             room_id="!abc:matrix.org",
             session_id="c0ff33",
         )
-        res = None
-        try:
-            yield self.handler.get_room_keys(
+        res = yield self.handler.get_room_keys(
             self.local_user,
             version,
            room_id="!abc:matrix.org",
            session_id="c0ff33",
        )
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })
tests/push/__init__.py (new file, 0 lines)
tests/push/test_email.py (new file, 148 lines)
@ -0,0 +1,148 @@
|
|||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
# Copyright 2018 New Vector
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
# you may not use this file except in compliance with the License.
|
||||||
|
# You may obtain a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
# See the License for the specific language governing permissions and
|
||||||
|
# limitations under the License.
|
||||||
|
|
||||||
|
import os
|
||||||
|
|
||||||
|
import pkg_resources
|
||||||
|
|
||||||
|
from twisted.internet.defer import Deferred
|
||||||
|
|
||||||
|
from synapse.rest.client.v1 import admin, login, room
|
||||||
|
|
||||||
|
from tests.unittest import HomeserverTestCase
|
||||||
|
|
||||||
|
try:
|
||||||
|
from synapse.push.mailer import load_jinja2_templates
|
||||||
|
except Exception:
|
||||||
|
load_jinja2_templates = None
|
||||||
|
|
||||||
|
|
||||||
|
class EmailPusherTests(HomeserverTestCase):
|
||||||
|
|
||||||
|
skip = "No Jinja installed" if not load_jinja2_templates else None
|
||||||
|
servlets = [
|
||||||
|
admin.register_servlets,
|
||||||
|
room.register_servlets,
|
||||||
|
login.register_servlets,
|
||||||
|
]
|
||||||
|
user_id = True
|
||||||
|
hijack_auth = False
|
||||||
|
|
||||||
|
def make_homeserver(self, reactor, clock):
|
||||||
|
|
||||||
|
# List[Tuple[Deferred, args, kwargs]]
|
||||||
|
self.email_attempts = []
|
||||||
|
|
||||||
|
def sendmail(*args, **kwargs):
|
||||||
|
d = Deferred()
|
||||||
|
self.email_attempts.append((d, args, kwargs))
|
||||||
|
return d
|
||||||
|
|
||||||
|
config = self.default_config()
|
||||||
|
config.email_enable_notifs = True
|
||||||
|
config.start_pushers = True
|
||||||
|
|
||||||
|
config.email_template_dir = os.path.abspath(
|
||||||
|
pkg_resources.resource_filename('synapse', 'res/templates')
|
||||||
|
)
|
||||||
|
config.email_notif_template_html = "notif_mail.html"
|
||||||
|
config.email_notif_template_text = "notif_mail.txt"
|
||||||
|
config.email_smtp_host = "127.0.0.1"
|
||||||
|
config.email_smtp_port = 20
|
||||||
|
config.require_transport_security = False
|
||||||
|
config.email_smtp_user = None
|
||||||
|
config.email_app_name = "Matrix"
|
||||||
|
config.email_notif_from = "test@example.com"
|
||||||
|
|
||||||
|
hs = self.setup_test_homeserver(config=config, sendmail=sendmail)
|
||||||
|
|
||||||
|
return hs
|
||||||
|
|
||||||
|
def test_sends_email(self):
|
||||||
|
|
||||||
|
# Register the user who gets notified
|
||||||
|
user_id = self.register_user("user", "pass")
|
||||||
|
access_token = self.login("user", "pass")
|
||||||
|
|
||||||
|
# Register the user who sends the message
|
||||||
|
other_user_id = self.register_user("otheruser", "pass")
|
||||||
|
other_access_token = self.login("otheruser", "pass")
|
||||||
|
|
||||||
|
# Register the pusher
|
||||||
|
user_tuple = self.get_success(
|
||||||
|
self.hs.get_datastore().get_user_by_access_token(access_token)
|
||||||
|
)
|
||||||
|
token_id = user_tuple["token_id"]
|
||||||
|
|
||||||
|
self.get_success(
|
||||||
|
self.hs.get_pusherpool().add_pusher(
|
||||||
|
user_id=user_id,
|
||||||
|
access_token=token_id,
|
||||||
|
kind="email",
|
||||||
|
app_id="m.email",
|
||||||
|
app_display_name="Email Notifications",
|
||||||
|
device_display_name="a@example.com",
|
||||||
|
pushkey="a@example.com",
|
||||||
|
lang=None,
|
||||||
|
data={},
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create a room
|
||||||
|
room = self.helper.create_room_as(user_id, tok=access_token)
|
||||||
|
|
||||||
|
# Invite the other person
|
||||||
|
self.helper.invite(room=room, src=user_id, tok=access_token, targ=other_user_id)
|
||||||
|
|
||||||
|
# The other user joins
|
||||||
|
self.helper.join(room=room, user=other_user_id, tok=other_access_token)
|
||||||
|
|
||||||
|
# The other user sends some messages
|
||||||
|
self.helper.send(room, body="Hi!", tok=other_access_token)
|
||||||
|
self.helper.send(room, body="There!", tok=other_access_token)
|
||||||
|
|
||||||
|
# Get the stream ordering before it gets sent
|
||||||
|
pushers = self.get_success(
|
||||||
|
self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
|
||||||
|
)
|
||||||
|
self.assertEqual(len(pushers), 1)
|
||||||
|
last_stream_ordering = pushers[0]["last_stream_ordering"]
|
||||||
|
|
||||||
|
# Advance time a bit, so the pusher will register something has happened
|
||||||
|
self.pump(100)
|
||||||
|
|
||||||
|
# It hasn't succeeded yet, so the stream ordering shouldn't have moved
|
||||||
|
pushers = self.get_success(
|
||||||
|
self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
|
||||||
|
)
|
||||||
|
self.assertEqual(len(pushers), 1)
|
||||||
|
self.assertEqual(last_stream_ordering, pushers[0]["last_stream_ordering"])
|
||||||
|
|
||||||
|
# One email was attempted to be sent
|
||||||
|
self.assertEqual(len(self.email_attempts), 1)
|
||||||
|
|
||||||
|
# Make the email succeed
|
||||||
|
self.email_attempts[0][0].callback(True)
|
||||||
|
self.pump()
|
||||||
|
|
||||||
|
# One email was attempted to be sent
|
||||||
|
self.assertEqual(len(self.email_attempts), 1)
|
||||||
|
|
||||||
|
# The stream ordering has increased
|
||||||
|
pushers = self.get_success(
|
||||||
|
self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
|
||||||
|
)
|
||||||
|
self.assertEqual(len(pushers), 1)
|
||||||
|
self.assertTrue(pushers[0]["last_stream_ordering"] > last_stream_ordering)
|
tests/push/test_http.py (new file, 159 lines)
@@ -0,0 +1,159 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018 New Vector
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from mock import Mock
+
+from twisted.internet.defer import Deferred
+
+from synapse.rest.client.v1 import admin, login, room
+
+from tests.unittest import HomeserverTestCase
+
+try:
+    from synapse.push.mailer import load_jinja2_templates
+except Exception:
+    load_jinja2_templates = None
+
+
+class HTTPPusherTests(HomeserverTestCase):
+
+    skip = "No Jinja installed" if not load_jinja2_templates else None
+    servlets = [
+        admin.register_servlets,
+        room.register_servlets,
+        login.register_servlets,
+    ]
+    user_id = True
+    hijack_auth = False
+
+    def make_homeserver(self, reactor, clock):
+
+        self.push_attempts = []
+
+        m = Mock()
+
+        def post_json_get_json(url, body):
+            d = Deferred()
+            self.push_attempts.append((d, url, body))
+            return d
+
+        m.post_json_get_json = post_json_get_json
+
+        config = self.default_config()
+        config.start_pushers = True
+
+        hs = self.setup_test_homeserver(config=config, simple_http_client=m)
+
+        return hs
+
+    def test_sends_http(self):
+        """
+        The HTTP pusher will send pushes for each message to a HTTP endpoint
+        when configured to do so.
+        """
+        # Register the user who gets notified
+        user_id = self.register_user("user", "pass")
+        access_token = self.login("user", "pass")
+
+        # Register the user who sends the message
+        other_user_id = self.register_user("otheruser", "pass")
+        other_access_token = self.login("otheruser", "pass")
+
+        # Register the pusher
+        user_tuple = self.get_success(
+            self.hs.get_datastore().get_user_by_access_token(access_token)
+        )
+        token_id = user_tuple["token_id"]
+
+        self.get_success(
+            self.hs.get_pusherpool().add_pusher(
+                user_id=user_id,
+                access_token=token_id,
+                kind="http",
+                app_id="m.http",
+                app_display_name="HTTP Push Notifications",
+                device_display_name="pushy push",
+                pushkey="a@example.com",
+                lang=None,
+                data={"url": "example.com"},
+            )
+        )
+
+        # Create a room
+        room = self.helper.create_room_as(user_id, tok=access_token)
+
+        # Invite the other person
+        self.helper.invite(room=room, src=user_id, tok=access_token, targ=other_user_id)
+
+        # The other user joins
+        self.helper.join(room=room, user=other_user_id, tok=other_access_token)
+
+        # The other user sends some messages
+        self.helper.send(room, body="Hi!", tok=other_access_token)
+        self.helper.send(room, body="There!", tok=other_access_token)
+
+        # Get the stream ordering before it gets sent
+        pushers = self.get_success(
+            self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
+        )
+        self.assertEqual(len(pushers), 1)
+        last_stream_ordering = pushers[0]["last_stream_ordering"]
+
+        # Advance time a bit, so the pusher will register something has happened
+        self.pump()
+
+        # It hasn't succeeded yet, so the stream ordering shouldn't have moved
+        pushers = self.get_success(
+            self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
+        )
+        self.assertEqual(len(pushers), 1)
+        self.assertEqual(last_stream_ordering, pushers[0]["last_stream_ordering"])
+
+        # One push was attempted to be sent -- it'll be the first message
+        self.assertEqual(len(self.push_attempts), 1)
+        self.assertEqual(self.push_attempts[0][1], "example.com")
+        self.assertEqual(
+            self.push_attempts[0][2]["notification"]["content"]["body"], "Hi!"
+        )
+
+        # Make the push succeed
+        self.push_attempts[0][0].callback({})
+        self.pump()
+
+        # The stream ordering has increased
+        pushers = self.get_success(
+            self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
+        )
+        self.assertEqual(len(pushers), 1)
+        self.assertTrue(pushers[0]["last_stream_ordering"] > last_stream_ordering)
+        last_stream_ordering = pushers[0]["last_stream_ordering"]
+
+        # Now it'll try and send the second push message, which will be the second one
+        self.assertEqual(len(self.push_attempts), 2)
+        self.assertEqual(self.push_attempts[1][1], "example.com")
+        self.assertEqual(
+            self.push_attempts[1][2]["notification"]["content"]["body"], "There!"
+        )
+
+        # Make the second push succeed
+        self.push_attempts[1][0].callback({})
+        self.pump()
+
+        # The stream ordering has increased, again
+        pushers = self.get_success(
+            self.hs.get_datastore().get_pushers_by(dict(user_name=user_id))
+        )
+        self.assertEqual(len(pushers), 1)
+        self.assertTrue(pushers[0]["last_stream_ordering"] > last_stream_ordering)
@@ -28,8 +28,8 @@ ROOM_ID = "!room:blue"
 
 
     def dict_equals(self, other):
-        me = encode_canonical_json(self._event_dict)
-        them = encode_canonical_json(other._event_dict)
+        me = encode_canonical_json(self.get_pdu_json())
+        them = encode_canonical_json(other.get_pdu_json())
         return me == them
 
 
tests/rest/client/test_consent.py (new file, 118 lines)
@@ -0,0 +1,118 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018 New Vector
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+from synapse.api.urls import ConsentURIBuilder
+from synapse.rest.client.v1 import admin, login, room
+from synapse.rest.consent import consent_resource
+
+from tests import unittest
+from tests.server import render
+
+try:
+    from synapse.push.mailer import load_jinja2_templates
+except Exception:
+    load_jinja2_templates = None
+
+
+class ConsentResourceTestCase(unittest.HomeserverTestCase):
+    skip = "No Jinja installed" if not load_jinja2_templates else None
+    servlets = [
+        admin.register_servlets,
+        room.register_servlets,
+        login.register_servlets,
+    ]
+    user_id = True
+    hijack_auth = False
+
+    def make_homeserver(self, reactor, clock):
+
+        config = self.default_config()
+        config.user_consent_version = "1"
+        config.public_baseurl = ""
+        config.form_secret = "123abc"
+
+        # Make some temporary templates...
+        temp_consent_path = self.mktemp()
+        os.mkdir(temp_consent_path)
+        os.mkdir(os.path.join(temp_consent_path, 'en'))
+        config.user_consent_template_dir = os.path.abspath(temp_consent_path)
+
+        with open(os.path.join(temp_consent_path, "en/1.html"), 'w') as f:
+            f.write("{{version}},{{has_consented}}")
+
+        with open(os.path.join(temp_consent_path, "en/success.html"), 'w') as f:
+            f.write("yay!")
+
+        hs = self.setup_test_homeserver(config=config)
+        return hs
+
+    def test_render_public_consent(self):
+        """You can observe the terms form without specifying a user"""
+        resource = consent_resource.ConsentResource(self.hs)
+        request, channel = self.make_request("GET", "/consent?v=1", shorthand=False)
+        render(request, resource, self.reactor)
+        self.assertEqual(channel.code, 200)
+
+    def test_accept_consent(self):
+        """
+        A user can use the consent form to accept the terms.
+        """
+        uri_builder = ConsentURIBuilder(self.hs.config)
+        resource = consent_resource.ConsentResource(self.hs)
+
+        # Register a user
+        user_id = self.register_user("user", "pass")
+        access_token = self.login("user", "pass")
+
+        # Fetch the consent page, to get the consent version
+        consent_uri = (
+            uri_builder.build_user_consent_uri(user_id).replace("_matrix/", "")
+            + "&u=user"
+        )
+        request, channel = self.make_request(
+            "GET", consent_uri, access_token=access_token, shorthand=False
+        )
+        render(request, resource, self.reactor)
+        self.assertEqual(channel.code, 200)
+
+        # Get the version from the body, and whether we've consented
+        version, consented = channel.result["body"].decode('ascii').split(",")
+        self.assertEqual(consented, "False")
+
+        # POST to the consent page, saying we've agreed
+        request, channel = self.make_request(
+            "POST",
+            consent_uri + "&v=" + version,
+            access_token=access_token,
+            shorthand=False,
+        )
+        render(request, resource, self.reactor)
+        self.assertEqual(channel.code, 200)
+
+        # Fetch the consent page, to get the consent version -- it should have
+        # changed
+        request, channel = self.make_request(
+            "GET", consent_uri, access_token=access_token, shorthand=False
+        )
+        render(request, resource, self.reactor)
+        self.assertEqual(channel.code, 200)
+
+        # Get the version from the body, and check that it's the version we
+        # agreed to, and that we've consented to it.
+        version, consented = channel.result["body"].decode('ascii').split(",")
+        self.assertEqual(consented, "True")
+        self.assertEqual(version, "1")
@ -19,24 +19,17 @@ import json
|
|||||||
|
|
||||||
from mock import Mock
|
from mock import Mock
|
||||||
|
|
||||||
from synapse.http.server import JsonResource
|
|
||||||
from synapse.rest.client.v1.admin import register_servlets
|
from synapse.rest.client.v1.admin import register_servlets
|
||||||
from synapse.util import Clock
|
|
||||||
|
|
||||||
from tests import unittest
|
from tests import unittest
|
||||||
from tests.server import (
|
|
||||||
ThreadedMemoryReactorClock,
|
|
||||||
make_request,
|
|
||||||
render,
|
|
||||||
setup_test_homeserver,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class UserRegisterTestCase(unittest.TestCase):
|
class UserRegisterTestCase(unittest.HomeserverTestCase):
|
||||||
def setUp(self):
|
|
||||||
|
servlets = [register_servlets]
|
||||||
|
|
||||||
|
def make_homeserver(self, reactor, clock):
|
||||||
|
|
||||||
self.clock = ThreadedMemoryReactorClock()
|
|
||||||
self.hs_clock = Clock(self.clock)
|
|
||||||
self.url = "/_matrix/client/r0/admin/register"
|
self.url = "/_matrix/client/r0/admin/register"
|
||||||
|
|
||||||
self.registration_handler = Mock()
|
self.registration_handler = Mock()
|
||||||
@ -50,17 +43,14 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
|
|
||||||
self.secrets = Mock()
|
self.secrets = Mock()
|
||||||
|
|
||||||
self.hs = setup_test_homeserver(
|
self.hs = self.setup_test_homeserver()
|
||||||
self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.clock
|
|
||||||
)
|
|
||||||
|
|
||||||
self.hs.config.registration_shared_secret = u"shared"
|
self.hs.config.registration_shared_secret = u"shared"
|
||||||
|
|
||||||
self.hs.get_media_repository = Mock()
|
self.hs.get_media_repository = Mock()
|
||||||
self.hs.get_deactivate_account_handler = Mock()
|
self.hs.get_deactivate_account_handler = Mock()
|
||||||
|
|
||||||
self.resource = JsonResource(self.hs)
|
return self.hs
|
||||||
register_servlets(self.hs, self.resource)
|
|
||||||
|
|
||||||
def test_disabled(self):
|
def test_disabled(self):
|
||||||
"""
|
"""
|
||||||
@ -69,8 +59,8 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
"""
|
"""
|
||||||
self.hs.config.registration_shared_secret = None
|
self.hs.config.registration_shared_secret = None
|
||||||
|
|
||||||
request, channel = make_request("POST", self.url, b'{}')
|
request, channel = self.make_request("POST", self.url, b'{}')
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
||||||
self.assertEqual(
|
self.assertEqual(
|
||||||
@ -87,8 +77,8 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
|
|
||||||
self.hs.get_secrets = Mock(return_value=secrets)
|
self.hs.get_secrets = Mock(return_value=secrets)
|
||||||
|
|
||||||
request, channel = make_request("GET", self.url)
|
request, channel = self.make_request("GET", self.url)
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(channel.json_body, {"nonce": "abcd"})
|
self.assertEqual(channel.json_body, {"nonce": "abcd"})
|
||||||
|
|
||||||
@ -97,25 +87,25 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
Calling GET on the endpoint will return a randomised nonce, which will
|
Calling GET on the endpoint will return a randomised nonce, which will
|
||||||
only last for SALT_TIMEOUT (60s).
|
only last for SALT_TIMEOUT (60s).
|
||||||
"""
|
"""
|
||||||
request, channel = make_request("GET", self.url)
|
request, channel = self.make_request("GET", self.url)
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
nonce = channel.json_body["nonce"]
|
nonce = channel.json_body["nonce"]
|
||||||
|
|
||||||
# 59 seconds
|
# 59 seconds
|
||||||
self.clock.advance(59)
|
self.reactor.advance(59)
|
||||||
|
|
||||||
body = json.dumps({"nonce": nonce})
|
body = json.dumps({"nonce": nonce})
|
||||||
request, channel = make_request("POST", self.url, body.encode('utf8'))
|
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
||||||
self.assertEqual('username must be specified', channel.json_body["error"])
|
self.assertEqual('username must be specified', channel.json_body["error"])
|
||||||
|
|
||||||
# 61 seconds
|
# 61 seconds
|
||||||
self.clock.advance(2)
|
self.reactor.advance(2)
|
||||||
|
|
||||||
request, channel = make_request("POST", self.url, body.encode('utf8'))
|
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
||||||
self.assertEqual('unrecognised nonce', channel.json_body["error"])
|
self.assertEqual('unrecognised nonce', channel.json_body["error"])
|
||||||
@ -124,8 +114,8 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
"""
|
"""
|
||||||
Only the provided nonce can be used, as it's checked in the MAC.
|
Only the provided nonce can be used, as it's checked in the MAC.
|
||||||
"""
|
"""
|
||||||
request, channel = make_request("GET", self.url)
|
request, channel = self.make_request("GET", self.url)
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
nonce = channel.json_body["nonce"]
|
nonce = channel.json_body["nonce"]
|
||||||
|
|
||||||
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
|
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
|
||||||
@ -141,8 +131,8 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
"mac": want_mac,
|
"mac": want_mac,
|
||||||
}
|
}
|
||||||
)
|
)
|
||||||
request, channel = make_request("POST", self.url, body.encode('utf8'))
|
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
|
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
|
||||||
self.assertEqual("HMAC incorrect", channel.json_body["error"])
|
self.assertEqual("HMAC incorrect", channel.json_body["error"])
|
||||||
@ -152,8 +142,8 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
When the correct nonce is provided, and the right key is provided, the
|
When the correct nonce is provided, and the right key is provided, the
|
||||||
user is registered.
|
user is registered.
|
||||||
"""
|
"""
|
||||||
request, channel = make_request("GET", self.url)
|
request, channel = self.make_request("GET", self.url)
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
nonce = channel.json_body["nonce"]
|
nonce = channel.json_body["nonce"]
|
||||||
|
|
||||||
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
|
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
|
||||||
@ -169,8 +159,8 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
"mac": want_mac,
|
"mac": want_mac,
|
||||||
}
|
}
|
||||||
)
|
)
|
||||||
request, channel = make_request("POST", self.url, body.encode('utf8'))
|
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||||
self.assertEqual("@bob:test", channel.json_body["user_id"])
|
self.assertEqual("@bob:test", channel.json_body["user_id"])
|
||||||
@ -179,8 +169,8 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
"""
|
"""
|
||||||
A valid unrecognised nonce.
|
A valid unrecognised nonce.
|
||||||
"""
|
"""
|
||||||
request, channel = make_request("GET", self.url)
|
request, channel = self.make_request("GET", self.url)
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
nonce = channel.json_body["nonce"]
|
nonce = channel.json_body["nonce"]
|
||||||
|
|
||||||
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
|
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
|
||||||
@ -196,15 +186,15 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
"mac": want_mac,
|
"mac": want_mac,
|
||||||
}
|
}
|
||||||
)
|
)
|
||||||
request, channel = make_request("POST", self.url, body.encode('utf8'))
|
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||||
self.assertEqual("@bob:test", channel.json_body["user_id"])
|
self.assertEqual("@bob:test", channel.json_body["user_id"])
|
||||||
|
|
||||||
# Now, try and reuse it
|
# Now, try and reuse it
|
||||||
request, channel = make_request("POST", self.url, body.encode('utf8'))
|
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
||||||
self.assertEqual('unrecognised nonce', channel.json_body["error"])
|
self.assertEqual('unrecognised nonce', channel.json_body["error"])
|
||||||
@ -217,8 +207,8 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
"""
|
"""
|
||||||
|
|
||||||
def nonce():
|
def nonce():
|
||||||
request, channel = make_request("GET", self.url)
|
request, channel = self.make_request("GET", self.url)
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
return channel.json_body["nonce"]
|
return channel.json_body["nonce"]
|
||||||
|
|
||||||
#
|
#
|
||||||
@ -227,8 +217,8 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
|
|
||||||
# Must be present
|
# Must be present
|
||||||
body = json.dumps({})
|
body = json.dumps({})
|
||||||
request, channel = make_request("POST", self.url, body.encode('utf8'))
|
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
||||||
self.assertEqual('nonce must be specified', channel.json_body["error"])
|
self.assertEqual('nonce must be specified', channel.json_body["error"])
|
||||||
@ -239,32 +229,32 @@ class UserRegisterTestCase(unittest.TestCase):
|
|||||||
|
|
||||||
# Must be present
|
# Must be present
|
||||||
body = json.dumps({"nonce": nonce()})
|
body = json.dumps({"nonce": nonce()})
|
||||||
request, channel = make_request("POST", self.url, body.encode('utf8'))
|
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
|
||||||
render(request, self.resource, self.clock)
|
self.render(request)
|
||||||
|
|
||||||
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
|
||||||
self.assertEqual('username must be specified', channel.json_body["error"])
|
self.assertEqual('username must be specified', channel.json_body["error"])
|
||||||
|
|
||||||
# Must be a string
|
# Must be a string
|
||||||
body = json.dumps({"nonce": nonce(), "username": 1234})
|
body = json.dumps({"nonce": nonce(), "username": 1234})
|
||||||
request, channel = make_request("POST", self.url, body.encode('utf8'))
|
+        request, channel = self.make_request("POST", self.url, body.encode('utf8'))
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
         self.assertEqual('Invalid username', channel.json_body["error"])

         # Must not have null bytes
         body = json.dumps({"nonce": nonce(), "username": u"abcd\u0000"})
-        request, channel = make_request("POST", self.url, body.encode('utf8'))
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request("POST", self.url, body.encode('utf8'))
+        self.render(request)

         self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
         self.assertEqual('Invalid username', channel.json_body["error"])

         # Must not have null bytes
         body = json.dumps({"nonce": nonce(), "username": "a" * 1000})
-        request, channel = make_request("POST", self.url, body.encode('utf8'))
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request("POST", self.url, body.encode('utf8'))
+        self.render(request)

         self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
         self.assertEqual('Invalid username', channel.json_body["error"])

@@ -275,16 +265,16 @@ class UserRegisterTestCase(unittest.TestCase):

         # Must be present
         body = json.dumps({"nonce": nonce(), "username": "a"})
-        request, channel = make_request("POST", self.url, body.encode('utf8'))
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request("POST", self.url, body.encode('utf8'))
+        self.render(request)

         self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
         self.assertEqual('password must be specified', channel.json_body["error"])

         # Must be a string
         body = json.dumps({"nonce": nonce(), "username": "a", "password": 1234})
-        request, channel = make_request("POST", self.url, body.encode('utf8'))
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request("POST", self.url, body.encode('utf8'))
+        self.render(request)

         self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
         self.assertEqual('Invalid password', channel.json_body["error"])

@@ -293,16 +283,16 @@ class UserRegisterTestCase(unittest.TestCase):
         body = json.dumps(
             {"nonce": nonce(), "username": "a", "password": u"abcd\u0000"}
         )
-        request, channel = make_request("POST", self.url, body.encode('utf8'))
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request("POST", self.url, body.encode('utf8'))
+        self.render(request)

         self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
         self.assertEqual('Invalid password', channel.json_body["error"])

         # Super long
         body = json.dumps({"nonce": nonce(), "username": "a", "password": "A" * 1000})
-        request, channel = make_request("POST", self.url, body.encode('utf8'))
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request("POST", self.url, body.encode('utf8'))
+        self.render(request)

         self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
         self.assertEqual('Invalid password', channel.json_body["error"])
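Note on the pattern above: the repeated mechanical change in these registration tests swaps the free functions make_request()/render() for methods on the test case, which carry the reactor and resource themselves. A minimal sketch of the resulting test shape (the servlet list and path here are illustrative, not taken from this diff):

    from tests import unittest

    class ExampleTestCase(unittest.HomeserverTestCase):
        # hypothetical: real tests list the servlets under test here
        servlets = []

        def test_example(self):
            # the harness owns the reactor and resource, so neither is passed in
            request, channel = self.make_request("GET", "/_matrix/client/r0/sync")
            self.render(request)
            self.assertEqual(channel.result["code"], b"200")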
@@ -45,11 +45,11 @@ class CreateUserServletTestCase(unittest.TestCase):
         )

         handlers = Mock(registration_handler=self.registration_handler)
-        self.clock = MemoryReactorClock()
-        self.hs_clock = Clock(self.clock)
+        self.reactor = MemoryReactorClock()
+        self.hs_clock = Clock(self.reactor)

         self.hs = self.hs = setup_test_homeserver(
-            self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.clock
+            self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.reactor
         )
         self.hs.get_datastore = Mock(return_value=self.datastore)
         self.hs.get_handlers = Mock(return_value=handlers)

@@ -76,8 +76,8 @@ class CreateUserServletTestCase(unittest.TestCase):
             return_value=(user_id, token)
         )

-        request, channel = make_request(b"POST", url, request_data)
-        render(request, res, self.clock)
+        request, channel = make_request(self.reactor, b"POST", url, request_data)
+        render(request, res, self.reactor)

         self.assertEquals(channel.result["code"], b"200")

@@ -169,7 +169,7 @@ class RestHelper(object):
             path = path + "?access_token=%s" % tok

         request, channel = make_request(
-            "POST", path, json.dumps(content).encode('utf8')
+            self.hs.get_reactor(), "POST", path, json.dumps(content).encode('utf8')
         )
         render(request, self.resource, self.hs.get_reactor())

@@ -217,7 +217,9 @@ class RestHelper(object):

         data = {"membership": membership}

-        request, channel = make_request("PUT", path, json.dumps(data).encode('utf8'))
+        request, channel = make_request(
+            self.hs.get_reactor(), "PUT", path, json.dumps(data).encode('utf8')
+        )

         render(request, self.resource, self.hs.get_reactor())

@@ -228,18 +230,6 @@ class RestHelper(object):

         self.auth_user_id = temp_id

-    @defer.inlineCallbacks
-    def register(self, user_id):
-        (code, response) = yield self.mock_resource.trigger(
-            "POST",
-            "/_matrix/client/r0/register",
-            json.dumps(
-                {"user": user_id, "password": "test", "type": "m.login.password"}
-            ),
-        )
-        self.assertEquals(200, code)
-        defer.returnValue(response)
-
     def send(self, room_id, body=None, txn_id=None, tok=None, expect_code=200):
         if txn_id is None:
             txn_id = "m%s" % (str(time.time()))

@@ -251,7 +241,9 @@ class RestHelper(object):
         if tok:
             path = path + "?access_token=%s" % tok

-        request, channel = make_request("PUT", path, json.dumps(content).encode('utf8'))
+        request, channel = make_request(
+            self.hs.get_reactor(), "PUT", path, json.dumps(content).encode('utf8')
+        )
         render(request, self.resource, self.hs.get_reactor())

         assert int(channel.result["code"]) == expect_code, (
@@ -13,84 +13,47 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import synapse.types
 from synapse.api.errors import Codes
-from synapse.http.server import JsonResource
 from synapse.rest.client.v2_alpha import filter
-from synapse.types import UserID
-from synapse.util import Clock

 from tests import unittest
-from tests.server import (
-    ThreadedMemoryReactorClock as MemoryReactorClock,
-    make_request,
-    render,
-    setup_test_homeserver,
-)

 PATH_PREFIX = "/_matrix/client/v2_alpha"


-class FilterTestCase(unittest.TestCase):
+class FilterTestCase(unittest.HomeserverTestCase):

-    USER_ID = "@apple:test"
+    user_id = "@apple:test"
+    hijack_auth = True
     EXAMPLE_FILTER = {"room": {"timeline": {"types": ["m.room.message"]}}}
     EXAMPLE_FILTER_JSON = b'{"room": {"timeline": {"types": ["m.room.message"]}}}'
-    TO_REGISTER = [filter]
+    servlets = [filter.register_servlets]

-    def setUp(self):
-        self.clock = MemoryReactorClock()
-        self.hs_clock = Clock(self.clock)
-
-        self.hs = setup_test_homeserver(
-            self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.clock
-        )
-
-        self.auth = self.hs.get_auth()
-
-        def get_user_by_access_token(token=None, allow_guest=False):
-            return {
-                "user": UserID.from_string(self.USER_ID),
-                "token_id": 1,
-                "is_guest": False,
-            }
-
-        def get_user_by_req(request, allow_guest=False, rights="access"):
-            return synapse.types.create_requester(
-                UserID.from_string(self.USER_ID), 1, False, None
-            )
-
-        self.auth.get_user_by_access_token = get_user_by_access_token
-        self.auth.get_user_by_req = get_user_by_req
-
-        self.store = self.hs.get_datastore()
-        self.filtering = self.hs.get_filtering()
-        self.resource = JsonResource(self.hs)
-
-        for r in self.TO_REGISTER:
-            r.register_servlets(self.hs, self.resource)
+    def prepare(self, reactor, clock, hs):
+        self.filtering = hs.get_filtering()
+        self.store = hs.get_datastore()

     def test_add_filter(self):
-        request, channel = make_request(
+        request, channel = self.make_request(
             "POST",
-            "/_matrix/client/r0/user/%s/filter" % (self.USER_ID),
+            "/_matrix/client/r0/user/%s/filter" % (self.user_id),
             self.EXAMPLE_FILTER_JSON,
         )
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.assertEqual(channel.result["code"], b"200")
         self.assertEqual(channel.json_body, {"filter_id": "0"})
         filter = self.store.get_user_filter(user_localpart="apple", filter_id=0)
-        self.clock.advance(0)
+        self.pump()
         self.assertEquals(filter.result, self.EXAMPLE_FILTER)

     def test_add_filter_for_other_user(self):
-        request, channel = make_request(
+        request, channel = self.make_request(
             "POST",
             "/_matrix/client/r0/user/%s/filter" % ("@watermelon:test"),
             self.EXAMPLE_FILTER_JSON,
         )
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.assertEqual(channel.result["code"], b"403")
         self.assertEquals(channel.json_body["errcode"], Codes.FORBIDDEN)

@@ -98,12 +61,12 @@ class FilterTestCase(unittest.TestCase):
     def test_add_filter_non_local_user(self):
         _is_mine = self.hs.is_mine
         self.hs.is_mine = lambda target_user: False
-        request, channel = make_request(
+        request, channel = self.make_request(
             "POST",
-            "/_matrix/client/r0/user/%s/filter" % (self.USER_ID),
+            "/_matrix/client/r0/user/%s/filter" % (self.user_id),
             self.EXAMPLE_FILTER_JSON,
         )
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.hs.is_mine = _is_mine
         self.assertEqual(channel.result["code"], b"403")

@@ -113,21 +76,21 @@ class FilterTestCase(unittest.TestCase):
         filter_id = self.filtering.add_user_filter(
             user_localpart="apple", user_filter=self.EXAMPLE_FILTER
         )
-        self.clock.advance(1)
+        self.reactor.advance(1)
         filter_id = filter_id.result
-        request, channel = make_request(
-            "GET", "/_matrix/client/r0/user/%s/filter/%s" % (self.USER_ID, filter_id)
+        request, channel = self.make_request(
+            "GET", "/_matrix/client/r0/user/%s/filter/%s" % (self.user_id, filter_id)
         )
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.assertEqual(channel.result["code"], b"200")
         self.assertEquals(channel.json_body, self.EXAMPLE_FILTER)

     def test_get_filter_non_existant(self):
-        request, channel = make_request(
-            "GET", "/_matrix/client/r0/user/%s/filter/12382148321" % (self.USER_ID)
+        request, channel = self.make_request(
+            "GET", "/_matrix/client/r0/user/%s/filter/12382148321" % (self.user_id)
         )
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.assertEqual(channel.result["code"], b"400")
         self.assertEquals(channel.json_body["errcode"], Codes.NOT_FOUND)

@@ -135,18 +98,18 @@ class FilterTestCase(unittest.TestCase):
     # Currently invalid params do not have an appropriate errcode
     # in errors.py
     def test_get_filter_invalid_id(self):
-        request, channel = make_request(
-            "GET", "/_matrix/client/r0/user/%s/filter/foobar" % (self.USER_ID)
+        request, channel = self.make_request(
+            "GET", "/_matrix/client/r0/user/%s/filter/foobar" % (self.user_id)
         )
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.assertEqual(channel.result["code"], b"400")

     # No ID also returns an invalid_id error
     def test_get_filter_no_id(self):
-        request, channel = make_request(
-            "GET", "/_matrix/client/r0/user/%s/filter/" % (self.USER_ID)
+        request, channel = self.make_request(
+            "GET", "/_matrix/client/r0/user/%s/filter/" % (self.user_id)
        )
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.assertEqual(channel.result["code"], b"400")
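The FilterTestCase rewrite above is the fullest example of the new harness lifecycle: class attributes declare the user and servlets, auth hijacking replaces the hand-stubbed get_user_by_req/get_user_by_access_token, and prepare() receives the already-built reactor, clock and homeserver. A hedged sketch of that lifecycle (the hook names are as used above; the body is illustrative):

    from tests import unittest

    class LifecycleSketch(unittest.HomeserverTestCase):
        user_id = "@apple:test"   # requests are authenticated as this user
        hijack_auth = True        # bypass real access-token checks
        servlets = []             # register_servlets callables go here

        def prepare(self, reactor, clock, hs):
            # runs after the homeserver is built; grab handles for the tests
            self.store = hs.get_datastore()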
@@ -3,22 +3,19 @@ import json
 from mock import Mock

 from twisted.python import failure
-from twisted.test.proto_helpers import MemoryReactorClock

 from synapse.api.errors import InteractiveAuthIncompleteError
-from synapse.http.server import JsonResource
 from synapse.rest.client.v2_alpha.register import register_servlets
-from synapse.util import Clock

 from tests import unittest
-from tests.server import make_request, render, setup_test_homeserver


-class RegisterRestServletTestCase(unittest.TestCase):
-    def setUp(self):
+class RegisterRestServletTestCase(unittest.HomeserverTestCase):
+
+    servlets = [register_servlets]
+
+    def make_homeserver(self, reactor, clock):

-        self.clock = MemoryReactorClock()
-        self.hs_clock = Clock(self.clock)
         self.url = b"/_matrix/client/r0/register"

         self.appservice = None

@@ -46,9 +43,7 @@ class RegisterRestServletTestCase(unittest.TestCase):
             identity_handler=self.identity_handler,
             login_handler=self.login_handler,
         )
-        self.hs = setup_test_homeserver(
-            self.addCleanup, http_client=None, clock=self.hs_clock, reactor=self.clock
-        )
+        self.hs = self.setup_test_homeserver()
         self.hs.get_auth = Mock(return_value=self.auth)
         self.hs.get_handlers = Mock(return_value=self.handlers)
         self.hs.get_auth_handler = Mock(return_value=self.auth_handler)

@@ -58,8 +53,7 @@ class RegisterRestServletTestCase(unittest.TestCase):
         self.hs.config.registrations_require_3pid = []
         self.hs.config.auto_join_rooms = []

-        self.resource = JsonResource(self.hs)
-        register_servlets(self.hs, self.resource)
+        return self.hs

     def test_POST_appservice_registration_valid(self):
         user_id = "@kermit:muppet"

@@ -69,10 +63,10 @@ class RegisterRestServletTestCase(unittest.TestCase):
         self.auth_handler.get_access_token_for_user_id = Mock(return_value=token)
         request_data = json.dumps({"username": "kermit"})

-        request, channel = make_request(
+        request, channel = self.make_request(
             b"POST", self.url + b"?access_token=i_am_an_app_service", request_data
         )
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.assertEquals(channel.result["code"], b"200", channel.result)
         det_data = {

@@ -85,25 +79,25 @@ class RegisterRestServletTestCase(unittest.TestCase):
     def test_POST_appservice_registration_invalid(self):
         self.appservice = None  # no application service exists
         request_data = json.dumps({"username": "kermit"})
-        request, channel = make_request(
+        request, channel = self.make_request(
             b"POST", self.url + b"?access_token=i_am_an_app_service", request_data
         )
-        render(request, self.resource, self.clock)
+        self.render(request)

         self.assertEquals(channel.result["code"], b"401", channel.result)

     def test_POST_bad_password(self):
         request_data = json.dumps({"username": "kermit", "password": 666})
-        request, channel = make_request(b"POST", self.url, request_data)
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request(b"POST", self.url, request_data)
+        self.render(request)

         self.assertEquals(channel.result["code"], b"400", channel.result)
         self.assertEquals(channel.json_body["error"], "Invalid password")

     def test_POST_bad_username(self):
         request_data = json.dumps({"username": 777, "password": "monkey"})
-        request, channel = make_request(b"POST", self.url, request_data)
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request(b"POST", self.url, request_data)
+        self.render(request)

         self.assertEquals(channel.result["code"], b"400", channel.result)
         self.assertEquals(channel.json_body["error"], "Invalid username")

@@ -121,8 +115,8 @@ class RegisterRestServletTestCase(unittest.TestCase):
         self.auth_handler.get_access_token_for_user_id = Mock(return_value=token)
         self.device_handler.check_device_registered = Mock(return_value=device_id)

-        request, channel = make_request(b"POST", self.url, request_data)
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request(b"POST", self.url, request_data)
+        self.render(request)

         det_data = {
             "user_id": user_id,

@@ -143,8 +137,8 @@ class RegisterRestServletTestCase(unittest.TestCase):
         self.auth_result = (None, {"username": "kermit", "password": "monkey"}, None)
         self.registration_handler.register = Mock(return_value=("@user:id", "t"))

-        request, channel = make_request(b"POST", self.url, request_data)
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request(b"POST", self.url, request_data)
+        self.render(request)

         self.assertEquals(channel.result["code"], b"403", channel.result)
         self.assertEquals(channel.json_body["error"], "Registration has been disabled")

@@ -155,8 +149,8 @@ class RegisterRestServletTestCase(unittest.TestCase):
         self.hs.config.allow_guest_access = True
         self.registration_handler.register = Mock(return_value=(user_id, None))

-        request, channel = make_request(b"POST", self.url + b"?kind=guest", b"{}")
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
+        self.render(request)

         det_data = {
             "user_id": user_id,

@@ -169,8 +163,8 @@ class RegisterRestServletTestCase(unittest.TestCase):
     def test_POST_disabled_guest_registration(self):
         self.hs.config.allow_guest_access = False

-        request, channel = make_request(b"POST", self.url + b"?kind=guest", b"{}")
-        render(request, self.resource, self.clock)
+        request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
+        self.render(request)

         self.assertEquals(channel.result["code"], b"403", channel.result)
         self.assertEquals(channel.json_body["error"], "Guest access is disabled")
@@ -15,9 +15,11 @@

 from mock import Mock

+from synapse.rest.client.v1 import admin, login, room
 from synapse.rest.client.v2_alpha import sync

 from tests import unittest
+from tests.server import TimedOutException


 class FilterTestCase(unittest.HomeserverTestCase):

@@ -65,3 +67,124 @@ class FilterTestCase(unittest.HomeserverTestCase):
                 ["next_batch", "rooms", "account_data", "to_device", "device_lists"]
             ).issubset(set(channel.json_body.keys()))
         )
+
+
+class SyncTypingTests(unittest.HomeserverTestCase):
+
+    servlets = [
+        admin.register_servlets,
+        room.register_servlets,
+        login.register_servlets,
+        sync.register_servlets,
+    ]
+    user_id = True
+    hijack_auth = False
+
+    def test_sync_backwards_typing(self):
+        """
+        If the typing serial goes backwards and the typing handler is then reset
+        (such as when the master restarts and sets the typing serial to 0), we
+        do not incorrectly return typing information that had a serial greater
+        than the now-reset serial.
+        """
+        typing_url = "/rooms/%s/typing/%s?access_token=%s"
+        sync_url = "/sync?timeout=3000000&access_token=%s&since=%s"
+
+        # Register the user who gets notified
+        user_id = self.register_user("user", "pass")
+        access_token = self.login("user", "pass")
+
+        # Register the user who sends the message
+        other_user_id = self.register_user("otheruser", "pass")
+        other_access_token = self.login("otheruser", "pass")
+
+        # Create a room
+        room = self.helper.create_room_as(user_id, tok=access_token)
+
+        # Invite the other person
+        self.helper.invite(room=room, src=user_id, tok=access_token, targ=other_user_id)
+
+        # The other user joins
+        self.helper.join(room=room, user=other_user_id, tok=other_access_token)
+
+        # The other user sends some messages
+        self.helper.send(room, body="Hi!", tok=other_access_token)
+        self.helper.send(room, body="There!", tok=other_access_token)
+
+        # Start typing.
+        request, channel = self.make_request(
+            "PUT",
+            typing_url % (room, other_user_id, other_access_token),
+            b'{"typing": true, "timeout": 30000}',
+        )
+        self.render(request)
+        self.assertEquals(200, channel.code)
+
+        request, channel = self.make_request(
+            "GET", "/sync?access_token=%s" % (access_token,)
+        )
+        self.render(request)
+        self.assertEquals(200, channel.code)
+        next_batch = channel.json_body["next_batch"]
+
+        # Stop typing.
+        request, channel = self.make_request(
+            "PUT",
+            typing_url % (room, other_user_id, other_access_token),
+            b'{"typing": false}',
+        )
+        self.render(request)
+        self.assertEquals(200, channel.code)
+
+        # Start typing.
+        request, channel = self.make_request(
+            "PUT",
+            typing_url % (room, other_user_id, other_access_token),
+            b'{"typing": true, "timeout": 30000}',
+        )
+        self.render(request)
+        self.assertEquals(200, channel.code)
+
+        # Should return immediately
+        request, channel = self.make_request(
+            "GET", sync_url % (access_token, next_batch)
+        )
+        self.render(request)
+        self.assertEquals(200, channel.code)
+        next_batch = channel.json_body["next_batch"]
+
+        # Reset typing serial back to 0, as if the master had.
+        typing = self.hs.get_typing_handler()
+        typing._latest_room_serial = 0
+
+        # Since it checks the state token, we need some state to update to
+        # invalidate the stream token.
+        self.helper.send(room, body="There!", tok=other_access_token)
+
+        request, channel = self.make_request(
+            "GET", sync_url % (access_token, next_batch)
+        )
+        self.render(request)
+        self.assertEquals(200, channel.code)
+        next_batch = channel.json_body["next_batch"]
+
+        # This should time out! But it does not, because our stream token is
+        # ahead, and therefore it's saying the typing (that we've actually
+        # already seen) is new, since it's got a token above our new, now-reset
+        # stream token.
+        request, channel = self.make_request(
+            "GET", sync_url % (access_token, next_batch)
+        )
+        self.render(request)
+        self.assertEquals(200, channel.code)
+        next_batch = channel.json_body["next_batch"]
+
+        # Clear the typing information, so that it doesn't think everything is
+        # in the future.
+        typing._reset()
+
+        # Now it SHOULD fail as it never completes!
+        request, channel = self.make_request(
+            "GET", sync_url % (access_token, next_batch)
+        )
+        self.assertRaises(TimedOutException, self.render, request)
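The closing assertion of test_sync_backwards_typing leans on a harness detail introduced later in this diff: render() pumps the fake reactor until the request completes and raises TimedOutException if it never does. Any long-poll endpoint can be checked for "correctly blocks" the same way; a hedged sketch of a helper method for a HomeserverTestCase subclass (the URL is illustrative):

    from tests.server import TimedOutException

    def assert_request_times_out(self, url):
        # a /sync that has nothing new to return should hang, not answer early
        request, channel = self.make_request("GET", url)
        self.assertRaises(TimedOutException, self.render, request)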
tests/rest/media/v1/test_url_preview.py (new file, 164 lines)
@@ -0,0 +1,164 @@
+# -*- coding: utf-8 -*-
+# Copyright 2018 New Vector Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+from mock import Mock
+
+from twisted.internet.defer import Deferred
+
+from synapse.config.repository import MediaStorageProviderConfig
+from synapse.util.module_loader import load_module
+
+from tests import unittest
+
+
+class URLPreviewTests(unittest.HomeserverTestCase):
+
+    hijack_auth = True
+    user_id = "@test:user"
+
+    def make_homeserver(self, reactor, clock):
+
+        self.storage_path = self.mktemp()
+        os.mkdir(self.storage_path)
+
+        config = self.default_config()
+        config.url_preview_enabled = True
+        config.max_spider_size = 9999999
+        config.url_preview_url_blacklist = []
+        config.media_store_path = self.storage_path
+
+        provider_config = {
+            "module": "synapse.rest.media.v1.storage_provider.FileStorageProviderBackend",
+            "store_local": True,
+            "store_synchronous": False,
+            "store_remote": True,
+            "config": {"directory": self.storage_path},
+        }
+
+        loaded = list(load_module(provider_config)) + [
+            MediaStorageProviderConfig(False, False, False)
+        ]
+
+        config.media_storage_providers = [loaded]
+
+        hs = self.setup_test_homeserver(config=config)
+
+        return hs
+
+    def prepare(self, reactor, clock, hs):
+
+        self.fetches = []
+
+        def get_file(url, output_stream, max_size):
+            """
+            Returns tuple[int,dict,str,int] of file length, response headers,
+            absolute URI, and response code.
+            """
+
+            def write_to(r):
+                data, response = r
+                output_stream.write(data)
+                return response
+
+            d = Deferred()
+            d.addCallback(write_to)
+            self.fetches.append((d, url))
+            return d
+
+        client = Mock()
+        client.get_file = get_file
+
+        self.media_repo = hs.get_media_repository_resource()
+        preview_url = self.media_repo.children[b'preview_url']
+        preview_url.client = client
+        self.preview_url = preview_url
+
+    def test_cache_returns_correct_type(self):
+
+        request, channel = self.make_request(
+            "GET", "url_preview?url=matrix.org", shorthand=False
+        )
+        request.render(self.preview_url)
+        self.pump()
+
+        # We've made one fetch
+        self.assertEqual(len(self.fetches), 1)
+
+        end_content = (
+            b'<html><head>'
+            b'<meta property="og:title" content="~matrix~" />'
+            b'<meta property="og:description" content="hi" />'
+            b'</head></html>'
+        )
+
+        self.fetches[0][0].callback(
+            (
+                end_content,
+                (
+                    len(end_content),
+                    {
+                        b"Content-Length": [b"%d" % (len(end_content))],
+                        b"Content-Type": [b'text/html; charset="utf8"'],
+                    },
+                    "https://example.com",
+                    200,
+                ),
+            )
+        )
+
+        self.pump()
+        self.assertEqual(channel.code, 200)
+        self.assertEqual(
+            channel.json_body, {"og:title": "~matrix~", "og:description": "hi"}
+        )
+
+        # Check the cache returns the correct response
+        request, channel = self.make_request(
+            "GET", "url_preview?url=matrix.org", shorthand=False
+        )
+        request.render(self.preview_url)
+        self.pump()
+
+        # Only one fetch, still, since we'll lean on the cache
+        self.assertEqual(len(self.fetches), 1)
+
+        # Check the cache response has the same content
+        self.assertEqual(channel.code, 200)
+        self.assertEqual(
+            channel.json_body, {"og:title": "~matrix~", "og:description": "hi"}
+        )
+
+        # Clear the in-memory cache
+        self.assertIn("matrix.org", self.preview_url._cache)
+        self.preview_url._cache.pop("matrix.org")
+        self.assertNotIn("matrix.org", self.preview_url._cache)
+
+        # Check the database cache returns the correct response
+        request, channel = self.make_request(
+            "GET", "url_preview?url=matrix.org", shorthand=False
+        )
+        request.render(self.preview_url)
+        self.pump()
+
+        # Only one fetch, still, since we'll lean on the cache
+        self.assertEqual(len(self.fetches), 1)
+
+        # Check the cache response has the same content
+        self.assertEqual(channel.code, 200)
+        self.assertEqual(
+            channel.json_body, {"og:title": "~matrix~", "og:description": "hi"}
+        )
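The get_file stub in URLPreviewTests is a reusable trick: every outbound fetch parks a Deferred on a list, and the test fires it by hand to play the part of the remote server. A standalone sketch of the same idea, with illustrative names:

    from twisted.internet.defer import Deferred

    class FakeFetcher(object):
        """Records outbound fetches; the test resolves each one by hand."""

        def __init__(self):
            self.pending = []

        def get(self, url):
            # return an unfired Deferred; the test decides when and with
            # what body it completes
            d = Deferred()
            self.pending.append((d, url))
            return d

    # in a test:
    #   d = fetcher.get("http://matrix.org")
    #   fetcher.pending[0][0].callback(b"<html>...</html>")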
@@ -21,6 +21,12 @@ from synapse.util import Clock
 from tests.utils import setup_test_homeserver as _sth


+class TimedOutException(Exception):
+    """
+    A web query timed out.
+    """
+
+
 @attr.s
 class FakeChannel(object):
     """

@@ -28,6 +34,7 @@ class FakeChannel(object):
     wire).
     """

+    _reactor = attr.ib()
     result = attr.ib(default=attr.Factory(dict))
     _producer = None

@@ -50,6 +57,8 @@ class FakeChannel(object):
         self.result["headers"] = headers

     def write(self, content):
+        assert isinstance(content, bytes), "Should be bytes! " + repr(content)
+
         if "body" not in self.result:
             self.result["body"] = b""

@@ -57,6 +66,15 @@ class FakeChannel(object):

     def registerProducer(self, producer, streaming):
         self._producer = producer
+        self.producerStreaming = streaming
+
+        def _produce():
+            if self._producer:
+                self._producer.resumeProducing()
+                self._reactor.callLater(0.1, _produce)
+
+        if not streaming:
+            self._reactor.callLater(0.0, _produce)

     def unregisterProducer(self):
         if self._producer is None:

@@ -98,10 +116,30 @@ class FakeSite:
         return FakeLogger()


-def make_request(method, path, content=b"", access_token=None, request=SynapseRequest):
+def make_request(
+    reactor,
+    method,
+    path,
+    content=b"",
+    access_token=None,
+    request=SynapseRequest,
+    shorthand=True,
+):
     """
     Make a web request using the given method and path, feed it the
     content, and return the Request and the Channel underneath.
+
+    Args:
+        method (bytes/unicode): The HTTP request method ("verb").
+        path (bytes/unicode): The HTTP path, suitably URL encoded (e.g.
+            escaped UTF-8 & spaces and such).
+        content (bytes or dict): The body of the request. JSON-encoded, if
+            a dict.
+        shorthand: Whether to try and be helpful and prefix the given URL
+            with the usual REST API path, if it doesn't contain it.
+
+    Returns:
+        A synapse.http.site.SynapseRequest.
     """
     if not isinstance(method, bytes):
         method = method.encode('ascii')

@@ -109,8 +147,8 @@ def make_request(method, path, content=b"", access_token=None, request=SynapseRe
     if not isinstance(path, bytes):
         path = path.encode('ascii')

-    # Decorate it to be the full path
-    if not path.startswith(b"/_matrix"):
+    # Decorate it to be the full path, if we're using shorthand
+    if shorthand and not path.startswith(b"/_matrix"):
         path = b"/_matrix/client/r0/" + path
         path = path.replace(b"//", b"/")

@@ -118,14 +156,16 @@ def make_request(method, path, content=b"", access_token=None, request=SynapseRe
         content = content.encode('utf8')

     site = FakeSite()
-    channel = FakeChannel()
+    channel = FakeChannel(reactor)

     req = request(site, channel)
     req.process = lambda: b""
     req.content = BytesIO(content)

     if access_token:
-        req.requestHeaders.addRawHeader(b"Authorization", b"Bearer " + access_token)
+        req.requestHeaders.addRawHeader(
+            b"Authorization", b"Bearer " + access_token.encode('ascii')
+        )

     if content:
         req.requestHeaders.addRawHeader(b"Content-Type", b"application/json")

@@ -151,7 +191,7 @@ def wait_until_result(clock, request, timeout=100):
             x += 1

             if x > timeout:
-                raise Exception("Timed out waiting for request to finish.")
+                raise TimedOutException("Timed out waiting for request to finish.")

             clock.advance(0.1)

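make_request() now takes the reactor as its first argument so that FakeChannel can schedule the non-streaming producer pump shown above; callers using the module-level helpers directly must thread it through themselves. A hedged usage sketch against the new signature (the resource is assumed to come from a test homeserver; it is not constructed here):

    from tests.server import ThreadedMemoryReactorClock, make_request, render

    reactor = ThreadedMemoryReactorClock()
    # resource: e.g. a JsonResource built from a test homeserver (assumed)
    request, channel = make_request(reactor, b"GET", b"/_matrix/foo")
    render(request, resource, reactor)
    assert channel.result["code"] == b"200"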
@@ -4,7 +4,6 @@ from twisted.internet import defer

 from synapse.api.constants import EventTypes, ServerNoticeMsgType
 from synapse.api.errors import ResourceLimitError
-from synapse.handlers.auth import AuthHandler
 from synapse.server_notices.resource_limits_server_notices import (
     ResourceLimitsServerNotices,
 )

@@ -13,17 +12,10 @@ from tests import unittest
 from tests.utils import setup_test_homeserver


-class AuthHandlers(object):
-    def __init__(self, hs):
-        self.auth_handler = AuthHandler(hs)
-
-
 class TestResourceLimitsServerNotices(unittest.TestCase):
     @defer.inlineCallbacks
     def setUp(self):
-        self.hs = yield setup_test_homeserver(self.addCleanup, handlers=None)
-        self.hs.handlers = AuthHandlers(self.hs)
-        self.auth_handler = self.hs.handlers.auth_handler
+        self.hs = yield setup_test_homeserver(self.addCleanup)
         self.server_notices_sender = self.hs.get_server_notices_sender()

         # relying on [1] is far from ideal, but the only case where
@@ -544,8 +544,7 @@ class StateTestCase(unittest.TestCase):
             state_res_store=TestStateResolutionStore(event_map),
         )

-        self.assertTrue(state_d.called)
-        state_before = state_d.result
+        state_before = self.successResultOf(state_d)

         state_after = dict(state_before)
         if fake_event.state_key is not None:

@@ -599,6 +598,103 @@ class LexicographicalTestCase(unittest.TestCase):
         self.assertEqual(["o", "l", "n", "m", "p"], res)


+class SimpleParamStateTestCase(unittest.TestCase):
+    def setUp(self):
+        # We build up a simple DAG.
+
+        event_map = {}
+
+        create_event = FakeEvent(
+            id="CREATE",
+            sender=ALICE,
+            type=EventTypes.Create,
+            state_key="",
+            content={"creator": ALICE},
+        ).to_event([], [])
+        event_map[create_event.event_id] = create_event
+
+        alice_member = FakeEvent(
+            id="IMA",
+            sender=ALICE,
+            type=EventTypes.Member,
+            state_key=ALICE,
+            content=MEMBERSHIP_CONTENT_JOIN,
+        ).to_event([create_event.event_id], [create_event.event_id])
+        event_map[alice_member.event_id] = alice_member
+
+        join_rules = FakeEvent(
+            id="IJR",
+            sender=ALICE,
+            type=EventTypes.JoinRules,
+            state_key="",
+            content={"join_rule": JoinRules.PUBLIC},
+        ).to_event(
+            auth_events=[create_event.event_id, alice_member.event_id],
+            prev_events=[alice_member.event_id],
+        )
+        event_map[join_rules.event_id] = join_rules
+
+        # Bob and Charlie join at the same time, so there is a fork
+        bob_member = FakeEvent(
+            id="IMB",
+            sender=BOB,
+            type=EventTypes.Member,
+            state_key=BOB,
+            content=MEMBERSHIP_CONTENT_JOIN,
+        ).to_event(
+            auth_events=[create_event.event_id, join_rules.event_id],
+            prev_events=[join_rules.event_id],
+        )
+        event_map[bob_member.event_id] = bob_member
+
+        charlie_member = FakeEvent(
+            id="IMC",
+            sender=CHARLIE,
+            type=EventTypes.Member,
+            state_key=CHARLIE,
+            content=MEMBERSHIP_CONTENT_JOIN,
+        ).to_event(
+            auth_events=[create_event.event_id, join_rules.event_id],
+            prev_events=[join_rules.event_id],
+        )
+        event_map[charlie_member.event_id] = charlie_member
+
+        self.event_map = event_map
+        self.create_event = create_event
+        self.alice_member = alice_member
+        self.join_rules = join_rules
+        self.bob_member = bob_member
+        self.charlie_member = charlie_member
+
+        self.state_at_bob = {
+            (e.type, e.state_key): e.event_id
+            for e in [create_event, alice_member, join_rules, bob_member]
+        }
+
+        self.state_at_charlie = {
+            (e.type, e.state_key): e.event_id
+            for e in [create_event, alice_member, join_rules, charlie_member]
+        }
+
+        self.expected_combined_state = {
+            (e.type, e.state_key): e.event_id
+            for e in [create_event, alice_member, join_rules, bob_member, charlie_member]
+        }
+
+    def test_event_map_none(self):
+        # Test that we correctly handle passing `None` as the event_map
+
+        state_d = resolve_events_with_store(
+            [self.state_at_bob, self.state_at_charlie],
+            event_map=None,
+            state_res_store=TestStateResolutionStore(self.event_map),
+        )
+
+        state = self.successResultOf(state_d)
+
+        self.assert_dict(self.expected_combined_state, state)
+
+
 def pairwise(iterable):
     "s -> (s0,s1), (s1,s2), (s2, s3), ..."
     a, b = itertools.tee(iterable)

@@ -657,7 +753,7 @@ class TestStateResolutionStore(object):
             result.add(event_id)

             event = self.event_map[event_id]
-            for aid, _ in event.auth_events:
+            for aid in event.auth_event_ids():
                 stack.append(aid)

         return list(result)
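The one-line change in TestStateResolutionStore tracks the new event accessors noted in the changelog ("Add helpers functions for getting prev and auth events of an event", #4139): callers iterate event.auth_event_ids() rather than unpacking (event_id, hashes) pairs out of the raw auth_events field. A hedged sketch of what such an accessor looks like; this is illustrative, not the exact synapse implementation:

    class EventSketch(object):
        def __init__(self, auth_events):
            # wire format: a list of (event_id, hash_dict) pairs
            self.auth_events = auth_events

        def auth_event_ids(self):
            """Return just the auth event IDs, dropping the hashes."""
            return [event_id for event_id, _hashes in self.auth_events]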
@@ -44,6 +44,21 @@ class EndToEndKeyStoreTestCase(tests.unittest.TestCase):
         dev = res["user"]["device"]
         self.assertDictContainsSubset({"keys": json, "device_display_name": None}, dev)

+    @defer.inlineCallbacks
+    def test_reupload_key(self):
+        now = 1470174257070
+        json = {"key": "value"}
+
+        yield self.store.store_device("user", "device", None)
+
+        changed = yield self.store.set_e2e_device_keys("user", "device", now, json)
+        self.assertTrue(changed)
+
+        # If we try to upload the same key then we should be told nothing
+        # changed
+        changed = yield self.store.set_e2e_device_keys("user", "device", now, json)
+        self.assertFalse(changed)
+
     @defer.inlineCallbacks
     def test_get_key_with_device_name(self):
         now = 1470174257070
@@ -112,7 +112,7 @@ class MessageAcceptTests(unittest.TestCase):
                 "origin_server_ts": 1,
                 "type": "m.room.message",
                 "origin": "test.serv",
-                "content": "hewwo?",
+                "content": {"body": "hewwo?"},
                 "auth_events": [],
                 "prev_events": [("two:test.serv", {}), (most_recent, {})],
             }
@@ -21,30 +21,20 @@ from mock import Mock, NonCallableMock

 from synapse.api.constants import LoginType
 from synapse.api.errors import Codes, HttpResponseException, SynapseError
-from synapse.http.server import JsonResource
 from synapse.rest.client.v2_alpha import register, sync
-from synapse.util import Clock

 from tests import unittest
-from tests.server import (
-    ThreadedMemoryReactorClock,
-    make_request,
-    render,
-    setup_test_homeserver,
-)


-class TestMauLimit(unittest.TestCase):
-    def setUp(self):
-        self.reactor = ThreadedMemoryReactorClock()
-        self.clock = Clock(self.reactor)
+class TestMauLimit(unittest.HomeserverTestCase):

-        self.hs = setup_test_homeserver(
-            self.addCleanup,
+    servlets = [register.register_servlets, sync.register_servlets]
+
+    def make_homeserver(self, reactor, clock):
+
+        self.hs = self.setup_test_homeserver(
             "red",
             http_client=None,
-            clock=self.clock,
-            reactor=self.reactor,
             federation_client=Mock(),
             ratelimiter=NonCallableMock(spec_set=["send_message"]),
         )

@@ -63,10 +53,7 @@ class TestMauLimit(unittest.TestCase):
         self.hs.config.server_notices_mxid_display_name = None
         self.hs.config.server_notices_mxid_avatar_url = None
         self.hs.config.server_notices_room_name = "Test Server Notice Room"
-
-        self.resource = JsonResource(self.hs)
-        register.register_servlets(self.hs, self.resource)
-        sync.register_servlets(self.hs, self.resource)
+        return self.hs

     def test_simple_deny_mau(self):
         # Create and sync so that the MAU counts get updated

@@ -193,8 +180,8 @@ class TestMauLimit(unittest.TestCase):
             }
         )

-        request, channel = make_request("POST", "/register", request_data)
-        render(request, self.resource, self.reactor)
+        request, channel = self.make_request("POST", "/register", request_data)
+        self.render(request)

         if channel.code != 200:
             raise HttpResponseException(

@@ -206,10 +193,10 @@ class TestMauLimit(unittest.TestCase):
         return access_token

     def do_sync_for_user(self, token):
-        request, channel = make_request(
-            "GET", "/sync", access_token=token.encode('ascii')
+        request, channel = self.make_request(
+            "GET", "/sync", access_token=token
         )
-        render(request, self.resource, self.reactor)
+        self.render(request)

         if channel.code != 200:
             raise HttpResponseException(
@@ -57,7 +57,9 @@ class JsonResourceTests(unittest.TestCase):
             "GET", [re.compile("^/_matrix/foo/(?P<room_id>[^/]*)$")], _callback
         )

-        request, channel = make_request(b"GET", b"/_matrix/foo/%E2%98%83?a=%E2%98%83")
+        request, channel = make_request(
+            self.reactor, b"GET", b"/_matrix/foo/%E2%98%83?a=%E2%98%83"
+        )
         render(request, res, self.reactor)

         self.assertEqual(request.args, {b'a': [u"\N{SNOWMAN}".encode('utf8')]})

@@ -75,7 +77,7 @@ class JsonResourceTests(unittest.TestCase):
         res = JsonResource(self.homeserver)
         res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)

-        request, channel = make_request(b"GET", b"/_matrix/foo")
+        request, channel = make_request(self.reactor, b"GET", b"/_matrix/foo")
         render(request, res, self.reactor)

         self.assertEqual(channel.result["code"], b'500')

@@ -98,7 +100,7 @@ class JsonResourceTests(unittest.TestCase):
         res = JsonResource(self.homeserver)
         res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)

-        request, channel = make_request(b"GET", b"/_matrix/foo")
+        request, channel = make_request(self.reactor, b"GET", b"/_matrix/foo")
         render(request, res, self.reactor)

         self.assertEqual(channel.result["code"], b'500')

@@ -115,7 +117,7 @@ class JsonResourceTests(unittest.TestCase):
         res = JsonResource(self.homeserver)
         res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)

-        request, channel = make_request(b"GET", b"/_matrix/foo")
+        request, channel = make_request(self.reactor, b"GET", b"/_matrix/foo")
         render(request, res, self.reactor)

         self.assertEqual(channel.result["code"], b'403')

@@ -136,7 +138,7 @@ class JsonResourceTests(unittest.TestCase):
         res = JsonResource(self.homeserver)
         res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)

-        request, channel = make_request(b"GET", b"/_matrix/foobar")
+        request, channel = make_request(self.reactor, b"GET", b"/_matrix/foobar")
         render(request, res, self.reactor)

         self.assertEqual(channel.result["code"], b'400')
|
123
tests/test_terms_auth.py
Normal file
123
tests/test_terms_auth.py
Normal file
@@ -0,0 +1,123 @@
+# Copyright 2018 New Vector Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+
+import six
+from mock import Mock
+
+from twisted.test.proto_helpers import MemoryReactorClock
+
+from synapse.rest.client.v2_alpha.register import register_servlets
+from synapse.util import Clock
+
+from tests import unittest
+
+
+class TermsTestCase(unittest.HomeserverTestCase):
+    servlets = [register_servlets]
+
+    def prepare(self, reactor, clock, hs):
+        self.clock = MemoryReactorClock()
+        self.hs_clock = Clock(self.clock)
+        self.url = "/_matrix/client/r0/register"
+        self.registration_handler = Mock()
+        self.auth_handler = Mock()
+        self.device_handler = Mock()
+        hs.config.enable_registration = True
+        hs.config.registrations_require_3pid = []
+        hs.config.auto_join_rooms = []
+        hs.config.enable_registration_captcha = False
+
+    def test_ui_auth(self):
+        self.hs.config.user_consent_at_registration = True
+        self.hs.config.user_consent_policy_name = "My Cool Privacy Policy"
+        self.hs.config.public_baseurl = "https://example.org"
+        self.hs.config.user_consent_version = "1.0"
+
+        # Do a UI auth request
+        request, channel = self.make_request(b"POST", self.url, b"{}")
+        self.render(request)
+
+        self.assertEquals(channel.result["code"], b"401", channel.result)
+
+        self.assertTrue(channel.json_body is not None)
+        self.assertIsInstance(channel.json_body["session"], six.text_type)
+
+        self.assertIsInstance(channel.json_body["flows"], list)
+        for flow in channel.json_body["flows"]:
+            self.assertIsInstance(flow["stages"], list)
+            self.assertTrue(len(flow["stages"]) > 0)
+            self.assertEquals(flow["stages"][-1], "m.login.terms")
+
+        expected_params = {
+            "m.login.terms": {
+                "policies": {
+                    "privacy_policy": {
+                        "en": {
+                            "name": "My Cool Privacy Policy",
+                            "url": "https://example.org/_matrix/consent?v=1.0",
+                        },
+                        "version": "1.0"
+                    },
+                },
+            },
+        }
+        self.assertIsInstance(channel.json_body["params"], dict)
+        self.assertDictContainsSubset(channel.json_body["params"], expected_params)
+
+        # We have to complete the dummy auth stage before completing the terms stage
+        request_data = json.dumps(
+            {
+                "username": "kermit",
+                "password": "monkey",
+                "auth": {
+                    "session": channel.json_body["session"],
+                    "type": "m.login.dummy",
+                },
+            }
+        )
+
+        self.registration_handler.check_username = Mock(return_value=True)
+
+        request, channel = self.make_request(b"POST", self.url, request_data)
+        self.render(request)
+
+        # We don't bother checking that the response is correct - we'll leave that to
+        # other tests. We just want to make sure we're on the right path.
+        self.assertEquals(channel.result["code"], b"401", channel.result)
+
+        # Finish the UI auth for terms
+        request_data = json.dumps(
+            {
+                "username": "kermit",
+                "password": "monkey",
+                "auth": {
+                    "session": channel.json_body["session"],
+                    "type": "m.login.terms",
+                },
+            }
+        )
+        request, channel = self.make_request(b"POST", self.url, request_data)
+        self.render(request)
+
+        # We're interested in getting a response that looks like a successful
+        # registration, not so much that the details are exactly what we want.
+
+        self.assertEquals(channel.result["code"], b"200", channel.result)
+
+        self.assertTrue(channel.json_body is not None)
+        self.assertIsInstance(channel.json_body["user_id"], six.text_type)
+        self.assertIsInstance(channel.json_body["access_token"], six.text_type)
+        self.assertIsInstance(channel.json_body["device_id"], six.text_type)
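For orientation, here is a hedged client-side sketch of the three-step user-interactive auth (UIA) conversation the new test drives: an empty register call opens a session and returns the flows, `m.login.dummy` completes the first stage, and `m.login.terms` (the user accepting the policy) completes registration. It uses the third-party `requests` library and a hypothetical homeserver URL; it is an illustration, not Synapse code.

```python
import requests  # assumption: third-party HTTP client, not part of Synapse

URL = "https://example.org/_matrix/client/r0/register"  # hypothetical server
creds = {"username": "kermit", "password": "monkey"}

# Step 1: an empty body yields a 401 carrying the session id and the flows,
# whose final stage should be "m.login.terms".
session = requests.post(URL, json={}).json()["session"]

# Step 2: complete the dummy stage; the server still answers 401 because
# the terms stage is outstanding.
requests.post(URL, json=dict(creds, auth={"session": session, "type": "m.login.dummy"}))

# Step 3: acknowledging the terms finishes UIA; the 200 response carries
# user_id, access_token and device_id.
resp = requests.post(URL, json=dict(creds, auth={"session": session, "type": "m.login.terms"}))
print(resp.json()["user_id"])
```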
@@ -146,6 +146,13 @@ def DEBUG(target):
     return target


+def INFO(target):
+    """A decorator to set the .loglevel attribute to logging.INFO.
+    Can apply to either a TestCase or an individual test method."""
+    target.loglevel = logging.INFO
+    return target
+
+
 class HomeserverTestCase(TestCase):
     """
     A base TestCase that reduces boilerplate for HomeServer-using test cases.
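The new `INFO` decorator mirrors the existing `DEBUG` one. A short usage sketch, assuming both live alongside `HomeserverTestCase` in `tests.unittest` as the surrounding hunks suggest:

```python
from tests.unittest import INFO, HomeserverTestCase  # assumed import path


@INFO  # raise the log level to INFO for every test in the case
class NoisyTestCase(HomeserverTestCase):
    def test_something(self):
        ...


class QuietTestCase(HomeserverTestCase):
    @INFO  # or apply it to a single test method only
    def test_one_loud_path(self):
        ...
```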
@@ -182,11 +189,11 @@ class HomeserverTestCase(TestCase):
         for servlet in self.servlets:
             servlet(self.hs, self.resource)

-        if hasattr(self, "user_id"):
-            from tests.rest.client.v1.utils import RestHelper
+        from tests.rest.client.v1.utils import RestHelper

-            self.helper = RestHelper(self.hs, self.resource, self.user_id)
+        self.helper = RestHelper(self.hs, self.resource, getattr(self, "user_id", None))

+        if hasattr(self, "user_id"):
             if self.hijack_auth:

                 def get_user_by_access_token(token=None, allow_guest=False):
@@ -251,7 +258,13 @@ class HomeserverTestCase(TestCase):
         """

     def make_request(
-        self, method, path, content=b"", access_token=None, request=SynapseRequest
+        self,
+        method,
+        path,
+        content=b"",
+        access_token=None,
+        request=SynapseRequest,
+        shorthand=True,
     ):
         """
         Create a SynapseRequest at the path using the method and containing the
@@ -263,6 +276,8 @@ class HomeserverTestCase(TestCase):
             escaped UTF-8 & spaces and such).
             content (bytes or dict): The body of the request. JSON-encoded, if
             a dict.
+            shorthand: Whether to try and be helpful and prefix the given URL
+            with the usual REST API path, if it doesn't contain it.

         Returns:
             A synapse.http.site.SynapseRequest.
@@ -270,7 +285,9 @@ class HomeserverTestCase(TestCase):
         if isinstance(content, dict):
             content = json.dumps(content).encode('utf8')

-        return make_request(method, path, content, access_token, request)
+        return make_request(
+            self.reactor, method, path, content, access_token, request, shorthand
+        )

     def render(self, request):
         """
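The `shorthand` flag is only declared and threaded through here; the prefixing itself happens in the standalone `make_request` helper, which this diff does not show. A hedged guess at the behaviour the docstring describes, using a hypothetical `apply_shorthand` name:

```python
def apply_shorthand(path: bytes, shorthand: bool = True) -> bytes:
    # Hypothetical helper illustrating the documented behaviour; the real
    # logic in the helper module may differ in detail.
    if shorthand and not path.startswith(b"/_matrix"):
        # Prefix bare paths with the usual client API root...
        return b"/_matrix/client/r0/" + path.lstrip(b"/")
    # ...but leave already-qualified paths untouched.
    return path


assert apply_shorthand(b"register") == b"/_matrix/client/r0/register"
assert apply_shorthand(b"/_matrix/foo") == b"/_matrix/foo"
```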
@@ -373,5 +390,5 @@ class HomeserverTestCase(TestCase):
         self.render(request)
         self.assertEqual(channel.code, 200)

-        access_token = channel.json_body["access_token"].encode('ascii')
+        access_token = channel.json_body["access_token"]
         return access_token
@@ -123,6 +123,8 @@ def default_config(name):
     config.user_directory_search_all_users = False
     config.user_consent_server_notice_content = None
     config.block_events_without_consent_error = None
+    config.user_consent_at_registration = False
+    config.user_consent_policy_name = "Privacy Policy"
     config.media_storage_providers = []
     config.autocreate_auto_join_rooms = True
     config.auto_join_rooms = []
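These two defaults pair with the terms test above: `user_consent_at_registration` stays off unless a test opts in, and `user_consent_policy_name` feeds the policy name surfaced in the `m.login.terms` params. As a sanity check, the consent URL asserted in `test_terms_auth.py` can be derived from the config values the test sets; `consent_url` below is a hypothetical helper for illustration, not Synapse's actual builder.

```python
def consent_url(public_baseurl, user_consent_version):
    # Hypothetical reconstruction of the URL format asserted in the test.
    return "%s/_matrix/consent?v=%s" % (public_baseurl.rstrip("/"), user_consent_version)


assert (
    consent_url("https://example.org", "1.0")
    == "https://example.org/_matrix/consent?v=1.0"
)
```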
tox.ini (16 lines changed)
@@ -11,6 +11,20 @@ deps =
     # needed by some of the tests
     lxml

+    # cryptography 2.2 requires setuptools >= 18.5
+    #
+    # older versions of virtualenv (?) give us a virtualenv with the same
+    # version of setuptools as is installed on the system python (and tox runs
+    # virtualenv under python3, so we get the version of setuptools that is
+    # installed on that).
+    #
+    # anyway, make sure that we have a recent enough setuptools.
+    setuptools>=18.5
+
+    # we also need a semi-recent version of pip, because old ones fail to
+    # install the "enum34" dependency of cryptography.
+    pip>=10
+
 setenv =
     PYTHONDONTWRITEBYTECODE = no_byte_code
@@ -108,7 +122,7 @@ skip_install = True
 basepython = python3.6
 deps =
     flake8
-commands = /bin/sh -c "flake8 synapse tests scripts scripts-dev scripts/register_new_matrix_user scripts/synapse_port_db synctl {env:PEP8SUFFIX:}"
+commands = /bin/sh -c "flake8 synapse tests scripts scripts-dev scripts/hash_password scripts/register_new_matrix_user scripts/synapse_port_db synctl {env:PEP8SUFFIX:}"

 [testenv:check_isort]
 skip_install = True