Merge remote-tracking branch 'origin/dinsic' into dbkr/profile_replication

This commit is contained in:
David Baker 2018-04-16 18:35:25 +01:00
commit 3c446d0a81
116 changed files with 2285 additions and 785 deletions

View File

@ -1,11 +1,144 @@
Changes in synapse v0.27.3-rc2 (2018-04-09)
===========================================

v0.27.3-rc1 used a stale version of the develop branch, so the changelog overstates
the functionality. v0.27.3-rc2 is up to date; rc1 should be ignored.

Changes in synapse v0.27.3-rc1 (2018-04-09)
===========================================

Notable changes include API support for joinability of groups, as well as new
metrics and phone home stats. Phone home stats include better visibility of
system usage so we can tweak synapse to work better for all users rather than
from our own experience with matrix.org alone. We also now record the 'r30'
stat, which is the measure we use to track overall growth of the Matrix
ecosystem. It is defined as follows:
Counts the number of native 30 day retained users, defined as:

* Users who have created their accounts more than 30 days ago
* Where last seen at most 30 days ago
* Where account creation and last_seen are > 30 days apart
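For illustration only (this is not the actual query Synapse runs; the real
computation lives in the ``count_r30_users`` store method), the three
conditions amount to something like:

.. code:: python

    THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000

    def counts_towards_r30(creation_ts_ms, last_seen_ts_ms, now_ms):
        # Account is older than 30 days, the user was seen within the last
        # 30 days, and creation and last_seen are more than 30 days apart.
        return (
            now_ms - creation_ts_ms > THIRTY_DAYS_MS
            and now_ms - last_seen_ts_ms <= THIRTY_DAYS_MS
            and last_seen_ts_ms - creation_ts_ms > THIRTY_DAYS_MS
        )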
Features:
* Add joinability for groups (PR #3045)
* Implement group join API (PR #3046)
* Add counter metrics for calculating state delta (PR #3033)
* R30 stats (PR #3041)
* Measure time it takes to calculate state group ID (PR #3043)
* Add basic performance statistics to phone home (PR #3044)
* Add response size metrics (PR #3071)
* Phone home cache size configurations (PR #3063)
Changes:
* Add a blurb explaining the main synapse worker (PR #2886) Thanks to @turt2live!
* Replace old style error catching with 'as' keyword (PR #3000) Thanks to @NotAFile!
* Use .iter* to avoid copies in StateHandler (PR #3006)
* Linearize calls to _generate_user_id (PR #3029)
* Remove last usage of ujson (PR #3030)
* Use simplejson throughout (PR #3048)
* Use static JSONEncoders (PR #3049)
* Remove uses of events.content (PR #3060)
* Improve database cache performance (PR #3068)
Bug fixes:

* Add room_id to the response of `rooms/{roomId}/join` (PR #2986) Thanks to @jplatte!
* Fix replication after switch to simplejson (PR #3015)
* 404 correctly on missing paths via NoResource (PR #3022)
* Fix error when claiming e2e keys from offline servers (PR #3034)
* Fix tests/storage/test_user_directory.py (PR #3042)
* Use PUT instead of POST for federating groups/m.join_policy (PR #3070) Thanks to @krombel!
* Postgres port script: fix state_groups_pkey error (PR #3072)
Changes in synapse v0.27.2 (2018-03-26)
=======================================
Bug fixes:
* Fix bug which broke TCP replication between workers (PR #3015)
Changes in synapse v0.27.1 (2018-03-26)
=======================================
Meta release, as v0.27.0 temporarily pointed to the wrong commit.
Changes in synapse v0.27.0 (2018-03-26)
=======================================
No changes since v0.27.0-rc2
Changes in synapse v0.27.0-rc2 (2018-03-19)
===========================================
Pulls in v0.26.1
Bug fixes:
* Fix bug introduced in v0.27.0-rc1 that causes much increased memory usage in state cache (PR #3005)
Changes in synapse v0.26.1 (2018-03-15)
=======================================
Bug fixes:
* Fix bug where an invalid event caused the server to stop functioning correctly,
  due to parsing and serializing bugs in the ujson library (PR #3008)
Changes in synapse v0.27.0-rc1 (2018-03-14)
===========================================
The common case for running Synapse is not to run separate workers, but for those that do, be aware that synctl no longer starts the main synapse when using ``-a`` option with workers. A new worker file should be added with ``worker_app: synapse.app.homeserver``.
This release also begins the process of renaming a number of the metrics
reported to prometheus. See `docs/metrics-howto.rst <docs/metrics-howto.rst#block-and-response-metrics-renamed-for-0-27-0>`_.
Note that the v0.28.0 release will remove the deprecated metric names.
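For example (taken from the console and recording-rule updates later in this
commit), the old request counter and response-time metrics are renamed::

    synapse_http_server_requests:servlet     ->  synapse_http_server_request_count:servlet
    synapse_http_server_response_time:total  ->  synapse_http_server_response_time_seconds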
Features:
* Add ability for ASes to override message send time (PR #2754)
* Add support for custom storage providers for media repository (PR #2867, #2777, #2783, #2789, #2791, #2804, #2812, #2814, #2857, #2868, #2767)
* Add purge API features, see `docs/admin_api/purge_history_api.rst <docs/admin_api/purge_history_api.rst>`_ for full details (PR #2858, #2867, #2882, #2946, #2962, #2943)
* Add support for whitelisting 3PIDs that users can register. (PR #2813)
* Add ``/room/{id}/event/{id}`` API (PR #2766)
* Add an admin API to get all the media in a room (PR #2818) Thanks to @turt2live!
* Add ``federation_domain_whitelist`` option (PR #2820, #2821)
Changes:
* Continue to factor out processing from main process and into worker processes. See updated `docs/workers.rst <docs/workers.rst>`_ (PR #2892 - #2904, #2913, #2920 - #2926, #2947, #2847, #2854, #2872, #2873, #2874, #2928, #2929, #2934, #2856, #2976 - #2984, #2987 - #2989, #2991 - #2993, #2995, #2784)
* Ensure state cache is used when persisting events (PR #2864, #2871, #2802, #2835, #2836, #2841, #2842, #2849)
* Change the default config to bind on both IPv4 and IPv6 on all platforms (PR #2435) Thanks to @silkeh!
* No longer require a specific version of saml2 (PR #2695) Thanks to @okurz!
* Remove ``verbosity``/``log_file`` from generated config (PR #2755)
* Add and improve metrics and logging (PR #2770, #2778, #2785, #2786, #2787, #2793, #2794, #2795, #2809, #2810, #2833, #2834, #2844, #2965, #2927, #2975, #2790, #2796, #2838)
* When using synctl with workers, don't start the main synapse automatically (PR #2774)
* Minor performance improvements (PR #2773, #2792)
* Use a connection pool for non-federation outbound connections (PR #2817)
* Make it possible to run unit tests against postgres (PR #2829)
* Update pynacl dependency to 1.2.1 or higher (PR #2888) Thanks to @bachp!
* Remove ability for AS users to call /events and /sync (PR #2948)
* Use bcrypt.checkpw (PR #2949) Thanks to @krombel!
Bug fixes:
* Fix broken ``ldap_config`` config option (PR #2683) Thanks to @seckrv!
* Fix error message when user is not allowed to unban (PR #2761) Thanks to @turt2live!
* Fix publicised groups GET API (singular) over federation (PR #2772)
* Fix user directory when using ``user_directory_search_all_users`` config option (PR #2803, #2831)
* Fix error on ``/publicRooms`` when no rooms exist (PR #2827)
* Fix bug in quarantine_media (PR #2837)
* Fix url_previews when no Content-Type is returned from URL (PR #2845)
* Fix rare race in sync API when joining room (PR #2944)
* Fix slow event search, switch back from GIST to GIN indexes (PR #2769, #2848)
Changes in synapse v0.26.0 (2018-01-05)

View File

@ -30,8 +30,12 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `Jenkins <http://matrix.org/jenkins>`_ and
`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an administrator to start them. If your change
breaks the build, this will be shown in GitHub, so please keep an eye on the
pull request for feedback.
Code style
~~~~~~~~~~
@ -115,4 +119,4 @@ can't be accepted. Git makes this trivial - just use the -s flag when you do
Conclusion
~~~~~~~~~~

That's it! Matrix is a very open and collaborative project as you might expect given our obsession with open communication. If we're going to successfully matrix together all the fragmented communication technologies out there we are reliant on contributions and collaboration from the community to do so. So please get involved - and we hope you have as much fun hacking on Matrix as we do!

View File

@ -354,6 +354,10 @@ https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one
Fedora
------
Synapse is in the Fedora repositories as ``matrix-synapse``::

    sudo dnf install matrix-synapse
Oleg Girko provides Fedora RPMs at
https://obs.infoserver.lv/project/monitor/matrix-synapse
@ -890,6 +894,17 @@ This should end with a 'PASSED' result::
    PASSED (successes=143)
Running the Integration Tests
=============================
Synapse is accompanied by `SyTest <https://github.com/matrix-org/sytest>`_,
a Matrix homeserver integration testing suite, which uses HTTP requests to
access the API as a Matrix client would. It is able to run Synapse directly from
the source tree, so installation of the server is not required.
Testing with SyTest is recommended for verifying that changes related to the
Client-Server API are functioning correctly. See the `installation instructions
<https://github.com/matrix-org/sytest#installing>`_ for details.
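A typical local run looks something like this (illustrative; the SyTest README
is the authoritative reference for the exact steps)::

    git clone https://github.com/matrix-org/sytest
    cd sytest
    ./install-deps.pl
    ./run-tests.pl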
Building Internal API Documentation
===================================

View File

@ -48,6 +48,18 @@ returned by the Client-Server API:
    # configured on port 443.
    curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to $NEXT_VERSION
==========================
This release expands the anonymous usage stats sent if the opt-in
``report_stats`` configuration is set to ``true``. We now capture RSS memory
and cpu use at a very coarse level. This requires administrators to install
the optional ``psutil`` python module.
We would appreciate it if you could assist by ensuring this module is available
and ``report_stats`` is enabled. This will let us see if performance changes to
synapse are having an impact on the general community.
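On most installations the module can be added with::

    pip install psutil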
Upgrading to v0.15.0
====================

View File

@ -22,6 +22,8 @@ import argparse
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze

from six import string_types


def make_graph(file_name, room_id, file_prefix, limit):
    print "Reading lines"
@ -58,7 +60,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
    for key, value in unfreeze(event.get_dict()["content"]).items():
        if value is None:
            value = "<null>"
        elif isinstance(value, string_types):
            pass
        else:
            value = json.dumps(value)

View File

@ -202,11 +202,11 @@ new PromConsole.Graph({
<h1>Requests</h1>

<h3>Requests by Servlet</h3>
<div id="synapse_http_server_request_count_servlet"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_request_count_servlet"),
  expr: "rate(synapse_http_server_request_count:servlet[2m])",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
@ -215,11 +215,11 @@ new PromConsole.Graph({
})
</script>

<h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
<div id="synapse_http_server_request_count_servlet_minus_events"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_request_count_servlet_minus_events"),
  expr: "rate(synapse_http_server_request_count:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
@ -233,7 +233,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_response_time_avg"),
  expr: "rate(synapse_http_server_response_time_seconds[2m]) / rate(synapse_http_server_response_count[2m]) / 1000",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
@ -276,7 +276,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_response_ru_utime"),
  expr: "rate(synapse_http_server_response_ru_utime_seconds[2m])",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
@ -291,7 +291,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
  expr: "rate(synapse_http_server_response_db_txn_duration_seconds[2m])",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
@ -306,7 +306,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
  node: document.querySelector("#synapse_http_server_send_time_avg"),
  expr: "rate(synapse_http_server_response_time_seconds{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
  name: "[[servlet]]",
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,

View File

@ -1,10 +1,10 @@
synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
synapse_http_server_request_count:method{servlet=""} = sum(synapse_http_server_request_count) by (method)
synapse_http_server_request_count:servlet{method=""} = sum(synapse_http_server_request_count) by (servlet)
synapse_http_server_request_count:total{servlet=""} = sum(synapse_http_server_request_count:by_method) by (servlet)
synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])

View File

@ -5,19 +5,19 @@ groups:
expr: "sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)" expr: "sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)"
- record: "synapse_federation_transaction_queue_pendingPdus:total" - record: "synapse_federation_transaction_queue_pendingPdus:total"
expr: "sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)" expr: "sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)"
- record: 'synapse_http_server_requests:method' - record: 'synapse_http_server_request_count:method'
labels: labels:
servlet: "" servlet: ""
expr: "sum(synapse_http_server_requests) by (method)" expr: "sum(synapse_http_server_request_count) by (method)"
- record: 'synapse_http_server_requests:servlet' - record: 'synapse_http_server_request_count:servlet'
labels: labels:
method: "" method: ""
expr: 'sum(synapse_http_server_requests) by (servlet)' expr: 'sum(synapse_http_server_request_count) by (servlet)'
- record: 'synapse_http_server_requests:total' - record: 'synapse_http_server_request_count:total'
labels: labels:
servlet: "" servlet: ""
expr: 'sum(synapse_http_server_requests:by_method) by (servlet)' expr: 'sum(synapse_http_server_request_count:by_method) by (servlet)'
- record: 'synapse_cache:hit_ratio_5m' - record: 'synapse_cache:hit_ratio_5m'
expr: 'rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])' expr: 'rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])'

View File

@ -16,9 +16,11 @@ including an ``access_token`` of a server admin.
By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
deleted.)

Room state data (such as joins, leaves, topic) is always preserved.

To delete local message events as well, set ``delete_local_events`` in the body:

.. code:: json
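    {
        "delete_local_events": true
    }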

View File

@ -55,7 +55,12 @@ synapse process.)
You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
subdirectory, to allow synctl to manipulate them. An additional configuration
for the master synapse process will need to be created because the process will
not be started automatically. That configuration should look like this::

    worker_app: synapse.app.homeserver
    daemonize: true
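With those configs in place, the master process and the workers can then be
started together with synctl's ``-a`` option; for example, assuming the worker
configuration files live in a ``workers`` subdirectory (name illustrative)::

    synctl -a workers start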
Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
@ -230,9 +235,11 @@ file. For example::
``synapse.app.event_creator``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles some event creation. It can handle REST endpoints matching::

    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
    ^/_matrix/client/(api/v1|r0|unstable)/join/

It will create events locally and then send them on to the main synapse
instance to be persisted and handled.
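A minimal configuration for this worker might look like the following sketch
(the port number and replication settings here are illustrative assumptions,
not prescribed values)::

    worker_app: synapse.app.event_creator

    worker_replication_host: 127.0.0.1
    worker_replication_http_port: 9093

    worker_listeners:
      - type: http
        port: 8085
        resources:
          - names: [client]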

View File

@ -1,6 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -29,6 +30,8 @@ import time
import traceback
import yaml

from six import string_types

logger = logging.getLogger("synapse_port_db")
@ -250,6 +253,12 @@ class Porter(object):
    @defer.inlineCallbacks
    def handle_table(self, table, postgres_size, table_size, forward_chunk,
                     backward_chunk):
        logger.info(
            "Table %s: %i/%i (rows %i-%i) already ported",
            table, postgres_size, table_size,
            backward_chunk + 1, forward_chunk - 1,
        )

        if not table_size:
            return
@ -467,31 +476,10 @@ class Porter(object):
self.progress.set_state("Preparing PostgreSQL") self.progress.set_state("Preparing PostgreSQL")
self.setup_db(postgres_config, postgres_engine) self.setup_db(postgres_config, postgres_engine)
# Step 2. Get tables. self.progress.set_state("Creating port tables")
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store._simple_select_onecol(
table="sqlite_master",
keyvalues={
"type": "table",
},
retcol="name",
)
postgres_tables = yield self.postgres_store._simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
)
tables = set(sqlite_tables) & set(postgres_tables)
self.progress.set_state("Creating tables")
logger.info("Found %d tables", len(tables))
def create_port_table(txn): def create_port_table(txn):
txn.execute( txn.execute(
"CREATE TABLE port_from_sqlite3 (" "CREATE TABLE IF NOT EXISTS port_from_sqlite3 ("
" table_name varchar(100) NOT NULL UNIQUE," " table_name varchar(100) NOT NULL UNIQUE,"
" forward_rowid bigint NOT NULL," " forward_rowid bigint NOT NULL,"
" backward_rowid bigint NOT NULL" " backward_rowid bigint NOT NULL"
@ -517,18 +505,33 @@ class Porter(object):
"alter_table", alter_table "alter_table", alter_table
) )
except Exception as e: except Exception as e:
logger.info("Failed to create port table: %s", e) pass
try: yield self.postgres_store.runInteraction(
yield self.postgres_store.runInteraction( "create_port_table", create_port_table
"create_port_table", create_port_table )
)
except Exception as e:
logger.info("Failed to create port table: %s", e)
self.progress.set_state("Setting up") # Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store._simple_select_onecol(
table="sqlite_master",
keyvalues={
"type": "table",
},
retcol="name",
)
# Set up tables. postgres_tables = yield self.postgres_store._simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
)
tables = set(sqlite_tables) & set(postgres_tables)
logger.info("Found %d tables", len(tables))
# Step 3. Figure out what still needs copying
self.progress.set_state("Checking on port progress")
setup_res = yield defer.gatherResults( setup_res = yield defer.gatherResults(
[ [
self.setup_table(table) self.setup_table(table)
@ -539,7 +542,8 @@ class Porter(object):
            consumeErrors=True,
        )

        # Step 4. Do the copying.
        self.progress.set_state("Copying to postgres")
        yield defer.gatherResults(
            [
                self.handle_table(*res)
@ -548,6 +552,9 @@ class Porter(object):
            consumeErrors=True,
        )

        # Step 5. Do final post-processing
        yield self._setup_state_group_id_seq()

        self.progress.done()
    except:
        global end_error_exec_info
@ -569,7 +576,7 @@ class Porter(object):
        def conv(j, col):
            if j in bool_cols:
                return bool(col)
            elif isinstance(col, string_types) and "\0" in col:
                logger.warn("DROPPING ROW: NUL value in table %s col %s: %r", table, headers[j], col)
                raise BadValueException()
            return col
@ -707,6 +714,16 @@ class Porter(object):
        defer.returnValue((done, remaining + done))

    def _setup_state_group_id_seq(self):
        def r(txn):
            # Move the sequence past the highest state group id already
            # ported, so newly allocated ids don't collide with existing rows.
            txn.execute("SELECT MAX(id) FROM state_groups")
            next_id = txn.fetchone()[0] + 1
            txn.execute(
                "ALTER SEQUENCE state_group_id_seq RESTART WITH %s",
                (next_id,),
            )

        return self.postgres_store.runInteraction("setup_state_group_id_seq", r)
##############################################
###### The following is simply UI stuff ######

View File

@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server. """ This is a reference implementation of a Matrix home server.
""" """
__version__ = "0.26.0" __version__ = "0.27.3-rc2"

View File

@ -204,8 +204,8 @@ class Auth(object):
            ip_addr = self.hs.get_ip_from_request(request)
            user_agent = request.requestHeaders.getRawHeaders(
                b"User-Agent",
                default=[b""]
            )[0]
            if user and access_token and ip_addr:
                self.store.insert_client_ip(
@ -672,7 +672,7 @@ def has_access_token(request):
        bool: False if no access_token was given, True otherwise.
    """
    query_params = request.args.get("access_token")
    auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
    return bool(query_params) or bool(auth_headers)
@ -692,8 +692,8 @@ def get_access_token_from_request(request, token_not_found_http_status=401):
        AuthError: If there isn't an access_token in the request.
    """
    # Note: twisted keys raw headers and request.args by byte strings,
    # hence the b"..." literals here.
    auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
    query_params = request.args.get(b"access_token")
    if auth_headers:
        # Try to get the access_token from an "Authorization: Bearer"
        # header

View File

@ -15,9 +15,10 @@
"""Contains exceptions and error codes.""" """Contains exceptions and error codes."""
import json
import logging import logging
import simplejson as json
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)

View File

@ -17,7 +17,7 @@ from synapse.storage.presence import UserPresenceState
from synapse.types import UserID, RoomID
from twisted.internet import defer

import simplejson as json
import jsonschema
from jsonschema import FormatChecker

View File

@ -36,7 +36,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.appservice")
@ -64,7 +64,7 @@ class AppserviceServer(HomeServer):
if name == "metrics": if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self) resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource()) root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp( _base.listen_tcp(
bind_addresses, bind_addresses,

View File

@ -44,7 +44,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.client_reader")
@ -88,7 +88,7 @@ class ClientReaderServer(HomeServer):
"/_matrix/client/api/v1": resource, "/_matrix/client/api/v1": resource,
}) })
root_resource = create_resource_tree(resources, Resource()) root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp( _base.listen_tcp(
bind_addresses, bind_addresses,

View File

@ -31,14 +31,20 @@ from synapse.replication.slave.storage.account_data import SlavedAccountDataStor
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import (
    RoomSendEventRestServlet, RoomMembershipRestServlet, RoomStateEventRestServlet,
    JoinRoomAliasServlet,
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
@ -46,12 +52,15 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.event_creator")


class EventCreatorSlavedStore(
    DirectoryStore,
    TransactionStore,
    SlavedProfileStore,
    SlavedAccountDataStore,
    SlavedPusherStore,
    SlavedReceiptsStore,
@ -85,6 +94,9 @@ class EventCreatorServer(HomeServer):
elif name == "client": elif name == "client":
resource = JsonResource(self, canonical_json=False) resource = JsonResource(self, canonical_json=False)
RoomSendEventRestServlet(self).register(resource) RoomSendEventRestServlet(self).register(resource)
RoomMembershipRestServlet(self).register(resource)
RoomStateEventRestServlet(self).register(resource)
JoinRoomAliasServlet(self).register(resource)
resources.update({ resources.update({
"/_matrix/client/r0": resource, "/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource, "/_matrix/client/unstable": resource,
@ -92,7 +104,7 @@ class EventCreatorServer(HomeServer):
"/_matrix/client/api/v1": resource, "/_matrix/client/api/v1": resource,
}) })
root_resource = create_resource_tree(resources, Resource()) root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp( _base.listen_tcp(
bind_addresses, bind_addresses,

View File

@ -41,7 +41,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.federation_reader")
@ -77,7 +77,7 @@ class FederationReaderServer(HomeServer):
            FEDERATION_PREFIX: TransportLayerServer(self),
        })

        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,

View File

@ -42,7 +42,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.federation_sender")
@ -91,7 +91,7 @@ class FederationSenderServer(HomeServer):
if name == "metrics": if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self) resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource()) root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp( _base.listen_tcp(
bind_addresses, bind_addresses,

View File

@ -44,7 +44,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.frontend_proxy")
@ -90,7 +90,7 @@ class KeyUploadServlet(RestServlet):
        # They're actually trying to upload something, proxy to main synapse.
        # Pass through the auth headers, if any, in case the access token
        # is there.
        auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
        headers = {
            "Authorization": auth_headers,
        }
@ -142,7 +142,7 @@ class FrontendProxyServer(HomeServer):
"/_matrix/client/api/v1": resource, "/_matrix/client/api/v1": resource,
}) })
root_resource = create_resource_tree(resources, Resource()) root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp( _base.listen_tcp(
bind_addresses, bind_addresses,

View File

@ -48,6 +48,7 @@ from synapse.server import HomeServer
from synapse.storage import are_all_users_on_domain
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
@ -56,7 +57,7 @@ from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
@ -126,7 +127,7 @@ class SynapseHomeServer(HomeServer):
        if WEB_CLIENT_PREFIX in resources:
            root_resource = RootRedirect(WEB_CLIENT_PREFIX)
        else:
            root_resource = NoResource()

        root_resource = create_resource_tree(resources, root_resource)
@ -402,6 +403,10 @@ def run(hs):
    stats = {}

    # Contains the list of processes we will be monitoring
    # currently either 0 or 1
    stats_process = []

    @defer.inlineCallbacks
    def phone_stats_home():
        logger.info("Gathering stats for reporting")
@ -425,8 +430,21 @@ def run(hs):
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms() stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages() stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in r30_results.iteritems():
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages() daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages stats["daily_sent_messages"] = daily_sent_messages
stats["cache_factor"] = CACHE_SIZE_FACTOR
stats["event_cache_size"] = hs.config.event_cache_size
if len(stats_process) > 0:
stats["memory_rss"] = 0
stats["cpu_average"] = 0
for process in stats_process:
stats["memory_rss"] += process.memory_info().rss
stats["cpu_average"] += int(process.cpu_percent(interval=None))
logger.info("Reporting stats to matrix.org: %s" % (stats,)) logger.info("Reporting stats to matrix.org: %s" % (stats,))
try: try:
@ -437,10 +455,32 @@ def run(hs):
        except Exception as e:
            logger.warn("Error reporting stats: %s", e)

    def performance_stats_init():
        try:
            import psutil
            process = psutil.Process()
            # Ensure we can fetch both, and make the initial request for cpu_percent
            # so the next request will use this as the initial point.
            process.memory_info().rss
            process.cpu_percent(interval=None)
            logger.info("report_stats can use psutil")
            stats_process.append(process)
        except (ImportError, AttributeError):
            logger.warn(
                "report_stats enabled but psutil is not installed or incorrect version."
                " Disabling reporting of memory/cpu stats."
                " Ensuring psutil is available will help matrix.org track performance"
                " changes across releases."
            )

    if hs.config.report_stats:
        logger.info("Scheduling stats reporting for 3 hour intervals")
        clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)

        # We need to defer this init for the cases that we daemonize
        # otherwise the process ID we get is that of the non-daemon process
        clock.call_later(0, performance_stats_init)

        # We wait 5 minutes to send the first set of stats as the server can
        # be quite busy the first few minutes
        clock.call_later(5 * 60, phone_stats_home)

View File

@ -43,7 +43,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.media_repository")
@ -84,7 +84,7 @@ class MediaRepositoryServer(HomeServer):
            ),
        })

        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,

View File

@ -37,7 +37,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.pusher")
@ -94,7 +94,7 @@ class PusherServer(HomeServer):
if name == "metrics": if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self) resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource()) root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp( _base.listen_tcp(
bind_addresses, bind_addresses,

View File

@ -56,7 +56,7 @@ from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.synchrotron")
@ -269,7 +269,7 @@ class SynchrotronServer(HomeServer):
"/_matrix/client/api/v1": resource, "/_matrix/client/api/v1": resource,
}) })
root_resource = create_resource_tree(resources, Resource()) root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp( _base.listen_tcp(
bind_addresses, bind_addresses,

View File

@ -38,7 +38,7 @@ def pid_running(pid):
    try:
        os.kill(pid, 0)
        return True
    except OSError as err:
        if err.errno == errno.EPERM:
            return True
        return False
@ -98,7 +98,7 @@ def stop(pidfile, app):
    try:
        os.kill(pid, signal.SIGTERM)
        write("stopped %s" % (app,), colour=GREEN)
    except OSError as err:
        if err.errno == errno.ESRCH:
            write("%s not running" % (app,), colour=YELLOW)
        elif err.errno == errno.EPERM:
@ -252,6 +252,7 @@ def main():
            for running_pid in running_pids:
                while pid_running(running_pid):
                    time.sleep(0.2)
            write("All processes exited; now restarting...")

    if action == "start" or action == "restart":
        if start_stop_synapse:

View File

@ -43,7 +43,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.user_dir")
@ -116,7 +116,7 @@ class UserDirectoryServer(HomeServer):
"/_matrix/client/api/v1": resource, "/_matrix/client/api/v1": resource,
}) })
root_resource = create_resource_tree(resources, Resource()) root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp( _base.listen_tcp(
bind_addresses, bind_addresses,

View File

@ -21,6 +21,8 @@ from twisted.internet import defer
import logging
import re

from six import string_types

logger = logging.getLogger(__name__)
@ -146,7 +148,7 @@ class ApplicationService(object):
            )

            regex = regex_obj.get("regex")
            if isinstance(regex, string_types):
                regex_obj["regex"] = re.compile(regex)  # Pre-compile regex
            else:
                raise ValueError(

View File

@ -73,7 +73,8 @@ class ApplicationServiceApi(SimpleHttpClient):
        super(ApplicationServiceApi, self).__init__(hs)
        self.clock = hs.get_clock()

        self.protocol_meta_cache = ResponseCache(hs, "as_protocol_meta",
                                                 timeout_ms=HOUR_IN_MS)
    @defer.inlineCallbacks
    def query_user(self, service, user_id):

View File

@ -19,6 +19,8 @@ import os
import yaml
from textwrap import dedent

from six import integer_types


class ConfigError(Exception):
    pass
@ -49,7 +51,7 @@ Missing mandatory `server_name` config option.
class Config(object):
    @staticmethod
    def parse_size(value):
        if isinstance(value, integer_types):
            return value
        sizes = {"K": 1024, "M": 1024 * 1024}
        size = 1
@ -61,7 +63,7 @@ class Config(object):
    @staticmethod
    def parse_duration(value):
        if isinstance(value, integer_types):
            return value
        second = 1000
        minute = 60 * second
@ -288,22 +290,22 @@ class Config(object):
            )
            obj.invoke_all("generate_files", config)
            config_file.write(config_bytes)
            print((
                "A config file has been generated in %r for server name"
                " %r with corresponding SSL keys and self-signed"
                " certificates. Please review this file and customise it"
                " to your needs."
            ) % (config_path, server_name))
            print(
                "If this server name is incorrect, you will need to"
                " regenerate the SSL certificates"
            )
            return
        else:
            print((
                "Config file %r already exists. Generating any missing key"
                " files."
            ) % (config_path,))
            generate_keys = True

        parser = argparse.ArgumentParser(

View File

@ -21,6 +21,8 @@ import urllib
import yaml
import logging

from six import string_types

logger = logging.getLogger(__name__)
@ -89,14 +91,14 @@ def _load_appservice(hostname, as_info, config_filename):
"id", "as_token", "hs_token", "sender_localpart" "id", "as_token", "hs_token", "sender_localpart"
] ]
for field in required_string_fields: for field in required_string_fields:
if not isinstance(as_info.get(field), basestring): if not isinstance(as_info.get(field), string_types):
raise KeyError("Required string field: '%s' (%s)" % ( raise KeyError("Required string field: '%s' (%s)" % (
field, config_filename, field, config_filename,
)) ))
# 'url' must either be a string or explicitly null, not missing # 'url' must either be a string or explicitly null, not missing
# to avoid accidentally turning off push for ASes. # to avoid accidentally turning off push for ASes.
if (not isinstance(as_info.get("url"), basestring) and if (not isinstance(as_info.get("url"), string_types) and
as_info.get("url", "") is not None): as_info.get("url", "") is not None):
raise KeyError( raise KeyError(
"Required string field or explicit null: 'url' (%s)" % (config_filename,) "Required string field or explicit null: 'url' (%s)" % (config_filename,)
@ -128,7 +130,7 @@ def _load_appservice(hostname, as_info, config_filename):
"Expected namespace entry in %s to be an object," "Expected namespace entry in %s to be an object,"
" but got %s", ns, regex_obj " but got %s", ns, regex_obj
) )
if not isinstance(regex_obj.get("regex"), basestring): if not isinstance(regex_obj.get("regex"), string_types):
raise ValueError( raise ValueError(
"Missing/bad type 'regex' key in %s", regex_obj "Missing/bad type 'regex' key in %s", regex_obj
) )

View File

@ -95,7 +95,9 @@ class RegistrationConfig(Config):
        # Set the number of bcrypt rounds used to generate password hash.
        # Larger numbers increase the work factor needed to generate the hash.
        # The default number is 12 (which equates to 2^12 rounds).
        # N.B. that increasing this will exponentially increase the time required
        # to register or login - e.g. 24 => 2^24 rounds which will take >20 mins.
        bcrypt_rounds: 12

        # Allows users to register as guests without a password/email/etc, and

View File

@ -65,7 +65,7 @@ class FederationServer(FederationBase):
        # We cache responses to state queries, as they take a while and often
        # come in waves.
        self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)

    @defer.inlineCallbacks
    @log_function

View File

@ -35,7 +35,7 @@ from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
import synapse.metrics

from sortedcontainers import SortedDict
from collections import namedtuple

import logging
@ -56,19 +56,19 @@ class FederationRemoteSendQueue(object):
self.is_mine_id = hs.is_mine_id self.is_mine_id = hs.is_mine_id
self.presence_map = {} # Pending presence map user_id -> UserPresenceState self.presence_map = {} # Pending presence map user_id -> UserPresenceState
self.presence_changed = sorteddict() # Stream position -> user_id self.presence_changed = SortedDict() # Stream position -> user_id
self.keyed_edu = {} # (destination, key) -> EDU self.keyed_edu = {} # (destination, key) -> EDU
self.keyed_edu_changed = sorteddict() # stream position -> (destination, key) self.keyed_edu_changed = SortedDict() # stream position -> (destination, key)
self.edus = sorteddict() # stream position -> Edu self.edus = SortedDict() # stream position -> Edu
self.failures = sorteddict() # stream position -> (destination, Failure) self.failures = SortedDict() # stream position -> (destination, Failure)
self.device_messages = sorteddict() # stream position -> destination self.device_messages = SortedDict() # stream position -> destination
self.pos = 1 self.pos = 1
self.pos_time = sorteddict() self.pos_time = SortedDict()
# EVERYTHING IS SAD. In particular, python only makes new scopes when # EVERYTHING IS SAD. In particular, python only makes new scopes when
# we make a new function, so we need to make a new function so the inner # we make a new function, so we need to make a new function so the inner
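The queue's hot path is draining every entry at or below an acknowledged stream position, which is what makes a sorted mapping the right structure here. A minimal sketch of that access pattern with sortedcontainers (values illustrative, not the queue's real code):

    from sortedcontainers import SortedDict

    pos_time = SortedDict()  # stream position -> wall-clock time, as above
    pos_time.update({1: 100, 2: 105, 5: 120})

    # drop everything up to and including position 2, as happens once acked
    i = pos_time.bisect_right(2)
    for key in list(pos_time.keys())[:i]:
        del pos_time[key]

    print(dict(pos_time))  # {5: 120}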


@@ -169,7 +169,7 @@ class TransactionQueue(object):
             while True:
                 last_token = yield self.store.get_federation_out_pos("events")
                 next_token, events = yield self.store.get_all_new_events_stream(
-                    last_token, self._last_poked_id, limit=20,
+                    last_token, self._last_poked_id, limit=100,
                 )

                 logger.debug("Handling %s -> %s", last_token, next_token)

@@ -177,24 +177,33 @@ class TransactionQueue(object):
                 if not events and next_token >= self._last_poked_id:
                     break

-                for event in events:
+                @defer.inlineCallbacks
+                def handle_event(event):
                     # Only send events for this server.
                     send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of()
                     is_mine = self.is_mine_id(event.event_id)
                     if not is_mine and send_on_behalf_of is None:
-                        continue
-
-                    # Get the state from before the event.
-                    # We need to make sure that this is the state from before
-                    # the event and not from after it.
-                    # Otherwise if the last member on a server in a room is
-                    # banned then it won't receive the event because it won't
-                    # be in the room after the ban.
-                    destinations = yield self.state.get_current_hosts_in_room(
-                        event.room_id, latest_event_ids=[
-                            prev_id for prev_id, _ in event.prev_events
-                        ],
-                    )
+                        return
+
+                    try:
+                        # Get the state from before the event.
+                        # We need to make sure that this is the state from before
+                        # the event and not from after it.
+                        # Otherwise if the last member on a server in a room is
+                        # banned then it won't receive the event because it won't
+                        # be in the room after the ban.
+                        destinations = yield self.state.get_current_hosts_in_room(
+                            event.room_id, latest_event_ids=[
+                                prev_id for prev_id, _ in event.prev_events
+                            ],
+                        )
+                    except Exception:
+                        logger.exception(
+                            "Failed to calculate hosts in room for event: %s",
+                            event.event_id,
+                        )
+                        return

                     destinations = set(destinations)
                     if send_on_behalf_of is not None:

@@ -207,12 +216,44 @@ class TransactionQueue(object):
                     self._send_pdu(event, destinations)

-                events_processed_counter.inc_by(len(events))
+                @defer.inlineCallbacks
+                def handle_room_events(events):
+                    for event in events:
+                        yield handle_event(event)
+
+                events_by_room = {}
+                for event in events:
+                    events_by_room.setdefault(event.room_id, []).append(event)
+
+                yield logcontext.make_deferred_yieldable(defer.gatherResults(
+                    [
+                        logcontext.run_in_background(handle_room_events, evs)
+                        for evs in events_by_room.itervalues()
+                    ],
+                    consumeErrors=True
+                ))

                 yield self.store.update_federation_out_pos(
                     "events", next_token
                 )

+                if events:
+                    now = self.clock.time_msec()
+                    ts = yield self.store.get_received_ts(events[-1].event_id)
+
+                    synapse.metrics.event_processing_lag.set(
+                        now - ts, "federation_sender",
+                    )
+                    synapse.metrics.event_processing_last_ts.set(
+                        ts, "federation_sender",
+                    )
+
+                events_processed_counter.inc_by(len(events))
+
+                synapse.metrics.event_processing_positions.set(
+                    next_token, "federation_sender",
+                )
+
         finally:
             self._is_processing = False
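The refactor above changes the iteration shape: rooms are now processed concurrently while events within a room stay strictly ordered. A stripped-down sketch of that pattern in plain Twisted (synapse additionally wraps each task in its logcontext helpers, elided here):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def process(events, handle_event):
        events_by_room = {}
        for event in events:
            events_by_room.setdefault(event.room_id, []).append(event)

        @defer.inlineCallbacks
        def handle_room_events(room_events):
            for event in room_events:   # sequential within a room
                yield handle_event(event)

        # one Deferred per room, run in parallel
        yield defer.gatherResults(
            [handle_room_events(evs) for evs in events_by_room.values()],
            consumeErrors=True,
        )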


@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -20,6 +21,7 @@ from synapse.api.urls import FEDERATION_PREFIX as PREFIX
 from synapse.util.logutils import log_function

 import logging
+import urllib

 logger = logging.getLogger(__name__)
@@ -49,7 +51,7 @@ class TransportLayerClient(object):
         logger.debug("get_room_state dest=%s, room=%s",
                      destination, room_id)

-        path = PREFIX + "/state/%s/" % room_id
+        path = _create_path(PREFIX, "/state/%s/", room_id)
         return self.client.get_json(
             destination, path=path, args={"event_id": event_id},
         )

@@ -71,7 +73,7 @@ class TransportLayerClient(object):
         logger.debug("get_room_state_ids dest=%s, room=%s",
                      destination, room_id)

-        path = PREFIX + "/state_ids/%s/" % room_id
+        path = _create_path(PREFIX, "/state_ids/%s/", room_id)
         return self.client.get_json(
             destination, path=path, args={"event_id": event_id},
         )

@@ -93,7 +95,7 @@ class TransportLayerClient(object):
         logger.debug("get_pdu dest=%s, event_id=%s",
                      destination, event_id)

-        path = PREFIX + "/event/%s/" % (event_id, )
+        path = _create_path(PREFIX, "/event/%s/", event_id)
         return self.client.get_json(destination, path=path, timeout=timeout)

     @log_function

@@ -119,7 +121,7 @@ class TransportLayerClient(object):
             # TODO: raise?
             return

-        path = PREFIX + "/backfill/%s/" % (room_id,)
+        path = _create_path(PREFIX, "/backfill/%s/", room_id)

         args = {
             "v": event_tuples,

@@ -157,9 +159,11 @@ class TransportLayerClient(object):
         # generated by the json_data_callback.
         json_data = transaction.get_dict()

+        path = _create_path(PREFIX, "/send/%s/", transaction.transaction_id)
+
         response = yield self.client.put_json(
             transaction.destination,
-            path=PREFIX + "/send/%s/" % transaction.transaction_id,
+            path=path,
             data=json_data,
             json_data_callback=json_data_callback,
             long_retries=True,

@@ -177,7 +181,7 @@ class TransportLayerClient(object):
     @log_function
     def make_query(self, destination, query_type, args, retry_on_dns_fail,
                    ignore_backoff=False):
-        path = PREFIX + "/query/%s" % query_type
+        path = _create_path(PREFIX, "/query/%s", query_type)

         content = yield self.client.get_json(
             destination=destination,

@@ -222,7 +226,7 @@ class TransportLayerClient(object):
                 "make_membership_event called with membership='%s', must be one of %s" %
                 (membership, ",".join(valid_memberships))
             )
-        path = PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id)
+        path = _create_path(PREFIX, "/make_%s/%s/%s", membership, room_id, user_id)

         ignore_backoff = False
         retry_on_dns_fail = False

@@ -248,7 +252,7 @@ class TransportLayerClient(object):
     @defer.inlineCallbacks
     @log_function
     def send_join(self, destination, room_id, event_id, content):
-        path = PREFIX + "/send_join/%s/%s" % (room_id, event_id)
+        path = _create_path(PREFIX, "/send_join/%s/%s", room_id, event_id)

         response = yield self.client.put_json(
             destination=destination,

@@ -261,7 +265,7 @@ class TransportLayerClient(object):
     @defer.inlineCallbacks
     @log_function
     def send_leave(self, destination, room_id, event_id, content):
-        path = PREFIX + "/send_leave/%s/%s" % (room_id, event_id)
+        path = _create_path(PREFIX, "/send_leave/%s/%s", room_id, event_id)

         response = yield self.client.put_json(
             destination=destination,

@@ -280,7 +284,7 @@ class TransportLayerClient(object):
     @defer.inlineCallbacks
     @log_function
     def send_invite(self, destination, room_id, event_id, content):
-        path = PREFIX + "/invite/%s/%s" % (room_id, event_id)
+        path = _create_path(PREFIX, "/invite/%s/%s", room_id, event_id)

         response = yield self.client.put_json(
             destination=destination,

@@ -322,7 +326,7 @@ class TransportLayerClient(object):
     @defer.inlineCallbacks
     @log_function
     def exchange_third_party_invite(self, destination, room_id, event_dict):
-        path = PREFIX + "/exchange_third_party_invite/%s" % (room_id,)
+        path = _create_path(PREFIX, "/exchange_third_party_invite/%s", room_id,)

         response = yield self.client.put_json(
             destination=destination,

@@ -335,7 +339,7 @@ class TransportLayerClient(object):
     @defer.inlineCallbacks
     @log_function
     def get_event_auth(self, destination, room_id, event_id):
-        path = PREFIX + "/event_auth/%s/%s" % (room_id, event_id)
+        path = _create_path(PREFIX, "/event_auth/%s/%s", room_id, event_id)

         content = yield self.client.get_json(
             destination=destination,

@@ -347,7 +351,7 @@ class TransportLayerClient(object):
     @defer.inlineCallbacks
     @log_function
     def send_query_auth(self, destination, room_id, event_id, content):
-        path = PREFIX + "/query_auth/%s/%s" % (room_id, event_id)
+        path = _create_path(PREFIX, "/query_auth/%s/%s", room_id, event_id)

         content = yield self.client.post_json(
             destination=destination,
@@ -409,7 +413,7 @@ class TransportLayerClient(object):
         Returns:
             A dict containing the device keys.
         """
-        path = PREFIX + "/user/devices/" + user_id
+        path = _create_path(PREFIX, "/user/devices/%s", user_id)

         content = yield self.client.get_json(
             destination=destination,
@@ -459,7 +463,7 @@ class TransportLayerClient(object):
     @log_function
     def get_missing_events(self, destination, room_id, earliest_events,
                            latest_events, limit, min_depth, timeout):
-        path = PREFIX + "/get_missing_events/%s" % (room_id,)
+        path = _create_path(PREFIX, "/get_missing_events/%s", room_id,)

         content = yield self.client.post_json(
             destination=destination,

@@ -479,7 +483,7 @@ class TransportLayerClient(object):
     def get_group_profile(self, destination, group_id, requester_user_id):
         """Get a group profile
         """
-        path = PREFIX + "/groups/%s/profile" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/profile", group_id,)

         return self.client.get_json(
             destination=destination,

@@ -498,7 +502,7 @@ class TransportLayerClient(object):
             requester_user_id (str)
             content (dict): The new profile of the group
         """
-        path = PREFIX + "/groups/%s/profile" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/profile", group_id,)

         return self.client.post_json(
             destination=destination,

@@ -512,7 +516,7 @@ class TransportLayerClient(object):
     def get_group_summary(self, destination, group_id, requester_user_id):
         """Get a group summary
         """
-        path = PREFIX + "/groups/%s/summary" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/summary", group_id,)

         return self.client.get_json(
             destination=destination,

@@ -525,7 +529,7 @@ class TransportLayerClient(object):
     def get_rooms_in_group(self, destination, group_id, requester_user_id):
         """Get all rooms in a group
         """
-        path = PREFIX + "/groups/%s/rooms" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/rooms", group_id,)

         return self.client.get_json(
             destination=destination,

@@ -538,7 +542,7 @@ class TransportLayerClient(object):
                           content):
         """Add a room to a group
         """
-        path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,)
+        path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)

         return self.client.post_json(
             destination=destination,

@@ -552,7 +556,10 @@ class TransportLayerClient(object):
                                  config_key, content):
         """Update room in group
         """
-        path = PREFIX + "/groups/%s/room/%s/config/%s" % (group_id, room_id, config_key,)
+        path = _create_path(
+            PREFIX, "/groups/%s/room/%s/config/%s",
+            group_id, room_id, config_key,
+        )

         return self.client.post_json(
             destination=destination,

@@ -565,7 +572,7 @@ class TransportLayerClient(object):
     def remove_room_from_group(self, destination, group_id, requester_user_id, room_id):
         """Remove a room from a group
         """
-        path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,)
+        path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)

         return self.client.delete_json(
             destination=destination,

@@ -578,7 +585,7 @@ class TransportLayerClient(object):
     def get_users_in_group(self, destination, group_id, requester_user_id):
         """Get users in a group
         """
-        path = PREFIX + "/groups/%s/users" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/users", group_id,)

         return self.client.get_json(
             destination=destination,

@@ -591,7 +598,7 @@ class TransportLayerClient(object):
     def get_invited_users_in_group(self, destination, group_id, requester_user_id):
         """Get users that have been invited to a group
         """
-        path = PREFIX + "/groups/%s/invited_users" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/invited_users", group_id,)

         return self.client.get_json(
             destination=destination,

@@ -604,7 +611,23 @@ class TransportLayerClient(object):
     def accept_group_invite(self, destination, group_id, user_id, content):
         """Accept a group invite
         """
-        path = PREFIX + "/groups/%s/users/%s/accept_invite" % (group_id, user_id)
+        path = _create_path(
+            PREFIX, "/groups/%s/users/%s/accept_invite",
+            group_id, user_id,
+        )
+
+        return self.client.post_json(
+            destination=destination,
+            path=path,
+            data=content,
+            ignore_backoff=True,
+        )
+
+    @log_function
+    def join_group(self, destination, group_id, user_id, content):
+        """Attempts to join a group
+        """
+        path = _create_path(PREFIX, "/groups/%s/users/%s/join", group_id, user_id)

         return self.client.post_json(
             destination=destination,

@@ -617,7 +640,7 @@ class TransportLayerClient(object):
     def invite_to_group(self, destination, group_id, user_id, requester_user_id, content):
         """Invite a user to a group
         """
-        path = PREFIX + "/groups/%s/users/%s/invite" % (group_id, user_id)
+        path = _create_path(PREFIX, "/groups/%s/users/%s/invite", group_id, user_id)

         return self.client.post_json(
             destination=destination,

@@ -633,7 +656,7 @@ class TransportLayerClient(object):
             invited.
         """
-        path = PREFIX + "/groups/local/%s/users/%s/invite" % (group_id, user_id)
+        path = _create_path(PREFIX, "/groups/local/%s/users/%s/invite", group_id, user_id)

         return self.client.post_json(
             destination=destination,
@@ -647,7 +670,7 @@ class TransportLayerClient(object):
                           user_id, content):
         """Remove a user from a group
         """
-        path = PREFIX + "/groups/%s/users/%s/remove" % (group_id, user_id)
+        path = _create_path(PREFIX, "/groups/%s/users/%s/remove", group_id, user_id)

         return self.client.post_json(
             destination=destination,
@@ -664,7 +687,7 @@ class TransportLayerClient(object):
             kicked from the group.
         """
-        path = PREFIX + "/groups/local/%s/users/%s/remove" % (group_id, user_id)
+        path = _create_path(PREFIX, "/groups/local/%s/users/%s/remove", group_id, user_id)

         return self.client.post_json(
             destination=destination,

@@ -679,7 +702,7 @@ class TransportLayerClient(object):
             the attestations
         """
-        path = PREFIX + "/groups/%s/renew_attestation/%s" % (group_id, user_id)
+        path = _create_path(PREFIX, "/groups/%s/renew_attestation/%s", group_id, user_id)

         return self.client.post_json(
             destination=destination,

@@ -694,11 +717,12 @@ class TransportLayerClient(object):
         """Update a room entry in a group summary
         """
         if category_id:
-            path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % (
+            path = _create_path(
+                PREFIX, "/groups/%s/summary/categories/%s/rooms/%s",
                 group_id, category_id, room_id,
             )
         else:
-            path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,)
+            path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)

         return self.client.post_json(
             destination=destination,
@@ -714,11 +738,12 @@ class TransportLayerClient(object):
         """Delete a room entry in a group summary
         """
         if category_id:
-            path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % (
+            path = _create_path(
+                PREFIX, "/groups/%s/summary/categories/%s/rooms/%s",
                 group_id, category_id, room_id,
             )
         else:
-            path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,)
+            path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)

         return self.client.delete_json(
             destination=destination,
@@ -731,7 +756,7 @@ class TransportLayerClient(object):
     def get_group_categories(self, destination, group_id, requester_user_id):
         """Get all categories in a group
         """
-        path = PREFIX + "/groups/%s/categories" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/categories", group_id,)

         return self.client.get_json(
             destination=destination,

@@ -744,7 +769,7 @@ class TransportLayerClient(object):
     def get_group_category(self, destination, group_id, requester_user_id, category_id):
         """Get category info in a group
         """
-        path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
+        path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)

         return self.client.get_json(
             destination=destination,

@@ -758,7 +783,7 @@ class TransportLayerClient(object):
                               content):
         """Update a category in a group
         """
-        path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
+        path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)

         return self.client.post_json(
             destination=destination,

@@ -773,7 +798,7 @@ class TransportLayerClient(object):
                               category_id):
         """Delete a category in a group
         """
-        path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
+        path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)

         return self.client.delete_json(
             destination=destination,

@@ -786,7 +811,7 @@ class TransportLayerClient(object):
     def get_group_roles(self, destination, group_id, requester_user_id):
         """Get all roles in a group
         """
-        path = PREFIX + "/groups/%s/roles" % (group_id,)
+        path = _create_path(PREFIX, "/groups/%s/roles", group_id,)

         return self.client.get_json(
             destination=destination,
@@ -799,7 +824,7 @@ class TransportLayerClient(object):
     def get_group_role(self, destination, group_id, requester_user_id, role_id):
         """Get a role's info
         """
-        path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
+        path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)

         return self.client.get_json(
             destination=destination,
@@ -813,7 +838,7 @@ class TransportLayerClient(object):
                           content):
         """Update a role in a group
         """
-        path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
+        path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)

         return self.client.post_json(
             destination=destination,

@@ -827,7 +852,7 @@ class TransportLayerClient(object):
     def delete_group_role(self, destination, group_id, requester_user_id, role_id):
         """Delete a role in a group
         """
-        path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
+        path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)

         return self.client.delete_json(
             destination=destination,
@@ -842,11 +867,12 @@ class TransportLayerClient(object):
         """Update a user's entry in a group summary
         """
         if role_id:
-            path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
+            path = _create_path(
+                PREFIX, "/groups/%s/summary/roles/%s/users/%s",
                 group_id, role_id, user_id,
             )
         else:
-            path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
+            path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)

         return self.client.post_json(
             destination=destination,
@@ -856,17 +882,33 @@ class TransportLayerClient(object):
             ignore_backoff=True,
         )

+    @log_function
+    def set_group_join_policy(self, destination, group_id, requester_user_id,
+                              content):
+        """Sets the join policy for a group
+        """
+        path = _create_path(PREFIX, "/groups/%s/settings/m.join_policy", group_id,)
+        return self.client.put_json(
+            destination=destination,
+            path=path,
+            args={"requester_user_id": requester_user_id},
+            data=content,
+            ignore_backoff=True,
+        )
+
     @log_function
     def delete_group_summary_user(self, destination, group_id, requester_user_id,
                                   user_id, role_id):
         """Delete a user's entry in a group summary
         """
         if role_id:
-            path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
+            path = _create_path(
+                PREFIX, "/groups/%s/summary/roles/%s/users/%s",
                 group_id, role_id, user_id,
             )
         else:
-            path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
+            path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)

         return self.client.delete_json(
             destination=destination,
@@ -889,3 +931,22 @@ class TransportLayerClient(object):
             data=content,
             ignore_backoff=True,
         )
+
+
+def _create_path(prefix, path, *args):
+    """Creates a path from the prefix, path template and args. Ensures that
+    all args are url encoded.
+
+    Example:
+
+        _create_path(PREFIX, "/event/%s/", event_id)
+
+    Args:
+        prefix (str)
+        path (str): String template for the path
+        args ([str]): Args to insert into path. Each arg will be url encoded
+
+    Returns:
+        str
+    """
+    return prefix + path % tuple(urllib.quote(arg, "") for arg in args)
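A quick usage sketch of the new helper (the IDs are made up): each argument is percent-encoded with an empty safe-set, so characters like '/', ':' or '?' in an ID can no longer escape their path segment:

    PREFIX = "/_matrix/federation/v1"  # as imported above

    print(_create_path(PREFIX, "/send_join/%s/%s",
                       "!room:example.com", "$event/with/slashes"))
    # /_matrix/federation/v1/send_join/%21room%3Aexample.com/%24event%2Fwith%2Fslashes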


@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2014-2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -802,6 +803,23 @@ class FederationGroupsAcceptInviteServlet(BaseFederationServlet):
         defer.returnValue((200, new_content))

+class FederationGroupsJoinServlet(BaseFederationServlet):
+    """Attempt to join a group
+    """
+    PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/join$"
+
+    @defer.inlineCallbacks
+    def on_POST(self, origin, content, query, group_id, user_id):
+        if get_domain_from_id(user_id) != origin:
+            raise SynapseError(403, "user_id doesn't match origin")
+
+        new_content = yield self.handler.join_group(
+            group_id, user_id, content,
+        )
+
+        defer.returnValue((200, new_content))
+
 class FederationGroupsRemoveUserServlet(BaseFederationServlet):
     """Leave or kick a user from the group
     """

@@ -1124,6 +1142,24 @@
         defer.returnValue((200, resp))

+class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
+    """Sets whether a group is joinable without an invite or knock
+    """
+    PATH = "/groups/(?P<group_id>[^/]*)/settings/m.join_policy$"
+
+    @defer.inlineCallbacks
+    def on_PUT(self, origin, content, query, group_id):
+        requester_user_id = parse_string_from_args(query, "requester_user_id")
+        if get_domain_from_id(requester_user_id) != origin:
+            raise SynapseError(403, "requester_user_id doesn't match origin")
+
+        new_content = yield self.handler.set_group_join_policy(
+            group_id, requester_user_id, content
+        )
+
+        defer.returnValue((200, new_content))
+
 FEDERATION_SERVLET_CLASSES = (
     FederationSendServlet,
     FederationPullServlet,

@@ -1163,6 +1199,7 @@ GROUP_SERVER_SERVLET_CLASSES = (
     FederationGroupsInvitedUsersServlet,
     FederationGroupsInviteServlet,
     FederationGroupsAcceptInviteServlet,
+    FederationGroupsJoinServlet,
     FederationGroupsRemoveUserServlet,
     FederationGroupsSummaryRoomsServlet,
     FederationGroupsCategoriesServlet,

@@ -1172,6 +1209,7 @@ GROUP_SERVER_SERVLET_CLASSES = (
     FederationGroupsSummaryUsersServlet,
     FederationGroupsAddRoomsServlet,
     FederationGroupsAddRoomsConfigServlet,
+    FederationGroupsSettingJoinPolicyServlet,
 )
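For reference, the servlet's PATH regex is what turns the request path into handler keyword arguments. A hedged, self-contained illustration of that matching (BaseFederationServlet does the real dispatch; the IDs are invented):

    import re

    PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/join$"
    m = re.search(PATH, "/groups/+synapse:example.com/users/@alice:example.com/join")
    print(m.groupdict())
    # {'group_id': '+synapse:example.com', 'user_id': '@alice:example.com'}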


@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -205,6 +206,28 @@ class GroupsServerHandler(object):
         defer.returnValue({})

+    @defer.inlineCallbacks
+    def set_group_join_policy(self, group_id, requester_user_id, content):
+        """Sets the group join policy.
+
+        Currently supported policies are:
+         - "invite": an invite must be received and accepted in order to join.
+         - "open": anyone can join.
+        """
+        yield self.check_group_is_ours(
+            group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
+        )
+
+        join_policy = _parse_join_policy_from_contents(content)
+        if join_policy is None:
+            raise SynapseError(
+                400, "No value specified for 'm.join_policy'"
+            )
+
+        yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
+
+        defer.returnValue({})
+
     @defer.inlineCallbacks
     def get_group_categories(self, group_id, requester_user_id):
         """Get all categories in a group (as seen by user)

@@ -381,9 +404,16 @@ class GroupsServerHandler(object):
         yield self.check_group_is_ours(group_id, requester_user_id)

-        group_description = yield self.store.get_group(group_id)
-        if group_description:
+        group = yield self.store.get_group(group_id)
+
+        if group:
+            cols = [
+                "name", "short_description", "long_description",
+                "avatar_url", "is_public",
+            ]
+            group_description = {key: group[key] for key in cols}
+            group_description["is_openly_joinable"] = group["join_policy"] == "open"
+
             defer.returnValue(group_description)
         else:
             raise SynapseError(404, "Unknown group")

@@ -654,6 +684,40 @@ class GroupsServerHandler(object):
         else:
             raise SynapseError(502, "Unknown state returned by HS")

+    @defer.inlineCallbacks
+    def _add_user(self, group_id, user_id, content):
+        """Add a user to a group based on a content dict.
+
+        See accept_invite, join_group.
+        """
+        if not self.hs.is_mine_id(user_id):
+            local_attestation = self.attestations.create_attestation(
+                group_id, user_id,
+            )
+
+            remote_attestation = content["attestation"]
+
+            yield self.attestations.verify_attestation(
+                remote_attestation,
+                user_id=user_id,
+                group_id=group_id,
+            )
+        else:
+            local_attestation = None
+            remote_attestation = None
+
+        is_public = _parse_visibility_from_contents(content)
+
+        yield self.store.add_user_to_group(
+            group_id, user_id,
+            is_admin=False,
+            is_public=is_public,
+            local_attestation=local_attestation,
+            remote_attestation=remote_attestation,
+        )
+
+        defer.returnValue(local_attestation)
+
     @defer.inlineCallbacks
     def accept_invite(self, group_id, requester_user_id, content):
         """User tries to accept an invite to the group.

@@ -670,30 +734,27 @@ class GroupsServerHandler(object):
         if not is_invited:
             raise SynapseError(403, "User not invited to group")

-        if not self.hs.is_mine_id(requester_user_id):
-            local_attestation = self.attestations.create_attestation(
-                group_id, requester_user_id,
-            )
-            remote_attestation = content["attestation"]
-
-            yield self.attestations.verify_attestation(
-                remote_attestation,
-                user_id=requester_user_id,
-                group_id=group_id,
-            )
-        else:
-            local_attestation = None
-            remote_attestation = None
-
-        is_public = _parse_visibility_from_contents(content)
-
-        yield self.store.add_user_to_group(
-            group_id, requester_user_id,
-            is_admin=False,
-            is_public=is_public,
-            local_attestation=local_attestation,
-            remote_attestation=remote_attestation,
-        )
+        local_attestation = yield self._add_user(group_id, requester_user_id, content)
+
+        defer.returnValue({
+            "state": "join",
+            "attestation": local_attestation,
+        })
+
+    @defer.inlineCallbacks
+    def join_group(self, group_id, requester_user_id, content):
+        """User tries to join the group.
+
+        This will error if the group requires an invite/knock to join
+        """
+
+        group_info = yield self.check_group_is_ours(
+            group_id, requester_user_id, and_exists=True
+        )
+        if group_info['join_policy'] != "open":
+            raise SynapseError(403, "Group is not publicly joinable")
+
+        local_attestation = yield self._add_user(group_id, requester_user_id, content)

         defer.returnValue({
             "state": "join",

@@ -835,6 +896,31 @@ class GroupsServerHandler(object):
         })

+def _parse_join_policy_from_contents(content):
+    """Given a content for a request, return the specified join policy or None
+    """
+
+    join_policy_dict = content.get("m.join_policy")
+    if join_policy_dict:
+        return _parse_join_policy_dict(join_policy_dict)
+    else:
+        return None
+
+
+def _parse_join_policy_dict(join_policy_dict):
+    """Given a dict for the "m.join_policy" config return the join policy specified
+    """
+    join_policy_type = join_policy_dict.get("type")
+    if not join_policy_type:
+        return "invite"
+
+    if join_policy_type not in ("invite", "open"):
+        raise SynapseError(
+            400, "Synapse only supports 'invite'/'open' join rule"
+        )
+    return join_policy_type
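Hedged usage sketch for the two parsers above (the inputs are hypothetical):

    print(_parse_join_policy_from_contents({"m.join_policy": {"type": "open"}}))  # open
    print(_parse_join_policy_from_contents({}))                              # None, caller 400s
    print(_parse_join_policy_from_contents({"m.join_policy": {"x": 1}}))     # invite (default type)
    # a type outside ("invite", "open") raises SynapseError(400, ...)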
 def _parse_visibility_from_contents(content):
     """Given a content for a request parse out whether the entity should be
     public or not


@@ -18,7 +18,9 @@ from twisted.internet import defer
 import synapse
 from synapse.api.constants import EventTypes
 from synapse.util.metrics import Measure
-from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
+from synapse.util.logcontext import (
+    make_deferred_yieldable, preserve_fn, run_in_background,
+)

 import logging

@@ -84,11 +86,16 @@ class ApplicationServicesHandler(object):
                 if not events:
                     break

+                events_by_room = {}
                 for event in events:
+                    events_by_room.setdefault(event.room_id, []).append(event)
+
+                @defer.inlineCallbacks
+                def handle_event(event):
                     # Gather interested services
                     services = yield self._get_services_for_event(event)
                     if len(services) == 0:
-                        continue  # no services need notifying
+                        return  # no services need notifying

                     # Do we know this user exists? If not, poke the user
                     # query API for all services which match that user regex.

@@ -108,9 +115,33 @@ class ApplicationServicesHandler(object):
                             service, event
                         )

-                events_processed_counter.inc_by(len(events))
+                @defer.inlineCallbacks
+                def handle_room_events(events):
+                    for event in events:
+                        yield handle_event(event)
+
+                yield make_deferred_yieldable(defer.gatherResults([
+                    run_in_background(handle_room_events, evs)
+                    for evs in events_by_room.itervalues()
+                ], consumeErrors=True))

                 yield self.store.set_appservice_last_pos(upper_bound)

+                now = self.clock.time_msec()
+                ts = yield self.store.get_received_ts(events[-1].event_id)
+
+                synapse.metrics.event_processing_positions.set(
+                    upper_bound, "appservice_sender",
+                )
+
+                events_processed_counter.inc_by(len(events))
+
+                synapse.metrics.event_processing_lag.set(
+                    now - ts, "appservice_sender",
+                )
+                synapse.metrics.event_processing_last_ts.set(
+                    ts, "appservice_sender",
+                )
+
             finally:
                 self.is_processing = False


@@ -155,7 +155,7 @@ class DeviceHandler(BaseHandler):
         try:
             yield self.store.delete_device(user_id, device_id)
-        except errors.StoreError, e:
+        except errors.StoreError as e:
             if e.code == 404:
                 # no match
                 pass

@@ -204,7 +204,7 @@ class DeviceHandler(BaseHandler):
         try:
             yield self.store.delete_devices(user_id, device_ids)
-        except errors.StoreError, e:
+        except errors.StoreError as e:
             if e.code == 404:
                 # no match
                 pass

@@ -243,7 +243,7 @@ class DeviceHandler(BaseHandler):
                 new_display_name=content.get("display_name")
             )
             yield self.notify_device_update(user_id, [device_id])
-        except errors.StoreError, e:
+        except errors.StoreError as e:
             if e.code == 404:
                 raise errors.NotFoundError()
             else:


@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -13,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import ujson as json
+import simplejson as json
 import logging

 from canonicaljson import encode_canonical_json

@@ -134,23 +135,8 @@ class E2eKeysHandler(object):
                     if user_id in destination_query:
                         results[user_id] = keys

-            except CodeMessageException as e:
-                failures[destination] = {
-                    "status": e.code, "message": e.message
-                }
-            except NotRetryingDestination as e:
-                failures[destination] = {
-                    "status": 503, "message": "Not ready for retry",
-                }
-            except FederationDeniedError as e:
-                failures[destination] = {
-                    "status": 403, "message": "Federation Denied",
-                }
             except Exception as e:
-                # include ConnectionRefused and other errors
-                failures[destination] = {
-                    "status": 503, "message": e.message
-                }
+                failures[destination] = _exception_to_failure(e)

         yield make_deferred_yieldable(defer.gatherResults([
             preserve_fn(do_remote_query)(destination)

@@ -252,19 +238,8 @@ class E2eKeysHandler(object):
                 for user_id, keys in remote_result["one_time_keys"].items():
                     if user_id in device_keys:
                         json_result[user_id] = keys
-            except CodeMessageException as e:
-                failures[destination] = {
-                    "status": e.code, "message": e.message
-                }
-            except NotRetryingDestination as e:
-                failures[destination] = {
-                    "status": 503, "message": "Not ready for retry",
-                }
             except Exception as e:
-                # include ConnectionRefused and other errors
-                failures[destination] = {
-                    "status": 503, "message": e.message
-                }
+                failures[destination] = _exception_to_failure(e)

         yield make_deferred_yieldable(defer.gatherResults([
             preserve_fn(claim_client_keys)(destination)

@@ -362,6 +337,31 @@ class E2eKeysHandler(object):
     )

+def _exception_to_failure(e):
+    if isinstance(e, CodeMessageException):
+        return {
+            "status": e.code, "message": e.message,
+        }
+
+    if isinstance(e, NotRetryingDestination):
+        return {
+            "status": 503, "message": "Not ready for retry",
+        }
+
+    if isinstance(e, FederationDeniedError):
+        return {
+            "status": 403, "message": "Federation Denied",
+        }
+
+    # include ConnectionRefused and other errors
+    #
+    # Note that some Exceptions (notably twisted's ResponseFailed etc) don't
+    # give a string for e.message, which simplejson then fails to serialize.
+    return {
+        "status": 503, "message": str(e.message),
+    }
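A hedged illustration of the serialization pitfall the final branch guards against; the exception class here is made up:

    class OddError(Exception):
        def __init__(self):
            self.message = ValueError("not a string")  # non-string .message (py2)

    failure = _exception_to_failure(OddError())
    print(failure)  # {'status': 503, 'message': 'not a string'} -- now JSON-safe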

 def _one_time_keys_match(old_key_json, new_key):
     old_key = json.loads(old_key_json)


@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 # Copyright 2017 Vector Creations Ltd
+# Copyright 2018 New Vector Ltd
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

@@ -90,6 +91,8 @@ class GroupsLocalHandler(object):
     get_group_role = _create_rerouter("get_group_role")
     get_group_roles = _create_rerouter("get_group_roles")

+    set_group_join_policy = _create_rerouter("set_group_join_policy")
+
     @defer.inlineCallbacks
     def get_group_summary(self, group_id, requester_user_id):
         """Get the group summary for a group.
@@ -226,7 +229,45 @@ class GroupsLocalHandler(object):
     def join_group(self, group_id, user_id, content):
         """Request to join a group
         """
-        raise NotImplementedError()  # TODO
+        if self.is_mine_id(group_id):
+            yield self.groups_server_handler.join_group(
+                group_id, user_id, content
+            )
+            local_attestation = None
+            remote_attestation = None
+        else:
+            local_attestation = self.attestations.create_attestation(group_id, user_id)
+            content["attestation"] = local_attestation
+
+            res = yield self.transport_client.join_group(
+                get_domain_from_id(group_id), group_id, user_id, content,
+            )
+
+            remote_attestation = res["attestation"]
+
+            yield self.attestations.verify_attestation(
+                remote_attestation,
+                group_id=group_id,
+                user_id=user_id,
+                server_name=get_domain_from_id(group_id),
+            )
+
+        # TODO: Check that the group is public and we're being added publicly
+        is_publicised = content.get("publicise", False)
+
+        token = yield self.store.register_user_group_membership(
+            group_id, user_id,
+            membership="join",
+            is_admin=False,
+            local_attestation=local_attestation,
+            remote_attestation=remote_attestation,
+            is_publicised=is_publicised,
+        )
+        self.notifier.on_new_event(
+            "groups_key", token, users=[user_id],
+        )
+
+        defer.returnValue({})

     @defer.inlineCallbacks
     def accept_invite(self, group_id, user_id, content):


@@ -15,6 +15,11 @@
 # limitations under the License.

 """Utilities for interacting with Identity Servers"""
+
+import logging
+
+import simplejson as json
+
 from twisted.internet import defer

 from synapse.api.errors import (

@@ -24,9 +29,6 @@ from ._base import BaseHandler
 from synapse.util.async import run_on_reactor
 from synapse.api.errors import SynapseError, Codes

-import json
-import logging
-
 logger = logging.getLogger(__name__)


@@ -27,7 +27,7 @@ from synapse.types import (
 from synapse.util.async import run_on_reactor, ReadWriteLock, Limiter
 from synapse.util.logcontext import preserve_fn, run_in_background
 from synapse.util.metrics import measure_func
-from synapse.util.frozenutils import unfreeze
+from synapse.util.frozenutils import frozendict_json_encoder
 from synapse.util.stringutils import random_string
 from synapse.visibility import filter_events_for_client
 from synapse.replication.http.send_event import send_event_to_master

@@ -38,7 +38,7 @@ from canonicaljson import encode_canonical_json

 import logging
 import random
-import ujson
+import simplejson

 logger = logging.getLogger(__name__)

@@ -454,40 +454,39 @@ class EventCreationHandler(object):
         """
         builder = self.event_builder_factory.new(event_dict)

-        with (yield self.limiter.queue(builder.room_id)):
-            self.validator.validate_new(builder)
-
-            if builder.type == EventTypes.Member:
-                membership = builder.content.get("membership", None)
-                target = UserID.from_string(builder.state_key)
-
-                if membership in {Membership.JOIN, Membership.INVITE}:
-                    # If event doesn't include a display name, add one.
-                    profile = self.profile_handler
-                    content = builder.content
-
-                    try:
-                        if "displayname" not in content:
-                            content["displayname"] = yield profile.get_displayname(target)
-                        if "avatar_url" not in content:
-                            content["avatar_url"] = yield profile.get_avatar_url(target)
-                    except Exception as e:
-                        logger.info(
-                            "Failed to get profile information for %r: %s",
-                            target, e
-                        )
-
-            if token_id is not None:
-                builder.internal_metadata.token_id = token_id
-
-            if txn_id is not None:
-                builder.internal_metadata.txn_id = txn_id
-
-            event, context = yield self.create_new_client_event(
-                builder=builder,
-                requester=requester,
-                prev_event_ids=prev_event_ids,
-            )
+        self.validator.validate_new(builder)
+
+        if builder.type == EventTypes.Member:
+            membership = builder.content.get("membership", None)
+            target = UserID.from_string(builder.state_key)
+
+            if membership in {Membership.JOIN, Membership.INVITE}:
+                # If event doesn't include a display name, add one.
+                profile = self.profile_handler
+                content = builder.content
+
+                try:
+                    if "displayname" not in content:
+                        content["displayname"] = yield profile.get_displayname(target)
+                    if "avatar_url" not in content:
+                        content["avatar_url"] = yield profile.get_avatar_url(target)
+                except Exception as e:
+                    logger.info(
+                        "Failed to get profile information for %r: %s",
+                        target, e
+                    )
+
+        if token_id is not None:
+            builder.internal_metadata.token_id = token_id
+
+        if txn_id is not None:
+            builder.internal_metadata.txn_id = txn_id
+
+        event, context = yield self.create_new_client_event(
+            builder=builder,
+            requester=requester,
+            prev_event_ids=prev_event_ids,
+        )

         defer.returnValue((event, context))

@@ -557,27 +556,34 @@ class EventCreationHandler(object):

         See self.create_event and self.send_nonmember_event.
         """
-        event, context = yield self.create_event(
-            requester,
-            event_dict,
-            token_id=requester.access_token_id,
-            txn_id=txn_id
-        )
-
-        spam_error = self.spam_checker.check_event_for_spam(event)
-        if spam_error:
-            if not isinstance(spam_error, basestring):
-                spam_error = "Spam is not permitted here"
-            raise SynapseError(
-                403, spam_error, Codes.FORBIDDEN
-            )
-
-        yield self.send_nonmember_event(
-            requester,
-            event,
-            context,
-            ratelimit=ratelimit,
-        )
+        # We limit the number of concurrent event sends in a room so that we
+        # don't fork the DAG too much. If we don't limit then we can end up in
+        # a situation where event persistence can't keep up, causing
+        # extremities to pile up, which in turn leads to state resolution
+        # taking longer.
+        with (yield self.limiter.queue(event_dict["room_id"])):
+            event, context = yield self.create_event(
+                requester,
+                event_dict,
+                token_id=requester.access_token_id,
+                txn_id=txn_id
+            )
+
+            spam_error = self.spam_checker.check_event_for_spam(event)
+            if spam_error:
+                if not isinstance(spam_error, basestring):
+                    spam_error = "Spam is not permitted here"
+                raise SynapseError(
+                    403, spam_error, Codes.FORBIDDEN
+                )
+
+            yield self.send_nonmember_event(
+                requester,
+                event,
+                context,
+                ratelimit=ratelimit,
+            )

         defer.returnValue(event)

     @measure_func("create_new_client_event")
@@ -678,8 +684,8 @@ class EventCreationHandler(object):
         # Ensure that we can round trip before trying to persist in db
         try:
-            dump = ujson.dumps(unfreeze(event.content))
-            ujson.loads(dump)
+            dump = frozendict_json_encoder.encode(event.content)
+            simplejson.loads(dump)
         except Exception:
             logger.exception("Failed to encode content: %r", event.content)
             raise


@@ -47,6 +47,11 @@ class ProfileHandler(BaseHandler):
         self.clock.looping_call(self._update_remote_profile_cache, self.PROFILE_UPDATE_MS)

+        if hs.config.worker_app is None:
+            self.clock.looping_call(
+                self._update_remote_profile_cache, self.PROFILE_UPDATE_MS,
+            )
+
         reactor.callWhenRunning(self._assign_profile_replication_batches)
         reactor.callWhenRunning(self._replicate_profiles)
         self.clock.looping_call(self._replicate_profiles, self.PROFILE_REPLICATE_INTERVAL)


@@ -23,8 +23,8 @@ from synapse.api.errors import (
 )
 from synapse.http.client import CaptchaServerHttpClient
 from synapse import types
-from synapse.types import UserID
-from synapse.util.async import run_on_reactor
+from synapse.types import UserID, create_requester, RoomID, RoomAlias
+from synapse.util.async import run_on_reactor, Linearizer
 from synapse.util.threepids import check_3pid_allowed
 from ._base import BaseHandler

@@ -46,6 +46,10 @@ class RegistrationHandler(BaseHandler):
         self.macaroon_gen = hs.get_macaroon_generator()

+        self._generate_user_id_linearizer = Linearizer(
+            name="_generate_user_id_linearizer",
+        )
+
     @defer.inlineCallbacks
     def check_username(self, localpart, guest_access_token=None,
                        assigned_user_id=None):

@@ -201,10 +205,17 @@ class RegistrationHandler(BaseHandler):
                 token = None
                 attempts += 1

+        # auto-join the user to any rooms we're supposed to dump them into
+        fake_requester = create_requester(user_id)
+        for r in self.hs.config.auto_join_rooms:
+            try:
+                yield self._join_user_to_room(fake_requester, r)
+            except Exception as e:
+                logger.error("Failed to join new user to %r: %r", r, e)
+
         # We used to generate default identicons here, but nowadays
         # we want clients to generate their own as part of their branding
         # rather than there being consistent matrix-wide ones, so we don't.
         defer.returnValue((user_id, token))

     @defer.inlineCallbacks

@@ -347,9 +358,11 @@ class RegistrationHandler(BaseHandler):
     @defer.inlineCallbacks
     def _generate_user_id(self, reseed=False):
         if reseed or self._next_generated_user_id is None:
-            self._next_generated_user_id = (
-                yield self.store.find_next_generated_user_id_localpart()
-            )
+            with (yield self._generate_user_id_linearizer.queue(())):
+                if reseed or self._next_generated_user_id is None:
+                    self._next_generated_user_id = (
+                        yield self.store.find_next_generated_user_id_localpart()
+                    )

         id = self._next_generated_user_id
         self._next_generated_user_id += 1
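The hunk above is double-checked locking: test, acquire the linearizer, then re-test, so concurrent registrations don't all hit the database. The same shape with a plain DeferredLock (a sketch with invented names, not synapse's Linearizer):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def next_user_id(self):
        if self._next_id is None:
            yield self._lock.acquire()      # DeferredLock standing in for Linearizer
            try:
                if self._next_id is None:   # re-check now that we hold the lock
                    self._next_id = yield self.store.find_next_generated_user_id_localpart()
            finally:
                self._lock.release()
        self._next_id += 1
        defer.returnValue(self._next_id - 1)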
@@ -479,3 +492,28 @@ class RegistrationHandler(BaseHandler):
         )
         defer.returnValue((user_id, access_token))
+
+    @defer.inlineCallbacks
+    def _join_user_to_room(self, requester, room_identifier):
+        room_id = None
+        remote_room_hosts = []
+        room_member_handler = self.hs.get_room_member_handler()
+        if RoomID.is_valid(room_identifier):
+            room_id = room_identifier
+        elif RoomAlias.is_valid(room_identifier):
+            room_alias = RoomAlias.from_string(room_identifier)
+            room_id, remote_room_hosts = (
+                yield room_member_handler.lookup_room_alias(room_alias)
+            )
+            room_id = room_id.to_string()
+        else:
+            raise SynapseError(400, "%s was not legal room ID or room alias" % (
+                room_identifier,
+            ))
+
+        yield room_member_handler.update_membership(
+            requester=requester,
+            target=requester.user,
+            room_id=room_id,
+            remote_room_hosts=remote_room_hosts,
+            action="join",
+        )


@@ -44,8 +44,9 @@ EMTPY_THIRD_PARTY_ID = ThirdPartyInstanceID(None, None)
 class RoomListHandler(BaseHandler):
     def __init__(self, hs):
         super(RoomListHandler, self).__init__(hs)
-        self.response_cache = ResponseCache(hs)
-        self.remote_response_cache = ResponseCache(hs, timeout_ms=30 * 1000)
+        self.response_cache = ResponseCache(hs, "room_list")
+        self.remote_response_cache = ResponseCache(hs, "remote_room_list",
+                                                   timeout_ms=30 * 1000)

     def get_local_public_room_list(self, limit=None, since_token=None,
                                    search_filter=None,


@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import abc
import logging import logging
from signedjson.key import decode_verify_key_bytes from signedjson.key import decode_verify_key_bytes
@ -31,6 +31,7 @@ from synapse.types import UserID, RoomID
from synapse.util.async import Linearizer from synapse.util.async import Linearizer
from synapse.util.distributor import user_left_room, user_joined_room from synapse.util.distributor import user_left_room, user_joined_room
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
id_server_scheme = "https://" id_server_scheme = "https://"
@ -42,6 +43,8 @@ class RoomMemberHandler(object):
# API that takes ID strings and returns pagination chunks. These concerns # API that takes ID strings and returns pagination chunks. These concerns
# ought to be separated out a lot better. # ought to be separated out a lot better.
__metaclass__ = abc.ABCMeta
def __init__(self, hs): def __init__(self, hs):
self.hs = hs self.hs = hs
self.store = hs.get_datastore() self.store = hs.get_datastore()
@ -61,9 +64,87 @@ class RoomMemberHandler(object):
self.clock = hs.get_clock() self.clock = hs.get_clock()
self.spam_checker = hs.get_spam_checker() self.spam_checker = hs.get_spam_checker()
self.distributor = hs.get_distributor() @abc.abstractmethod
self.distributor.declare("user_joined_room") def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
self.distributor.declare("user_left_room") """Try and join a room that this server is not in
Args:
requester (Requester)
remote_room_hosts (list[str]): List of servers that can be used
to join via.
room_id (str): Room that we are trying to join
user (UserID): User who is trying to join
content (dict): A dict that should be used as the content of the
join event.
Returns:
Deferred
"""
raise NotImplementedError()
@abc.abstractmethod
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
"""Attempt to reject an invite for a room this server is not in. If we
fail to do so we locally mark the invite as rejected.
Args:
requester (Requester)
remote_room_hosts (list[str]): List of servers to use to try and
reject invite
room_id (str)
target (UserID): The user rejecting the invite
Returns:
Deferred[dict]: A dictionary to be returned to the client, may
include event_id etc, or nothing if we locally rejected
"""
raise NotImplementedError()
@abc.abstractmethod
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Get a guest access token for a 3PID, creating a guest account if
one doesn't already exist.
Args:
requester (Requester)
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
Returns:
Deferred[(str, str)]: A 2-tuple of `(user_id, access_token)` of the
3PID guest account.
"""
raise NotImplementedError()
@abc.abstractmethod
def _user_joined_room(self, target, room_id):
"""Notifies distributor on master process that the user has joined the
room.
Args:
target (UserID)
room_id (str)
Returns:
Deferred|None
"""
raise NotImplementedError()
@abc.abstractmethod
def _user_left_room(self, target, room_id):
"""Notifies distributor on master process that the user has left the
room.
Args:
target (UserID)
room_id (str)
Returns:
Deferred|None
"""
raise NotImplementedError()
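
Together these abstract methods turn RoomMemberHandler into a template class: the shared membership logic stays in the base class and calls the hooks, while the master and worker subclasses introduced later in this commit supply process-specific implementations. A toy sketch of the pattern, with illustrative names rather than the real classes:

    import abc

    class MembershipBase(object):
        # Py2-style ABC declaration, matching the __metaclass__ line above.
        __metaclass__ = abc.ABCMeta

        def update_membership(self, user, room_id):
            # shared logic runs here, then defers to the subclass hook
            self._user_joined_room(user, room_id)

        @abc.abstractmethod
        def _user_joined_room(self, user, room_id):
            raise NotImplementedError()

    class MasterMembership(MembershipBase):
        def _user_joined_room(self, user, room_id):
            print("notify distributor: %s joined %s" % (user, room_id))

    MasterMembership().update_membership("@a:example.com", "!r:example.com")
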
@defer.inlineCallbacks
def _local_membership_update(
@@ -127,82 +208,15 @@
prev_member_event = yield self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
yield self._user_joined_room(target, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
if prev_member_event.membership == Membership.JOIN:
yield self._user_left_room(target, room_id)
defer.returnValue(event)
@defer.inlineCallbacks
def _remote_join(self, remote_room_hosts, room_id, user, content):
"""Try and join a room that this server is not in
Args:
remote_room_hosts (list[str]): List of servers that can be used
to join via.
room_id (str): Room that we are trying to join
user (UserID): User who is trying to join
content (dict): A dict that should be used as the content of the
join event.
Returns:
Deferred
"""
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
# We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
yield self.federation_handler.do_invite_join(
remote_room_hosts,
room_id,
user.to_string(),
content,
)
yield user_joined_room(self.distributor, user, room_id)
@defer.inlineCallbacks
def _remote_reject_invite(self, remote_room_hosts, room_id, target):
"""Attempt to reject an invite for a room this server is not in. If we
fail to do so we locally mark the invite as rejected.
Args:
remote_room_hosts (list[str]): List of servers to use to try and
reject invite
room_id (str)
target (UserID): The user rejecting the invite
Returns:
Deferred[dict]: A dictionary to be returned to the client, may
include event_id etc, or nothing if we locally rejected
"""
fed_handler = self.federation_handler
try:
ret = yield fed_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
target.to_string(),
)
defer.returnValue(ret)
except Exception as e:
# if we were unable to reject the invite, just mark
# it as rejected on our end and plough ahead.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
target.to_string(), room_id
)
defer.returnValue({})
@defer.inlineCallbacks
def update_membership(
self,
@@ -356,7 +370,7 @@
content["kind"] = "guest"
ret = yield self._remote_join(
requester, remote_room_hosts, room_id, target, content
)
defer.returnValue(ret)
@@ -378,7 +392,7 @@
# send the rejection to the inviter's HS.
remote_room_hosts = remote_room_hosts + [inviter.domain]
res = yield self._remote_reject_invite(
requester, remote_room_hosts, room_id, target,
)
defer.returnValue(res)
@@ -476,12 +490,12 @@
prev_member_event = yield self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
yield self._user_joined_room(target_user, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
if prev_member_event.membership == Membership.JOIN:
yield self._user_left_room(target_user, room_id)
@defer.inlineCallbacks
def _can_guest_join(self, current_state_ids):
@@ -672,6 +686,7 @@
token, public_keys, fallback_public_key, display_name = (
yield self._ask_id_server_for_third_party_invite(
requester=requester,
id_server=id_server,
medium=medium,
address=address,
@@ -708,6 +723,7 @@
@defer.inlineCallbacks
def _ask_id_server_for_third_party_invite(
self,
requester,
id_server,
medium,
address,
@@ -724,6 +740,7 @@
Asks an identity server for a third party invite.
Args:
requester (Requester)
id_server (str): hostname + optional port for the identity server.
medium (str): The literal string "email".
address (str): The third party address being invited.
@@ -766,8 +783,8 @@
}
if self.config.invite_3pid_guest:
guest_access_token, guest_user_id = yield self.get_or_register_3pid_guest(
requester=requester,
medium=medium,
address=address,
inviter_user_id=inviter_user_id,
@@ -800,27 +817,6 @@
display_name = data["display_name"]
defer.returnValue((token, public_keys, fallback_public_key, display_name))
@defer.inlineCallbacks
def forget(self, user, room_id):
user_id = user.to_string()
member = yield self.state_handler.get_current_state(
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
)
membership = member.membership if member else None
if membership is not None and membership not in [
Membership.LEAVE, Membership.BAN
]:
raise SynapseError(400, "User %s in room %s" % (
user_id, room_id
))
if membership:
yield self.store.forget(user_id, room_id)
@defer.inlineCallbacks
def _is_host_in_room(self, current_state_ids):
# Have we just created the room, and is this about to be the very
@@ -842,3 +838,102 @@
defer.returnValue(True)
defer.returnValue(False)
class RoomMemberMasterHandler(RoomMemberHandler):
def __init__(self, hs):
super(RoomMemberMasterHandler, self).__init__(hs)
self.distributor = hs.get_distributor()
self.distributor.declare("user_joined_room")
self.distributor.declare("user_left_room")
@defer.inlineCallbacks
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
"""Implements RoomMemberHandler._remote_join
"""
# filter ourselves out of remote_room_hosts: do_invite_join ignores it
# and if it is the only entry we'd like to return a 404 rather than a
# 500.
remote_room_hosts = [
host for host in remote_room_hosts if host != self.hs.hostname
]
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
# We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
yield self.federation_handler.do_invite_join(
remote_room_hosts,
room_id,
user.to_string(),
content,
)
yield self._user_joined_room(user, room_id)
@defer.inlineCallbacks
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
"""Implements RoomMemberHandler._remote_reject_invite
"""
fed_handler = self.federation_handler
try:
ret = yield fed_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
target.to_string(),
)
defer.returnValue(ret)
except Exception as e:
# if we were unable to reject the invite, just mark
# it as rejected on our end and plough ahead.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
target.to_string(), room_id
)
defer.returnValue({})
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Implements RoomMemberHandler.get_or_register_3pid_guest
"""
rg = self.registration_handler
return rg.get_or_register_3pid_guest(medium, address, inviter_user_id)
def _user_joined_room(self, target, room_id):
"""Implements RoomMemberHandler._user_joined_room
"""
return user_joined_room(self.distributor, target, room_id)
def _user_left_room(self, target, room_id):
"""Implements RoomMemberHandler._user_left_room
"""
return user_left_room(self.distributor, target, room_id)
@defer.inlineCallbacks
def forget(self, user, room_id):
user_id = user.to_string()
member = yield self.state_handler.get_current_state(
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
)
membership = member.membership if member else None
if membership is not None and membership not in [
Membership.LEAVE, Membership.BAN
]:
raise SynapseError(400, "User %s in room %s" % (
user_id, room_id
))
if membership:
yield self.store.forget(user_id, room_id)


@@ -0,0 +1,102 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.handlers.room_member import RoomMemberHandler
from synapse.replication.http.membership import (
remote_join, remote_reject_invite, get_or_register_3pid_guest,
notify_user_membership_change,
)
logger = logging.getLogger(__name__)
class RoomMemberWorkerHandler(RoomMemberHandler):
@defer.inlineCallbacks
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
"""Implements RoomMemberHandler._remote_join
"""
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
ret = yield remote_join(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
requester=requester,
remote_room_hosts=remote_room_hosts,
room_id=room_id,
user_id=user.to_string(),
content=content,
)
yield self._user_joined_room(user, room_id)
defer.returnValue(ret)
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
"""Implements RoomMemberHandler._remote_reject_invite
"""
return remote_reject_invite(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
requester=requester,
remote_room_hosts=remote_room_hosts,
room_id=room_id,
user_id=target.to_string(),
)
def _user_joined_room(self, target, room_id):
"""Implements RoomMemberHandler._user_joined_room
"""
return notify_user_membership_change(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
user_id=target.to_string(),
room_id=room_id,
change="joined",
)
def _user_left_room(self, target, room_id):
"""Implements RoomMemberHandler._user_left_room
"""
return notify_user_membership_change(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
user_id=target.to_string(),
room_id=room_id,
change="left",
)
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Implements RoomMemberHandler.get_or_register_3pid_guest
"""
return get_or_register_3pid_guest(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
requester=requester,
medium=medium,
address=address,
inviter_user_id=inviter_user_id,
)


@@ -169,7 +169,7 @@ class SyncHandler(object):
self.presence_handler = hs.get_presence_handler()
self.event_sources = hs.get_event_sources()
self.clock = hs.get_clock()
self.response_cache = ResponseCache(hs, "sync")
self.state = hs.get_state_handler()
def wait_for_sync_for_user(self, sync_config, since_token=None, timeout=0,


@@ -12,8 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
from twisted.internet import defer, reactor
from twisted.internet.error import ConnectError
@@ -33,7 +31,7 @@ SERVER_CACHE = {}
# our record of an individual server which can be tried to reach a destination.
#
# "host" is the hostname acquired from the SRV record. Except when there's
# no SRV record, in which case it is the original hostname.
_Server = collections.namedtuple(
"_Server", "priority weight host port expires"
@@ -297,20 +295,13 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t
payload = answer.payload
servers.append(_Server(
host=str(payload.target),
port=int(payload.port),
priority=int(payload.priority),
weight=int(payload.weight),
expires=int(clock.time()) + answer.ttl,
))
servers.sort()
cache[service_name] = list(servers)
@@ -328,81 +319,3 @@
raise e
defer.returnValue(servers)
@defer.inlineCallbacks
def _get_hosts_for_srv_record(dns_client, host):
"""Look up each of the hosts in a SRV record
Args:
dns_client (twisted.names.dns.IResolver):
host (basestring): host to look up
Returns:
Deferred[list[(str, int)]]: a list of (host, ttl) pairs
"""
ip4_servers = []
ip6_servers = []
def cb(res):
# lookupAddress and lookupIP6Address return a three-tuple
# giving the answer, authority, and additional sections of the
# response.
#
# we only care about the answers.
return res[0]
def eb(res, record_type):
if res.check(DNSNameError):
return []
logger.warn("Error looking up %s for %s: %s", record_type, host, res)
return res
# no logcontexts here, so we can safely fire these off and gatherResults
d1 = dns_client.lookupAddress(host).addCallbacks(
cb, eb, errbackArgs=("A", ))
d2 = dns_client.lookupIPV6Address(host).addCallbacks(
cb, eb, errbackArgs=("AAAA", ))
results = yield defer.DeferredList(
[d1, d2], consumeErrors=True)
# if all of the lookups failed, raise an exception rather than blowing out
# the cache with an empty result.
if results and all(s == defer.FAILURE for (s, _) in results):
defer.returnValue(results[0][1])
for (success, result) in results:
if success == defer.FAILURE:
continue
for answer in result:
if not answer.payload:
continue
try:
if answer.type == dns.A:
ip = answer.payload.dottedQuad()
ip4_servers.append((ip, answer.ttl))
elif answer.type == dns.AAAA:
ip = socket.inet_ntop(
socket.AF_INET6, answer.payload.address,
)
ip6_servers.append((ip, answer.ttl))
else:
# the most likely candidate here is a CNAME record.
# rfc2782 says srvs may not point to aliases.
logger.warn(
"Ignoring unexpected DNS record type %s for %s",
answer.type, host,
)
continue
except Exception as e:
logger.warn("Ignoring invalid DNS response for %s: %s",
host, e)
continue
# keep the ipv4 results before the ipv6 results, mostly to match historical
# behaviour.
defer.returnValue(ip4_servers + ip6_servers)
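
The net effect of this change: each SRV answer now becomes exactly one _Server entry keyed by the record's target hostname, and A/AAAA resolution is deferred to connection time instead of being done eagerly per address. A rough standalone illustration of the new mapping, where FakeSRV stands in for twisted.names' SRV record payload:

    import collections

    _Server = collections.namedtuple("_Server", "priority weight host port expires")
    FakeSRV = collections.namedtuple("FakeSRV", "target port priority weight")

    def srv_to_server(payload, answer_ttl, now):
        # One SRV answer -> one _Server; "host" keeps the SRV target name.
        return _Server(
            host=str(payload.target),
            port=int(payload.port),
            priority=int(payload.priority),
            weight=int(payload.weight),
            expires=int(now) + answer_ttl,
        )

    print(srv_to_server(FakeSRV("matrix.example.com", 8448, 10, 5), 300, 1000000))
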


@@ -286,7 +286,8 @@ class MatrixFederationHttpClient(object):
headers_dict[b"Authorization"] = auth_headers
@defer.inlineCallbacks
def put_json(self, destination, path, args={}, data={},
json_data_callback=None,
long_retries=False, timeout=None,
ignore_backoff=False,
backoff_on_404=False):
@@ -296,6 +297,7 @@
destination (str): The remote server to send the HTTP request
to.
path (str): The HTTP path.
args (dict): query params
data (dict): A dict containing the data that will be used as
the request body. This will be encoded as JSON.
json_data_callback (callable): A callable returning the dict to
@@ -342,6 +344,7 @@
path,
body_callback=body_callback,
headers_dict={"Content-Type": ["application/json"]},
query_bytes=encode_query_args(args),
long_retries=long_retries,
timeout=timeout,
ignore_backoff=ignore_backoff,
@@ -373,6 +376,7 @@
giving up. None indicates no timeout.
ignore_backoff (bool): true to ignore the historical backoff data and
try the request anyway.
args (dict): query params
Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.
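
The new args parameter gives federation PUTs a way to carry query parameters alongside the JSON body, which the groups code needs for requester_user_id (see PR #3070 in the changelog). A hedged sketch of a call site; the destination, path, and body shape here are illustrative assumptions, not taken from this diff:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def set_remote_join_policy(client, group_id, requester_user_id):
        # Hypothetical call: the query string rides in `args`,
        # the JSON body in `data`.
        result = yield client.put_json(
            destination="remote.example.com",
            path="/_matrix/federation/v1/groups/%s/settings/m.join_policy"
                 % (group_id,),
            args={"requester_user_id": requester_user_id},
            data={"m.join_policy": {"type": "open"}},
        )
        defer.returnValue(result)
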


@@ -37,7 +37,7 @@ from twisted.web.util import redirectTo
import collections
import logging
import urllib
import simplejson
logger = logging.getLogger(__name__)
@@ -113,6 +113,11 @@ response_db_sched_duration = metrics.register_counter(
"response_db_sched_duration_seconds", labels=["method", "servlet", "tag"]
)
# size in bytes of the response written
response_size = metrics.register_counter(
"response_size", labels=["method", "servlet", "tag"]
)
_next_request_id = 0
@@ -324,7 +329,7 @@ class JsonResource(HttpServer, resource.Resource):
register_paths, so will return (possibly via Deferred) either
None, or a tuple of (http code, response body).
"""
if request.method == b"OPTIONS":
return _options_handler, {}
# Loop through all the registered callbacks to check if the method
@@ -426,6 +431,8 @@ class RequestMetrics(object):
context.db_sched_duration_ms / 1000., request.method, self.name, tag
)
response_size.inc_by(request.sentLength, request.method, self.name, tag)
class RootRedirect(resource.Resource):
"""Redirects the root '/' path to another path."""
@@ -461,8 +468,7 @@ def respond_with_json(request, code, json_object, send_cors=False,
if canonical_json or synapse.events.USE_FROZEN_DICTS:
json_bytes = encode_canonical_json(json_object)
else:
json_bytes = simplejson.dumps(json_object)
return respond_with_json_bytes(
request, code, json_bytes,
@@ -489,6 +495,7 @@ def respond_with_json_bytes(request, code, json_bytes, send_cors=False,
request.setHeader(b"Content-Type", b"application/json")
request.setHeader(b"Server", version_string)
request.setHeader(b"Content-Length", b"%d" % (len(json_bytes),))
request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")
if send_cors:
set_cors_headers(request)
@@ -536,7 +543,7 @@ def finish_request(request):
def _request_user_agent_is_curl(request):
user_agents = request.requestHeaders.getRawHeaders(
b"User-Agent", default=[]
)
for user_agent in user_agents:
if "curl" in user_agent:


@@ -20,7 +20,7 @@ import logging
import re
import time
ACCESS_TOKEN_RE = re.compile(br'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
class SynapseRequest(Request):
@@ -43,12 +43,12 @@ class SynapseRequest(Request):
def get_redacted_uri(self):
return ACCESS_TOKEN_RE.sub(
br'\1<redacted>\3',
self.uri
)
def get_user_agent(self):
return self.requestHeaders.getRawHeaders(b"User-Agent", [None])[-1]
def started_processing(self):
self.site.access_logger.info(
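
Compiling ACCESS_TOKEN_RE from a bytes pattern keeps token redaction working now that request.uri arrives as bytes. A quick standalone check of the behaviour:

    import re

    ACCESS_TOKEN_RE = re.compile(br'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')

    uri = b"/_matrix/client/r0/sync?access_token=secret123&since=s42"
    redacted = ACCESS_TOKEN_RE.sub(br'\1<redacted>\3', uri)
    assert redacted == b"/_matrix/client/r0/sync?access_token=<redacted>&since=s42"
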


@@ -17,12 +17,13 @@ import logging
import functools
import time
import gc
import platform
from twisted.internet import reactor
from .metric import (
CounterMetric, CallbackMetric, DistributionMetric, CacheMetric,
MemoryUsageMetric, GaugeMetric,
)
from .process_collector import register_process_collector
@@ -30,6 +31,7 @@ from .process_collector import register_process_collector
logger = logging.getLogger(__name__)
running_on_pypy = platform.python_implementation() == 'PyPy'
all_metrics = []
all_collectors = []
@@ -63,6 +65,13 @@ class Metrics(object):
"""
return self._register(CounterMetric, *args, **kwargs)
def register_gauge(self, *args, **kwargs):
"""
Returns:
GaugeMetric
"""
return self._register(GaugeMetric, *args, **kwargs)
def register_callback(self, *args, **kwargs):
"""
Returns:
@@ -142,6 +151,32 @@ reactor_metrics = get_metrics_for("python.twisted.reactor")
tick_time = reactor_metrics.register_distribution("tick_time")
pending_calls_metric = reactor_metrics.register_distribution("pending_calls")
synapse_metrics = get_metrics_for("synapse")
# Used to track where various components have processed in the event stream,
# e.g. federation sending, appservice sending, etc.
event_processing_positions = synapse_metrics.register_gauge(
"event_processing_positions", labels=["name"],
)
# Used to track the current max events stream position
event_persisted_position = synapse_metrics.register_gauge(
"event_persisted_position",
)
# Used to track the received_ts of the last event processed by various
# components
event_processing_last_ts = synapse_metrics.register_gauge(
"event_processing_last_ts", labels=["name"],
)
# Used to track the lag processing events. This is the time difference
# between the last processed event's received_ts and the time it was
# finished being processed.
event_processing_lag = synapse_metrics.register_gauge(
"event_processing_lag", labels=["name"],
)
def runUntilCurrentTimer(func):
@@ -174,6 +209,9 @@ def runUntilCurrentTimer(func):
tick_time.inc_by(end - start)
pending_calls_metric.inc_by(num_pending)
if running_on_pypy:
return ret
# Check if we need to do a manual GC (since it's been disabled), and do
# one if necessary.
threshold = gc.get_threshold()
@@ -206,6 +244,7 @@ try:
# We manually run the GC each reactor tick so that we can get some metrics
# about time spent doing GC,
if not running_on_pypy:
gc.disable()
except AttributeError:
pass


@@ -115,7 +115,7 @@ class CounterMetric(BaseMetric):
# dict[list[str]]: value for each set of label values. the keys are the
# label values, in the same order as the labels in self.labels.
#
# (if the metric is a scalar, the (single) key is the empty tuple).
self.counts = {}
# Scalar metrics are never empty
@@ -145,6 +145,36 @@ class CounterMetric(BaseMetric):
)
class GaugeMetric(BaseMetric):
"""A metric that can go up or down
"""
def __init__(self, *args, **kwargs):
super(GaugeMetric, self).__init__(*args, **kwargs)
# dict[list[str]]: value for each set of label values. the keys are the
# label values, in the same order as the labels in self.labels.
#
# (if the metric is a scalar, the (single) key is the empty tuple).
self.gauges = {}
def set(self, v, *values):
if len(values) != self.dimension():
raise ValueError(
"Expected as many values to set() as labels (%d)" % (self.dimension())
)
# TODO: should assert that the tag values are all strings
self.gauges[values] = v
def render(self):
return flatten(
self._render_for_labels(k, self.gauges[k])
for k in sorted(self.gauges.keys())
)
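
Unlike CounterMetric, a gauge records the latest value per label tuple and is free to move down as well as up, which is what the event-stream position metrics registered earlier need. A toy stand-in demonstrating those semantics (not the real class, which also handles rendering and scalar metrics):

    class TinyGauge(object):
        def __init__(self):
            self.gauges = {}

        def set(self, v, *labels):
            # last write wins; no accumulation as a counter would do
            self.gauges[labels] = v

    g = TinyGauge()
    g.set(1500, "federation_sender")   # stream position advances...
    g.set(1200, "federation_sender")   # ...or rewinds after a restart
    print(g.gauges)  # {('federation_sender',): 1200}
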
class CallbackMetric(BaseMetric):
"""A metric that returns the numeric value returned by a callback whenever
it is rendered. Typically this is used to implement gauges that yield the


@@ -34,9 +34,8 @@ REQUIREMENTS = {
"bcrypt": ["bcrypt>=3.1.0"],
"pillow": ["PIL"],
"pydenticon": ["pydenticon"],
"pysaml2>=3.0.0": ["saml2>=3.0.0"],
"sortedcontainers": ["sortedcontainers"],
"pymacaroons-pynacl": ["pymacaroons"],
"msgpack-python>=0.3.0": ["msgpack"],
"phonenumbers>=8.2.0": ["phonenumbers"],


@@ -13,10 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.http.server import JsonResource
from synapse.replication.http import membership, send_event
REPLICATION_PREFIX = "/_synapse/replication"
@@ -29,3 +27,4 @@ class ReplicationRestResource(JsonResource):
def register_servlets(self, hs):
send_event.register_servlets(hs, self)
membership.register_servlets(hs, self)


@@ -0,0 +1,334 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import re
from twisted.internet import defer
from synapse.api.errors import SynapseError, MatrixCodeMessageException
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.types import Requester, UserID
from synapse.util.distributor import user_left_room, user_joined_room
logger = logging.getLogger(__name__)
@defer.inlineCallbacks
def remote_join(client, host, port, requester, remote_room_hosts,
room_id, user_id, content):
"""Ask the master to do a remote join for the given user to the given room
Args:
client (SimpleHttpClient)
host (str): host of master
port (int): port on master listening for HTTP replication
requester (Requester)
remote_room_hosts (list[str]): Servers to try and join via
room_id (str)
user_id (str)
content (dict): The event content to use for the join event
Returns:
Deferred
"""
uri = "http://%s:%s/_synapse/replication/remote_join" % (host, port)
payload = {
"requester": requester.serialize(),
"remote_room_hosts": remote_room_hosts,
"room_id": room_id,
"user_id": user_id,
"content": content,
}
try:
result = yield client.post_json_get_json(uri, payload)
except MatrixCodeMessageException as e:
# We convert to SynapseError as we know that it was a SynapseError
# on the master process that we should send to the client. (And
# importantly, not stack traces everywhere)
raise SynapseError(e.code, e.msg, e.errcode)
defer.returnValue(result)
@defer.inlineCallbacks
def remote_reject_invite(client, host, port, requester, remote_room_hosts,
room_id, user_id):
"""Ask master to reject the invite for the user and room.
Args:
client (SimpleHttpClient)
host (str): host of master
port (int): port on master listening for HTTP replication
requester (Requester)
remote_room_hosts (list[str]): Servers to try and reject via
room_id (str)
user_id (str)
Returns:
Deferred
"""
uri = "http://%s:%s/_synapse/replication/remote_reject_invite" % (host, port)
payload = {
"requester": requester.serialize(),
"remote_room_hosts": remote_room_hosts,
"room_id": room_id,
"user_id": user_id,
}
try:
result = yield client.post_json_get_json(uri, payload)
except MatrixCodeMessageException as e:
# We convert to SynapseError as we know that it was a SynapseError
# on the master process that we should send to the client. (And
# importantly, not stack traces everywhere)
raise SynapseError(e.code, e.msg, e.errcode)
defer.returnValue(result)
@defer.inlineCallbacks
def get_or_register_3pid_guest(client, host, port, requester,
medium, address, inviter_user_id):
"""Ask the master to get/create a guest account for given 3PID.
Args:
client (SimpleHttpClient)
host (str): host of master
port (int): port on master listening for HTTP replication
requester (Requester)
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
Returns:
Deferred[(str, str)]: A 2-tuple of `(user_id, access_token)` of the
3PID guest account.
"""
uri = "http://%s:%s/_synapse/replication/get_or_register_3pid_guest" % (host, port)
payload = {
"requester": requester.serialize(),
"medium": medium,
"address": address,
"inviter_user_id": inviter_user_id,
}
try:
result = yield client.post_json_get_json(uri, payload)
except MatrixCodeMessageException as e:
# We convert to SynapseError as we know that it was a SynapseError
# on the master process that we should send to the client. (And
# importantly, not stack traces everywhere)
raise SynapseError(e.code, e.msg, e.errcode)
defer.returnValue(result)
@defer.inlineCallbacks
def notify_user_membership_change(client, host, port, user_id, room_id, change):
"""Notify master that a user has joined or left the room
Args:
client (SimpleHttpClient)
host (str): host of master
port (int): port on master listening for HTTP replication.
user_id (str)
room_id (str)
change (str): Either "joined" or "left"
Returns:
Deferred
"""
assert change in ("joined", "left")
uri = "http://%s:%s/_synapse/replication/user_%s_room" % (host, port, change)
payload = {
"user_id": user_id,
"room_id": room_id,
}
try:
result = yield client.post_json_get_json(uri, payload)
except MatrixCodeMessageException as e:
# We convert to SynapseError as we know that it was a SynapseError
# on the master process that we should send to the client. (And
# importantly, not stack traces everywhere)
raise SynapseError(e.code, e.msg, e.errcode)
defer.returnValue(result)
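
All four helpers above share one skeleton: serialize the arguments (including the Requester) to JSON, POST them to a well-known /_synapse/replication path on the master, and convert MatrixCodeMessageException back into SynapseError so the client sees the master's error rather than a stack trace. A condensed sketch of that shared shape, with generic names and a simplified error type standing in for the real ones:

    from twisted.internet import defer

    class MasterSideError(Exception):
        """Stand-in for MatrixCodeMessageException."""
        def __init__(self, code, msg):
            super(MasterSideError, self).__init__(msg)
            self.code = code
            self.msg = msg

    @defer.inlineCallbacks
    def call_master(client, host, port, endpoint, payload):
        # Generic worker-to-master call: POST JSON, re-raise master-side
        # failures in a client-presentable form.
        uri = "http://%s:%s/_synapse/replication/%s" % (host, port, endpoint)
        try:
            result = yield client.post_json_get_json(uri, payload)
        except MasterSideError as e:
            raise Exception("master returned %s: %s" % (e.code, e.msg))
        defer.returnValue(result)
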
class ReplicationRemoteJoinRestServlet(RestServlet):
PATTERNS = [re.compile("^/_synapse/replication/remote_join$")]
def __init__(self, hs):
super(ReplicationRemoteJoinRestServlet, self).__init__()
self.federation_handler = hs.get_handlers().federation_handler
self.store = hs.get_datastore()
self.clock = hs.get_clock()
@defer.inlineCallbacks
def on_POST(self, request):
content = parse_json_object_from_request(request)
remote_room_hosts = content["remote_room_hosts"]
room_id = content["room_id"]
user_id = content["user_id"]
event_content = content["content"]
requester = Requester.deserialize(self.store, content["requester"])
if requester.user:
request.authenticated_entity = requester.user.to_string()
logger.info(
"remote_join: %s into room: %s",
user_id, room_id,
)
yield self.federation_handler.do_invite_join(
remote_room_hosts,
room_id,
user_id,
event_content,
)
defer.returnValue((200, {}))
class ReplicationRemoteRejectInviteRestServlet(RestServlet):
PATTERNS = [re.compile("^/_synapse/replication/remote_reject_invite$")]
def __init__(self, hs):
super(ReplicationRemoteRejectInviteRestServlet, self).__init__()
self.federation_handler = hs.get_handlers().federation_handler
self.store = hs.get_datastore()
self.clock = hs.get_clock()
@defer.inlineCallbacks
def on_POST(self, request):
content = parse_json_object_from_request(request)
remote_room_hosts = content["remote_room_hosts"]
room_id = content["room_id"]
user_id = content["user_id"]
requester = Requester.deserialize(self.store, content["requester"])
if requester.user:
request.authenticated_entity = requester.user.to_string()
logger.info(
"remote_reject_invite: %s out of room: %s",
user_id, room_id,
)
try:
event = yield self.federation_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
user_id,
)
ret = event.get_pdu_json()
except Exception as e:
# if we were unable to reject the invite, just mark
# it as rejected on our end and plough ahead.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
user_id, room_id
)
ret = {}
defer.returnValue((200, ret))
class ReplicationRegister3PIDGuestRestServlet(RestServlet):
PATTERNS = [re.compile("^/_synapse/replication/get_or_register_3pid_guest$")]
def __init__(self, hs):
super(ReplicationRegister3PIDGuestRestServlet, self).__init__()
self.registration_handler = hs.get_handlers().registration_handler
self.store = hs.get_datastore()
self.clock = hs.get_clock()
@defer.inlineCallbacks
def on_POST(self, request):
content = parse_json_object_from_request(request)
medium = content["medium"]
address = content["address"]
inviter_user_id = content["inviter_user_id"]
requester = Requester.deserialize(self.store, content["requester"])
if requester.user:
request.authenticated_entity = requester.user.to_string()
logger.info("get_or_register_3pid_guest: %r", content)
ret = yield self.registration_handler.get_or_register_3pid_guest(
medium, address, inviter_user_id,
)
defer.returnValue((200, ret))
class ReplicationUserJoinedLeftRoomRestServlet(RestServlet):
PATTERNS = [re.compile("^/_synapse/replication/user_(?P<change>joined|left)_room$")]
def __init__(self, hs):
super(ReplicationUserJoinedLeftRoomRestServlet, self).__init__()
self.registration_handler = hs.get_handlers().registration_handler
self.store = hs.get_datastore()
self.clock = hs.get_clock()
self.distributor = hs.get_distributor()
def on_POST(self, request, change):
content = parse_json_object_from_request(request)
user_id = content["user_id"]
room_id = content["room_id"]
logger.info("user membership change: %s in %s", user_id, room_id)
user = UserID.from_string(user_id)
if change == "joined":
user_joined_room(self.distributor, user, room_id)
elif change == "left":
user_left_room(self.distributor, user, room_id)
else:
raise Exception("Unrecognized change: %r", change)
return (200, {})
def register_servlets(hs, http_server):
ReplicationRemoteJoinRestServlet(hs).register(http_server)
ReplicationRemoteRejectInviteRestServlet(hs).register(http_server)
ReplicationRegister3PIDGuestRestServlet(hs).register(http_server)
ReplicationUserJoinedLeftRoomRestServlet(hs).register(http_server)


@@ -115,7 +115,7 @@ class ReplicationSendEventRestServlet(RestServlet):
self.clock = hs.get_clock()
# The responses are tiny, so we may as well cache them for a while
self.response_cache = ResponseCache(hs, "send_event", timeout_ms=30 * 60 * 1000)
def on_PUT(self, request, event_id):
result = self.response_cache.get(event_id)


@@ -0,0 +1,21 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.storage.profile import ProfileWorkerStore
class SlavedProfileStore(ProfileWorkerStore, BaseSlavedStore):
pass


@@ -19,11 +19,13 @@ allowed to be sent by which side.
"""
import logging
import simplejson
logger = logging.getLogger(__name__)
_json_encoder = simplejson.JSONEncoder(namedtuple_as_object=False)
class Command(object):
"""The base command class.
@@ -100,14 +102,14 @@ class RdataCommand(Command):
return cls(
stream_name,
None if token == "batch" else int(token),
simplejson.loads(row_json)
)
def to_line(self):
return " ".join((
self.stream_name,
str(self.token) if self.token is not None else "batch",
_json_encoder.encode(self.row),
))
@@ -298,10 +300,12 @@ class InvalidateCacheCommand(Command):
def from_line(cls, line):
cache_func, keys_json = line.split(" ", 1)
return cls(cache_func, simplejson.loads(keys_json))
def to_line(self):
return " ".join((
self.cache_func, _json_encoder.encode(self.keys),
))
class UserIpCommand(Command):
@@ -325,14 +329,14 @@ class UserIpCommand(Command):
def from_line(cls, line):
user_id, jsn = line.split(" ", 1)
access_token, ip, user_agent, device_id, last_seen = simplejson.loads(jsn)
return cls(
user_id, access_token, ip, user_agent, device_id, last_seen
)
def to_line(self):
return self.user_id + " " + _json_encoder.encode((
self.access_token, self.ip, self.user_agent, self.device_id,
self.last_seen,
))
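
The module-level _json_encoder matters because replication rows are namedtuples: simplejson's default namedtuple_as_object=True would serialize them as JSON objects via _asdict(), whereas the wire format expects plain arrays. A standalone demonstration:

    import collections
    import simplejson

    Row = collections.namedtuple("Row", ("user_id", "room_id"))
    row = Row("@u:example.com", "!r:example.com")

    # The default would yield an object keyed by field name;
    # namedtuple_as_object=False keeps the compact array form.
    encoder = simplejson.JSONEncoder(namedtuple_as_object=False)
    assert encoder.encode(row) == '["@u:example.com", "!r:example.com"]'
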


@@ -44,7 +44,10 @@ class LogoutRestServlet(ClientV1RestServlet):
requester = yield self.auth.get_user_by_req(request)
except AuthError:
# this implies the access token has already been deleted.
defer.returnValue((401, {
"errcode": "M_UNKNOWN_TOKEN",
"error": "Access Token unknown or expired"
}))
else:
if requester.device_id is None:
# the access token wasn't associated with a device.


@@ -348,9 +348,9 @@ class RegisterRestServlet(ClientV1RestServlet):
admin = register_json.get("admin", None)
# It's important to check as we use null bytes as HMAC field separators
if b"\x00" in user:
raise SynapseError(400, "Invalid user")
if b"\x00" in password:
raise SynapseError(400, "Invalid password")
# str() because otherwise hmac complains that 'unicode' does not


@@ -30,7 +30,7 @@ from synapse.http.servlet import (
import logging
import urllib
import simplejson as json
logger = logging.getLogger(__name__)
@@ -165,17 +165,12 @@ class RoomStateEventRestServlet(ClientV1RestServlet):
content=content,
)
else:
event = yield self.event_creation_hander.create_and_send_nonmember_event(
requester,
event_dict,
txn_id=txn_id,
)
ret = {}
if event:
ret = {"event_id": event.event_id}
@@ -655,7 +650,12 @@ class RoomMembershipRestServlet(ClientV1RestServlet):
content=event_content,
)
return_value = {}
if membership_action == "join":
return_value["room_id"] = room_id
defer.returnValue((200, return_value))
def _has_3pid_invite_keys(self, content):
for key in {"id_server", "medium", "address"}:


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -401,6 +402,32 @@ class GroupInvitedUsersServlet(RestServlet):
defer.returnValue((200, result))
class GroupSettingJoinPolicyServlet(RestServlet):
"""Set group join policy
"""
PATTERNS = client_v2_patterns("/groups/(?P<group_id>[^/]*)/settings/m.join_policy$")
def __init__(self, hs):
super(GroupSettingJoinPolicyServlet, self).__init__()
self.auth = hs.get_auth()
self.groups_handler = hs.get_groups_local_handler()
@defer.inlineCallbacks
def on_PUT(self, request, group_id):
requester = yield self.auth.get_user_by_req(request)
requester_user_id = requester.user.to_string()
content = parse_json_object_from_request(request)
result = yield self.groups_handler.set_group_join_policy(
group_id,
requester_user_id,
content,
)
defer.returnValue((200, result))
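
Clients drive this servlet with a PUT whose body carries the policy object. Judging by the join-policy PRs (#3045, #3046) the body is an m.join_policy dict with a type of "invite" or "open", but that schema is inferred rather than shown in this diff, so treat the following request sketch as a hypothetical example:

    import json

    try:
        from urllib.request import Request, urlopen  # Python 3
    except ImportError:
        from urllib2 import Request, urlopen         # Python 2

    # Hypothetical request; homeserver URL, group ID and token are examples.
    body = json.dumps({"m.join_policy": {"type": "open"}}).encode("utf-8")
    req = Request(
        "https://hs.example.com/_matrix/client/r0/groups/"
        "%2Bexamplegroup%3Aexample.com/settings/m.join_policy",
        data=body,
        headers={
            "Authorization": "Bearer ACCESS_TOKEN",
            "Content-Type": "application/json",
        },
    )
    req.get_method = lambda: "PUT"
    # urlopen(req)  # would perform the PUT against a real homeserver
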
class GroupCreateServlet(RestServlet):
"""Create a group
"""
@@ -738,6 +765,7 @@ def register_servlets(hs, http_server):
GroupInvitedUsersServlet(hs).register(http_server)
GroupUsersServlet(hs).register(http_server)
GroupRoomServlet(hs).register(http_server)
GroupSettingJoinPolicyServlet(hs).register(http_server)
GroupCreateServlet(hs).register(http_server)
GroupAdminRoomsServlet(hs).register(http_server)
GroupAdminRoomsConfigServlet(hs).register(http_server)


@@ -20,7 +20,6 @@ import synapse
import synapse.types
from synapse.api.auth import get_access_token_from_request, has_access_token
from synapse.api.constants import LoginType
from synapse.api.errors import SynapseError, Codes, UnrecognizedRequestError
from synapse.http.servlet import (
RestServlet, parse_json_object_from_request, assert_params_in_request, parse_string
@@ -405,14 +404,6 @@ class RegisterRestServlet(RestServlet):
generate_token=False,
)
# auto-join the user to any rooms we're supposed to dump them into
fake_requester = synapse.types.create_requester(registered_user_id)
for r in self.hs.config.auto_join_rooms:
try:
yield self._join_user_to_room(fake_requester, r)
except Exception as e:
logger.error("Failed to join new user to %r: %r", r, e)
# remember that we've now registered that user account, and with
# what user ID (since the user may not have specified)
self.auth_handler.set_session_data(
@@ -445,29 +436,6 @@ class RegisterRestServlet(RestServlet):
def on_OPTIONS(self, _):
return 200, {}
@defer.inlineCallbacks
def _join_user_to_room(self, requester, room_identifier):
room_id = None
if RoomID.is_valid(room_identifier):
room_id = room_identifier
elif RoomAlias.is_valid(room_identifier):
room_alias = RoomAlias.from_string(room_identifier)
room_id, remote_room_hosts = (
yield self.room_member_handler.lookup_room_alias(room_alias)
)
room_id = room_id.to_string()
else:
raise SynapseError(400, "%s was not legal room ID or room alias" % (
room_identifier,
))
yield self.room_member_handler.update_membership(
requester=requester,
target=requester.user,
room_id=room_id,
action="join",
)
@defer.inlineCallbacks
def _do_appservice_registration(self, username, as_token, body):
user_id = yield self.registration_handler.appservice_register(


@@ -33,7 +33,7 @@ from ._base import set_timeline_upper_limit
import itertools
import logging
import simplejson as json
logger = logging.getLogger(__name__)


@@ -16,6 +16,8 @@
from twisted.internet import defer, threads
from twisted.protocols.basic import FileSender
import six
from ._base import Responder
from synapse.util.file_consumer import BackgroundFileConsumer
@@ -119,7 +121,7 @@ class MediaStorage(object):
os.remove(fname)
except Exception:
pass
six.reraise(t, v, tb)
if not finished_called:
raise Exception("Finished callback not called")


@@ -23,7 +23,7 @@ import re
import shutil
import sys
import traceback
import simplejson as json
import urlparse
from twisted.web.server import NOT_DONE_YET


@@ -47,7 +47,8 @@ from synapse.handlers.device import DeviceHandler
from synapse.handlers.e2e_keys import E2eKeysHandler
from synapse.handlers.presence import PresenceHandler
from synapse.handlers.room_list import RoomListHandler
from synapse.handlers.room_member import RoomMemberMasterHandler
from synapse.handlers.room_member_worker import RoomMemberWorkerHandler
from synapse.handlers.set_password import SetPasswordHandler
from synapse.handlers.sync import SyncHandler
from synapse.handlers.typing import TypingHandler
@@ -392,7 +393,9 @@ class HomeServer(object):
return SpamChecker(self)
def build_room_member_handler(self):
if self.config.worker_app:
return RoomMemberWorkerHandler(self)
return RoomMemberMasterHandler(self)
def build_federation_registry(self):
return FederationHandlerRegistry()


@ -132,7 +132,7 @@ class StateHandler(object):
state_map = yield self.store.get_events(state.values(), get_prev_content=False) state_map = yield self.store.get_events(state.values(), get_prev_content=False)
state = { state = {
key: state_map[e_id] for key, e_id in state.items() if e_id in state_map key: state_map[e_id] for key, e_id in state.iteritems() if e_id in state_map
} }
defer.returnValue(state) defer.returnValue(state)
@ -378,7 +378,7 @@ class StateHandler(object):
new_state = resolve_events_with_state_map(state_set_ids, state_map) new_state = resolve_events_with_state_map(state_set_ids, state_map)
new_state = { new_state = {
key: state_map[ev_id] for key, ev_id in new_state.items() key: state_map[ev_id] for key, ev_id in new_state.iteritems()
} }
return new_state return new_state
@ -458,15 +458,15 @@ class StateResolutionHandler(object):
# build a map from state key to the event_ids which set that state. # build a map from state key to the event_ids which set that state.
# dict[(str, str), set[str]) # dict[(str, str), set[str])
state = {} state = {}
for st in state_groups_ids.values(): for st in state_groups_ids.itervalues():
for key, e_id in st.items(): for key, e_id in st.iteritems():
state.setdefault(key, set()).add(e_id) state.setdefault(key, set()).add(e_id)
# build a map from state key to the event_ids which set that state, # build a map from state key to the event_ids which set that state,
# including only those where there are state keys in conflict. # including only those where there are state keys in conflict.
conflicted_state = { conflicted_state = {
k: list(v) k: list(v)
for k, v in state.items() for k, v in state.iteritems()
if len(v) > 1 if len(v) > 1
} }
@@ -480,36 +480,37 @@ class StateResolutionHandler(object):
            )
        else:
            new_state = {
-                key: e_ids.pop() for key, e_ids in state.items()
+                key: e_ids.pop() for key, e_ids in state.iteritems()
            }

-        # if the new state matches any of the input state groups, we can
-        # use that state group again. Otherwise we will generate a state_id
-        # which will be used as a cache key for future resolutions, but
-        # not get persisted.
-        state_group = None
-        new_state_event_ids = frozenset(new_state.values())
-        for sg, events in state_groups_ids.items():
-            if new_state_event_ids == frozenset(e_id for e_id in events):
-                state_group = sg
-                break
-
-        # TODO: We want to create a state group for this set of events, to
-        # increase cache hits, but we need to make sure that it doesn't
-        # end up as a prev_group without being added to the database
-
-        prev_group = None
-        delta_ids = None
-        for old_group, old_ids in state_groups_ids.iteritems():
-            if not set(new_state) - set(old_ids):
-                n_delta_ids = {
-                    k: v
-                    for k, v in new_state.iteritems()
-                    if old_ids.get(k) != v
-                }
-                if not delta_ids or len(n_delta_ids) < len(delta_ids):
-                    prev_group = old_group
-                    delta_ids = n_delta_ids
+        with Measure(self.clock, "state.create_group_ids"):
+            # if the new state matches any of the input state groups, we can
+            # use that state group again. Otherwise we will generate a state_id
+            # which will be used as a cache key for future resolutions, but
+            # not get persisted.
+            state_group = None
+            new_state_event_ids = frozenset(new_state.itervalues())
+            for sg, events in state_groups_ids.iteritems():
+                if new_state_event_ids == frozenset(e_id for e_id in events):
+                    state_group = sg
+                    break
+
+            # TODO: We want to create a state group for this set of events, to
+            # increase cache hits, but we need to make sure that it doesn't
+            # end up as a prev_group without being added to the database
+
+            prev_group = None
+            delta_ids = None
+            for old_group, old_ids in state_groups_ids.iteritems():
+                if not set(new_state) - set(old_ids):
+                    n_delta_ids = {
+                        k: v
+                        for k, v in new_state.iteritems()
+                        if old_ids.get(k) != v
+                    }
+                    if not delta_ids or len(n_delta_ids) < len(delta_ids):
+                        prev_group = old_group
+                        delta_ids = n_delta_ids

        cache = _StateCacheEntry(
            state=new_state,
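
Note: a toy walk-through (hypothetical event IDs, not from the diff) of the prev_group/delta_ids pass above. Only groups whose keys cover the new state qualify, and the qualifying group needing the smallest delta wins:

new_state = {("m.room.name", ""): "$n2", ("m.room.topic", ""): "$t1"}
state_groups_ids = {
    1: {("m.room.name", ""): "$n1", ("m.room.topic", ""): "$t1"},
    2: {("m.room.name", ""): "$n1"},  # missing the topic key: not a candidate
}
# group 1 qualifies and needs a single change, so:
#   prev_group = 1
#   delta_ids  = {("m.room.name", ""): "$n2"}
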
@@ -702,7 +703,7 @@ def _resolve_with_state(unconflicted_state_ids, conflicted_state_ids, auth_event_
    auth_events = {
        key: state_map[ev_id]
-        for key, ev_id in auth_event_ids.items()
+        for key, ev_id in auth_event_ids.iteritems()
        if ev_id in state_map
    }
@@ -740,7 +741,7 @@ def _resolve_state_events(conflicted_state, auth_events):
    auth_events.update(resolved_state)

-    for key, events in conflicted_state.items():
+    for key, events in conflicted_state.iteritems():
        if key[0] == EventTypes.JoinRules:
            logger.debug("Resolving conflicted join rules %r", events)
            resolved_state[key] = _resolve_auth_events(

@@ -750,7 +751,7 @@ def _resolve_state_events(conflicted_state, auth_events):
    auth_events.update(resolved_state)

-    for key, events in conflicted_state.items():
+    for key, events in conflicted_state.iteritems():
        if key[0] == EventTypes.Member:
            logger.debug("Resolving conflicted member lists %r", events)
            resolved_state[key] = _resolve_auth_events(

@@ -760,7 +761,7 @@ def _resolve_state_events(conflicted_state, auth_events):
    auth_events.update(resolved_state)

-    for key, events in conflicted_state.items():
+    for key, events in conflicted_state.iteritems():
        if key not in resolved_state:
            logger.debug("Resolving conflicted state %r:%r", key, events)
            resolved_state[key] = _resolve_normal_events(
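
Note: context for the .items() to .iteritems() churn in this file (PR #3006, "use .iter* to avoid copies"): on Python 2, dict.items() materialises a full list of pairs, while iteritems() yields them lazily, which matters for large state maps. Python 2 only; on Python 3, items() is already a lazy view.

state = {("m.room.member", "@alice:example.com"): "$event1"}
for key, e_id in state.iteritems():  # no intermediate list is built
    print(key, e_id)
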

View File

@@ -14,8 +14,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-from twisted.internet import defer
-
from synapse.storage.devices import DeviceStore
from .appservice import (
    ApplicationServiceStore, ApplicationServiceTransactionStore
@@ -244,13 +242,12 @@ class DataStore(RoomMemberStore, RoomStore,
        return [UserPresenceState(**row) for row in rows]

-    @defer.inlineCallbacks
    def count_daily_users(self):
        """
        Counts the number of users who used this homeserver in the last 24 hours.
        """
        def _count_users(txn):
-            yesterday = int(self._clock.time_msec()) - (1000 * 60 * 60 * 24),
+            yesterday = int(self._clock.time_msec()) - (1000 * 60 * 60 * 24)

            sql = """
                SELECT COALESCE(count(*), 0) FROM (
@@ -264,8 +261,91 @@ class DataStore(RoomMemberStore, RoomStore,
            count, = txn.fetchone()
            return count

-        ret = yield self.runInteraction("count_users", _count_users)
-        defer.returnValue(ret)
+        return self.runInteraction("count_users", _count_users)
+
+    def count_r30_users(self):
+        """
+        Counts the number of 30 day retained users, defined as:
+         * Users who have created their accounts more than 30 days ago
+         * Where last seen at most 30 days ago
+         * Where account creation and last_seen are > 30 days apart
+
+        Returns counts globally, as well as broken down by platform.
+        """
+        def _count_r30_users(txn):
+            thirty_days_in_secs = 86400 * 30
+            now = int(self._clock.time())
+            thirty_days_ago_in_secs = now - thirty_days_in_secs
+
+            sql = """
+                SELECT platform, COALESCE(count(*), 0) FROM (
+                    SELECT
+                        users.name, platform, users.creation_ts * 1000,
+                        MAX(uip.last_seen)
+                    FROM users
+                    INNER JOIN (
+                        SELECT
+                            user_id,
+                            last_seen,
+                            CASE
+                                WHEN user_agent LIKE '%%Android%%' THEN 'android'
+                                WHEN user_agent LIKE '%%iOS%%' THEN 'ios'
+                                WHEN user_agent LIKE '%%Electron%%' THEN 'electron'
+                                WHEN user_agent LIKE '%%Mozilla%%' THEN 'web'
+                                WHEN user_agent LIKE '%%Gecko%%' THEN 'web'
+                                ELSE 'unknown'
+                            END
+                            AS platform
+                        FROM user_ips
+                    ) uip
+                    ON users.name = uip.user_id
+                    AND users.appservice_id is NULL
+                    AND users.creation_ts < ?
+                    AND uip.last_seen/1000 > ?
+                    AND (uip.last_seen/1000) - users.creation_ts > 86400 * 30
+                    GROUP BY users.name, platform, users.creation_ts
+                ) u GROUP BY platform
+            """
+
+            results = {}
+            txn.execute(sql, (thirty_days_ago_in_secs,
+                              thirty_days_ago_in_secs))
+            for row in txn:
+                # `==`, not `is`, for the string comparison; and skip the
+                # catch-all bucket rather than fall through and record it.
+                if row[0] == 'unknown':
+                    continue
+                results[row[0]] = row[1]
+
+            sql = """
+                SELECT COALESCE(count(*), 0) FROM (
+                    SELECT users.name, users.creation_ts * 1000,
+                        MAX(uip.last_seen)
+                    FROM users
+                    INNER JOIN (
+                        SELECT
+                            user_id,
+                            last_seen
+                        FROM user_ips
+                    ) uip
+                    ON users.name = uip.user_id
+                    AND appservice_id is NULL
+                    AND users.creation_ts < ?
+                    AND uip.last_seen/1000 > ?
+                    AND (uip.last_seen/1000) - users.creation_ts > 86400 * 30
+                    GROUP BY users.name, users.creation_ts
+                ) u
+            """
+
+            txn.execute(sql, (thirty_days_ago_in_secs,
+                              thirty_days_ago_in_secs))
+
+            count, = txn.fetchone()
+            results['all'] = count
+
+            return results
+
+        return self.runInteraction("count_r30_users", _count_r30_users)
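
Note: restating the retention predicate the SQL above encodes, as a hedged single-user sketch (times in seconds since the epoch; not code from the diff):

THIRTY_DAYS_IN_SECS = 86400 * 30

def is_r30_retained(creation_ts, last_seen, now):
    # Mirrors the three ON conditions in the queries above.
    return (
        creation_ts < now - THIRTY_DAYS_IN_SECS and    # account > 30 days old
        last_seen > now - THIRTY_DAYS_IN_SECS and      # seen within 30 days
        last_seen - creation_ts > THIRTY_DAYS_IN_SECS  # > 30 days apart
    )
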
    def get_users(self):
        """Function to retrieve a list of users in users table.

View File

@@ -376,7 +376,7 @@ class SQLBaseStore(object):
        Returns:
            A list of dicts where the key is the column header.
        """
-        col_headers = list(intern(column[0]) for column in cursor.description)
+        col_headers = list(intern(str(column[0])) for column in cursor.description)
        results = list(
            dict(zip(col_headers, row)) for row in cursor
        )
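
Note: the added str() matters on Python 2, where intern() rejects unicode, and some drivers (psycopg2cffi under PyPy, enabled elsewhere in this diff) report column names as unicode. Presumed motivation, illustrated:

intern(str(u"user_id"))   # coerced first: fine
# intern(u"user_id")      # TypeError on Python 2: must be string, not unicode
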

View File

@@ -23,7 +23,7 @@ from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.caches.descriptors import cached, cachedList, cachedInlineCallbacks

import abc
-import ujson as json
+import simplejson as json
import logging

logger = logging.getLogger(__name__)

View File

@@ -19,7 +19,7 @@ from . import engines
from twisted.internet import defer

-import ujson as json
+import simplejson as json
import logging

logger = logging.getLogger(__name__)

View File

@@ -48,6 +48,13 @@ class ClientIpStore(background_updates.BackgroundUpdateStore):
            columns=["user_id", "device_id", "last_seen"],
        )

+        self.register_background_index_update(
+            "user_ips_last_seen_index",
+            index_name="user_ips_last_seen",
+            table="user_ips",
+            columns=["user_id", "last_seen"],
+        )
+
        # (user_id, access_token, ip) -> (user_agent, device_id, last_seen)
        self._batch_row_update = {}

View File

@@ -14,7 +14,7 @@
# limitations under the License.

import logging
-import ujson
+import simplejson

from twisted.internet import defer

@@ -85,7 +85,7 @@ class DeviceInboxStore(BackgroundUpdateStore):
            )
            rows = []
            for destination, edu in remote_messages_by_destination.items():
-                edu_json = ujson.dumps(edu)
+                edu_json = simplejson.dumps(edu)
                rows.append((destination, stream_id, now_ms, edu_json))

            txn.executemany(sql, rows)

@@ -177,7 +177,7 @@ class DeviceInboxStore(BackgroundUpdateStore):
                " WHERE user_id = ?"
            )
            txn.execute(sql, (user_id,))
-            message_json = ujson.dumps(messages_by_device["*"])
+            message_json = simplejson.dumps(messages_by_device["*"])
            for row in txn:
                # Add the message for all devices for this user on this
                # server.

@@ -199,7 +199,7 @@ class DeviceInboxStore(BackgroundUpdateStore):
                    # Only insert into the local inbox if the device exists on
                    # this server
                    device = row[0]
-                    message_json = ujson.dumps(messages_by_device[device])
+                    message_json = simplejson.dumps(messages_by_device[device])
                    messages_json_for_user[device] = message_json

            if messages_json_for_user:

@@ -253,7 +253,7 @@ class DeviceInboxStore(BackgroundUpdateStore):
            messages = []
            for row in txn:
                stream_pos = row[0]
-                messages.append(ujson.loads(row[1]))
+                messages.append(simplejson.loads(row[1]))
            if len(messages) < limit:
                stream_pos = current_stream_id
            return (messages, stream_pos)

@@ -389,7 +389,7 @@ class DeviceInboxStore(BackgroundUpdateStore):
            messages = []
            for row in txn:
                stream_pos = row[0]
-                messages.append(ujson.loads(row[1]))
+                messages.append(simplejson.loads(row[1]))
            if len(messages) < limit:
                stream_pos = current_stream_id
            return (messages, stream_pos)

View File

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
-import ujson as json
+import simplejson as json

from twisted.internet import defer

View File

@@ -17,7 +17,7 @@ from twisted.internet import defer
from synapse.util.caches.descriptors import cached

from canonicaljson import encode_canonical_json
-import ujson as json
+import simplejson as json

from ._base import SQLBaseStore

View File

@@ -18,6 +18,7 @@ from .postgres import PostgresEngine
from .sqlite3 import Sqlite3Engine

import importlib
+import platform


SUPPORTED_MODULE = {

@@ -31,6 +32,10 @@ def create_engine(database_config):
    engine_class = SUPPORTED_MODULE.get(name, None)

    if engine_class:
+        # pypy requires psycopg2cffi rather than psycopg2
+        if (name == "psycopg2" and
+                platform.python_implementation() == "PyPy"):
+            name = "psycopg2cffi"
        module = importlib.import_module(name)
        return engine_class(module, database_config)
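
Note: related background, based on psycopg2cffi's documented compatibility shim (not used by this hunk): outside synapse, the cffi port can be aliased so that `import psycopg2` keeps working unchanged on PyPy.

from psycopg2cffi import compat
compat.register()   # installs psycopg2cffi under the name "psycopg2"
import psycopg2     # now resolves to psycopg2cffi
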

View File

@@ -22,7 +22,7 @@ from synapse.types import RoomStreamToken
from .stream import lower_bound

import logging
-import ujson as json
+import simplejson as json

logger = logging.getLogger(__name__)

View File

@@ -14,15 +14,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-from synapse.storage.events_worker import EventsWorkerStore
+from collections import OrderedDict, deque, namedtuple
+from functools import wraps
+import logging

+import simplejson as json
from twisted.internet import defer

-from synapse.events import USE_FROZEN_DICTS
+from synapse.storage.events_worker import EventsWorkerStore
from synapse.util.async import ObservableDeferred
+from synapse.util.frozenutils import frozendict_json_encoder
from synapse.util.logcontext import (
-    PreserveLoggingContext, make_deferred_yieldable
+    PreserveLoggingContext, make_deferred_yieldable,
)
from synapse.util.logutils import log_function
from synapse.util.metrics import Measure
@@ -30,16 +34,8 @@ from synapse.api.constants import EventTypes
from synapse.api.errors import SynapseError
from synapse.util.caches.descriptors import cached, cachedInlineCallbacks
from synapse.types import get_domain_from_id
-from canonicaljson import encode_canonical_json
-
-from collections import deque, namedtuple, OrderedDict
-from functools import wraps

import synapse.metrics

-import logging
-import ujson as json
-
# these are only included to make the type annotations work
from synapse.events import EventBase  # noqa: F401
from synapse.events.snapshot import EventContext  # noqa: F401
@@ -53,13 +49,25 @@ event_counter = metrics.register_counter(
    "persisted_events_sep", labels=["type", "origin_type", "origin_entity"]
)

+# The number of times we are recalculating the current state
+state_delta_counter = metrics.register_counter(
+    "state_delta",
+)
+# The number of times we are recalculating state when there is only a
+# single forward extremity
+state_delta_single_event_counter = metrics.register_counter(
+    "state_delta_single_event",
+)
+# The number of times we are recalculating state when we could have reasonably
+# calculated the delta when we calculated the state for an event we were
+# persisting.
+state_delta_reuse_delta_counter = metrics.register_counter(
+    "state_delta_reuse_delta",
+)
+

def encode_json(json_object):
-    if USE_FROZEN_DICTS:
-        # ujson doesn't like frozen_dicts
-        return encode_canonical_json(json_object)
-    else:
-        return json.dumps(json_object, ensure_ascii=False)
+    return frozendict_json_encoder.encode(json_object)


class _EventPeristenceQueue(object):
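
Note: frozendict_json_encoder collapses the old USE_FROZEN_DICTS branch into a single encoder. A guess at its shape (the real one lives in synapse.util.frozenutils), as a minimal sketch:

import simplejson

def _handle_frozendict(obj):
    # Unfreeze frozendicts so they serialise like plain dicts; anything
    # else really is unserialisable.
    if type(obj).__name__ == "frozendict":
        return dict(obj)
    raise TypeError("Object of type %s is not JSON serializable" % type(obj))

frozendict_json_encoder = simplejson.JSONEncoder(default=_handle_frozendict)
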
@@ -369,7 +377,8 @@ class EventsStore(EventsWorkerStore):
                        room_id, ev_ctx_rm, latest_event_ids
                    )

-                    if new_latest_event_ids == set(latest_event_ids):
+                    latest_event_ids = set(latest_event_ids)
+                    if new_latest_event_ids == latest_event_ids:
                        # No change in extremities, so no change in state
                        continue
@@ -390,6 +399,26 @@ class EventsStore(EventsWorkerStore):
                    if all_single_prev_not_state:
                        continue

+                    state_delta_counter.inc()
+                    if len(new_latest_event_ids) == 1:
+                        state_delta_single_event_counter.inc()
+
+                        # This is a fairly handwavey check to see if we could
+                        # have guessed what the delta would have been when
+                        # processing one of these events.
+                        # What we're interested in is if the latest extremities
+                        # were the same when we created the event as they are
+                        # now. When this server creates a new event (as opposed
+                        # to receiving it over federation) it will use the
+                        # forward extremities as the prev_events, so we can
+                        # guess this by looking at the prev_events and checking
+                        # if they match the current forward extremities.
+                        for ev, _ in ev_ctx_rm:
+                            prev_event_ids = set(e for e, _ in ev.prev_events)
+                            if latest_event_ids == prev_event_ids:
+                                state_delta_reuse_delta_counter.inc()
+                                break
+
                    logger.info(
                        "Calculating state delta for room %s", room_id,
                    )
@@ -415,6 +444,9 @@ class EventsStore(EventsWorkerStore):
                new_forward_extremeties=new_forward_extremeties,
            )
            persist_event_counter.inc_by(len(chunk))
+            synapse.metrics.event_persisted_position.set(
+                chunk[-1][0].internal_metadata.stream_ordering,
+            )
            for event, context in chunk:
                if context.app_service:
                    origin_type = "local"

View File

@@ -28,7 +28,7 @@ from synapse.api.errors import SynapseError
from collections import namedtuple

import logging
-import ujson as json
+import simplejson as json

# these are only included to make the type annotations work
from synapse.events import EventBase  # noqa: F401
@@ -51,6 +51,26 @@ _EventCacheEntry = namedtuple("_EventCacheEntry", ("event", "redacted_event"))

class EventsWorkerStore(SQLBaseStore):
+    def get_received_ts(self, event_id):
+        """Get received_ts (when it was persisted) for the event.
+
+        Raises an exception for unknown events.
+
+        Args:
+            event_id (str)
+
+        Returns:
+            Deferred[int|None]: Timestamp in milliseconds, or None for events
+            that were persisted before received_ts was implemented.
+        """
+        return self._simple_select_one_onecol(
+            table="events",
+            keyvalues={
+                "event_id": event_id,
+            },
+            retcol="received_ts",
+            desc="get_received_ts",
+        )
+
    @defer.inlineCallbacks
    def get_event(self, event_id, check_redacted=True,
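
Note: a hypothetical caller of the new get_received_ts, in the usual inlineCallbacks style; self.store and self.clock are assumed attributes, as synapse handlers typically have.

from twisted.internet import defer

@defer.inlineCallbacks
def event_age_ms(self, event_id):
    received_ts = yield self.store.get_received_ts(event_id)
    if received_ts is None:
        # persisted before received_ts existed
        defer.returnValue(None)
    defer.returnValue(self.clock.time_msec() - received_ts)
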

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
+# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

@@ -19,7 +20,7 @@ from synapse.api.errors import SynapseError
from ._base import SQLBaseStore

-import ujson as json
+import simplejson as json

# The category ID for the "default" category. We don't store as null in the
@@ -29,6 +30,24 @@ _DEFAULT_ROLE_ID = ""

class GroupServerStore(SQLBaseStore):
+    def set_group_join_policy(self, group_id, join_policy):
+        """Set the join policy of a group.
+
+        join_policy can be one of:
+         * "invite"
+         * "open"
+        """
+        return self._simple_update_one(
+            table="groups",
+            keyvalues={
+                "group_id": group_id,
+            },
+            updatevalues={
+                "join_policy": join_policy,
+            },
+            desc="set_group_join_policy",
+        )
+
    def get_group(self, group_id):
        return self._simple_select_one(
            table="groups",

@@ -36,10 +55,11 @@ class GroupServerStore(SQLBaseStore):
                "group_id": group_id,
            },
            retcols=(
-                "name", "short_description", "long_description", "avatar_url", "is_public"
+                "name", "short_description", "long_description",
+                "avatar_url", "is_public", "join_policy",
            ),
            allow_none=True,
-            desc="is_user_in_group",
+            desc="get_group",
        )

    def get_users_in_group(self, group_id, include_private=False):
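
Note: a hypothetical handler-side caller, sketching how the group join-policy API (PRs #3045 and #3070) might funnel into this store method; the request-body shape is an assumption, and SynapseError is assumed imported as at the top of this file.

from twisted.internet import defer

@defer.inlineCallbacks
def on_set_join_policy(self, group_id, content):
    # e.g. content == {"m.join_policy": {"type": "open"}}
    join_policy = content.get("m.join_policy", {}).get("type")
    if join_policy not in ("invite", "open"):
        raise SynapseError(400, "Bad value for 'm.join_policy'")
    yield self.store.set_group_join_policy(group_id, join_policy)
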

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
+# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

View File

@@ -24,14 +24,7 @@ from ._base import SQLBaseStore
BATCH_SIZE = 100


-class ProfileStore(SQLBaseStore):
-    def create_profile(self, user_localpart):
-        return self._simple_insert(
-            table="profiles",
-            values={"user_id": user_localpart},
-            desc="create_profile",
-        )
-
+class ProfileWorkerStore(SQLBaseStore):
    @defer.inlineCallbacks
    def get_profileinfo(self, user_localpart):
        try:

@@ -64,17 +57,6 @@ class ProfileStore(SQLBaseStore):
            desc="get_profile_displayname",
        )

-    def set_profile_displayname(self, user_localpart, new_displayname, batchnum):
-        return self._simple_update_one(
-            table="profiles",
-            keyvalues={"user_id": user_localpart},
-            updatevalues={
-                "displayname": new_displayname,
-                "batch": batchnum,
-            },
-            desc="set_profile_displayname",
-        )
-
    def get_profile_avatar_url(self, user_localpart):
        return self._simple_select_one_onecol(
            table="profiles",

@@ -83,17 +65,6 @@ class ProfileStore(SQLBaseStore):
            desc="get_profile_avatar_url",
        )

-    def set_profile_avatar_url(self, user_localpart, new_avatar_url, batchnum):
-        return self._simple_update_one(
-            table="profiles",
-            keyvalues={"user_id": user_localpart},
-            updatevalues={
-                "avatar_url": new_avatar_url,
-                "batch": batchnum,
-            },
-            desc="set_profile_avatar_url",
-        )
-
    @defer.inlineCallbacks
    def get_latest_profile_replication_batch_number(self):
        def f(txn):

@@ -156,6 +127,37 @@ class ProfileStore(SQLBaseStore):
            desc="get_from_remote_profile_cache",
        )

+
+class ProfileStore(ProfileWorkerStore):
+    def create_profile(self, user_localpart):
+        return self._simple_insert(
+            table="profiles",
+            values={"user_id": user_localpart},
+            desc="create_profile",
+        )
+
+    def set_profile_displayname(self, user_localpart, new_displayname, batchnum):
+        return self._simple_update_one(
+            table="profiles",
+            keyvalues={"user_id": user_localpart},
+            updatevalues={
+                "displayname": new_displayname,
+                "batch": batchnum,
+            },
+            desc="set_profile_displayname",
+        )
+
+    def set_profile_avatar_url(self, user_localpart, new_avatar_url, batchnum):
+        return self._simple_update_one(
+            table="profiles",
+            keyvalues={"user_id": user_localpart},
+            updatevalues={
+                "avatar_url": new_avatar_url,
+                "batch": batchnum,
+            },
+            desc="set_profile_avatar_url",
+        )
+
    def add_remote_profile_cache(self, user_id, displayname, avatar_url):
        """Ensure we are caching the remote user's profiles.

View File

@@ -23,7 +23,7 @@ from twisted.internet import defer

import abc
import logging
-import ujson as json
+import simplejson as json

logger = logging.getLogger(__name__)

View File

@@ -460,14 +460,12 @@ class RegistrationStore(RegistrationWorkerStore,
        """
        def _find_next_generated_user_id(txn):
            txn.execute("SELECT name FROM users")
-            rows = self.cursor_to_dict(txn)

            regex = re.compile("^@(\d+):")

            found = set()

-            for r in rows:
-                user_id = r["name"]
+            for user_id, in txn:
                match = regex.search(user_id)
                if match:
                    found.add(int(match.group(1)))
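
Note: the rewritten loop streams rows straight off the DB-API cursor instead of materialising cursor_to_dict()'s list of dicts; the trailing comma unpacks each single-column row tuple. Stand-alone illustration:

rows = [("@1234:example.com",), ("@42:example.com",)]  # stand-in for txn
for user_id, in rows:
    print(user_id)
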

View File

@@ -22,7 +22,7 @@ from synapse.util.caches.descriptors import cached, cachedInlineCallbacks

import collections
import logging
-import ujson as json
+import simplejson as json
import re

logger = logging.getLogger(__name__)

@@ -157,6 +157,18 @@ class RoomWorkerStore(SQLBaseStore):
            "get_public_room_changes", get_public_room_changes_txn
        )

+    @cached(max_entries=10000)
+    def is_room_blocked(self, room_id):
+        return self._simple_select_one_onecol(
+            table="blocked_rooms",
+            keyvalues={
+                "room_id": room_id,
+            },
+            retcol="1",
+            allow_none=True,
+            desc="is_room_blocked",
+        )
+

class RoomStore(RoomWorkerStore, SearchStore):
@@ -485,18 +497,6 @@ class RoomStore(RoomWorkerStore, SearchStore):
        else:
            defer.returnValue(None)

-    @cached(max_entries=10000)
-    def is_room_blocked(self, room_id):
-        return self._simple_select_one_onecol(
-            table="blocked_rooms",
-            keyvalues={
-                "room_id": room_id,
-            },
-            retcol="1",
-            allow_none=True,
-            desc="is_room_blocked",
-        )
-
    @defer.inlineCallbacks
    def block_room(self, room_id, user_id):
        yield self._simple_insert(

@@ -507,7 +507,11 @@ class RoomStore(RoomWorkerStore, SearchStore):
            },
            desc="block_room",
        )
-        self.is_room_blocked.invalidate((room_id,))
+        yield self.runInteraction(
+            "block_room_invalidation",
+            self._invalidate_cache_and_stream,
+            self.is_room_blocked, (room_id,),
+        )

    def get_media_mxcs_in_room(self, room_id):
        """Retrieves all the local and remote media MXC URIs in a given room
@@ -590,7 +594,8 @@ class RoomStore(RoomWorkerStore, SearchStore):
        while next_token:
            sql = """
-                SELECT stream_ordering, content FROM events
+                SELECT stream_ordering, json FROM events
+                JOIN event_json USING (event_id)
                WHERE room_id = ?
                    AND stream_ordering < ?
                    AND contains_url = ? AND outlier = ?

@@ -602,8 +607,8 @@ class RoomStore(RoomWorkerStore, SearchStore):
            next_token = None
            for stream_ordering, content_json in txn:
                next_token = stream_ordering
-                content = json.loads(content_json)
+                event_json = json.loads(content_json)
+                content = event_json["content"]
                content_url = content.get("url")
                thumbnail_url = content.get("info", {}).get("thumbnail_url")

View File

@@ -28,7 +28,7 @@ from synapse.api.constants import Membership, EventTypes
from synapse.types import get_domain_from_id

import logging
-import ujson as json
+import simplejson as json

logger = logging.getLogger(__name__)

@@ -645,8 +645,9 @@ class RoomMemberStore(RoomMemberWorkerStore):
        def add_membership_profile_txn(txn):
            sql = ("""
-                SELECT stream_ordering, event_id, events.room_id, content
+                SELECT stream_ordering, event_id, events.room_id, event_json.json
                FROM events
+                INNER JOIN event_json USING (event_id)
                INNER JOIN room_memberships USING (event_id)
                WHERE ? <= stream_ordering AND stream_ordering < ?
                AND type = 'm.room.member'

@@ -667,7 +668,8 @@ class RoomMemberStore(RoomMemberWorkerStore):
                event_id = row["event_id"]
                room_id = row["room_id"]
                try:
-                    content = json.loads(row["content"])
+                    event_json = json.loads(row["json"])
+                    content = event_json['content']
                except Exception:
                    continue
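
Note: this is the same pattern as the room.py and search.py hunks in this diff (PR #3060, "remove uses of events.content"): read the canonical event JSON and take its content, instead of the events.content column. In miniature:

import simplejson as json

row = {"json": '{"type": "m.room.member", "content": {"membership": "join"}}'}
event_json = json.loads(row["json"])
content = event_json["content"]
assert content["membership"] == "join"
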

View File

@@ -12,9 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-import json
import logging

+import simplejson as json
+
logger = logging.getLogger(__name__)

View File

@@ -17,7 +17,7 @@ import logging
from synapse.storage.prepare_database import get_statements
from synapse.storage.engines import PostgresEngine, Sqlite3Engine

-import ujson
+import simplejson

logger = logging.getLogger(__name__)

@@ -66,7 +66,7 @@ def run_create(cur, database_engine, *args, **kwargs):
            "max_stream_id_exclusive": max_stream_id + 1,
            "rows_inserted": 0,
        }
-        progress_json = ujson.dumps(progress)
+        progress_json = simplejson.dumps(progress)

        sql = (
            "INSERT into background_updates (update_name, progress_json)"

View File

@@ -16,7 +16,7 @@ import logging
from synapse.storage.prepare_database import get_statements

-import ujson
+import simplejson

logger = logging.getLogger(__name__)

@@ -45,7 +45,7 @@ def run_create(cur, database_engine, *args, **kwargs):
        "max_stream_id_exclusive": max_stream_id + 1,
        "rows_inserted": 0,
    }
-    progress_json = ujson.dumps(progress)
+    progress_json = simplejson.dumps(progress)

    sql = (
        "INSERT into background_updates (update_name, progress_json)"

View File

@@ -16,7 +16,7 @@ from synapse.storage.engines import PostgresEngine
from synapse.storage.prepare_database import get_statements

import logging
-import ujson
+import simplejson

logger = logging.getLogger(__name__)

@@ -49,7 +49,7 @@ def run_create(cur, database_engine, *args, **kwargs):
        "rows_inserted": 0,
        "have_added_indexes": False,
    }
-    progress_json = ujson.dumps(progress)
+    progress_json = simplejson.dumps(progress)

    sql = (
        "INSERT into background_updates (update_name, progress_json)"

View File

@@ -15,7 +15,7 @@
from synapse.storage.prepare_database import get_statements

import logging
-import ujson
+import simplejson

logger = logging.getLogger(__name__)

@@ -44,7 +44,7 @@ def run_create(cur, database_engine, *args, **kwargs):
        "max_stream_id_exclusive": max_stream_id + 1,
        "rows_inserted": 0,
    }
-    progress_json = ujson.dumps(progress)
+    progress_json = simplejson.dumps(progress)

    sql = (
        "INSERT into background_updates (update_name, progress_json)"

View File

@@ -0,0 +1,17 @@
/* Copyright 2018 New Vector Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
INSERT into background_updates (update_name, progress_json)
VALUES ('user_ips_last_seen_index', '{}');
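
Note: this migration only queues work; the row above is picked up at runtime by the handler registered via register_background_index_update (see the client_ips hunk earlier). Roughly what the queued update amounts to, as a simplified sketch:

def build_user_ips_last_seen_index(cur):
    # Simplified; synapse's background updater batches this work and, on
    # postgres, builds indexes without blocking writes.
    cur.execute(
        "CREATE INDEX user_ips_last_seen ON user_ips (user_id, last_seen)"
    )
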

View File

@@ -0,0 +1,22 @@
/* Copyright 2018 New Vector Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* This isn't a real ENUM because sqlite doesn't support it
* and we use a default of NULL for inserted rows and interpret
* NULL at the python store level as necessary so that existing
* rows are given the correct default policy.
*/
ALTER TABLE groups ADD COLUMN join_policy TEXT NOT NULL DEFAULT 'invite';
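
Note: the comment above anticipates NULL being mapped back to the old default at the store layer (while the ALTER TABLE also supplies a default). A defensive read might look like:

row = {"join_policy": None}   # e.g. a hypothetical pre-default group row
join_policy = row.get("join_policy") or "invite"
assert join_policy == "invite"
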

View File

@@ -16,7 +16,7 @@
from collections import namedtuple
import logging
import re
-import ujson as json
+import simplejson as json

from twisted.internet import defer

@@ -75,8 +75,9 @@ class SearchStore(BackgroundUpdateStore):
        def reindex_search_txn(txn):
            sql = (
-                "SELECT stream_ordering, event_id, room_id, type, content, "
+                "SELECT stream_ordering, event_id, room_id, type, json, "
                " origin_server_ts FROM events"
+                " JOIN event_json USING (event_id)"
                " WHERE ? <= stream_ordering AND stream_ordering < ?"
                " AND (%s)"
                " ORDER BY stream_ordering DESC"

@@ -104,7 +105,8 @@ class SearchStore(BackgroundUpdateStore):
            stream_ordering = row["stream_ordering"]
            origin_server_ts = row["origin_server_ts"]
            try:
-                content = json.loads(row["content"])
+                event_json = json.loads(row["json"])
+                content = event_json["content"]
            except Exception:
                continue

View File

@@ -240,6 +240,9 @@ class StateGroupWorkerStore(SQLBaseStore):
                (
                    "AND type = ? AND state_key = ?",
                    (etype, state_key)
+                ) if state_key is not None else (
+                    "AND type = ?",
+                    (etype,)
                )
                for etype, state_key in types
            ]
@@ -259,10 +262,19 @@ class StateGroupWorkerStore(SQLBaseStore):
                    key = (typ, state_key)
                    results[group][key] = event_id
        else:
+            where_args = []
+            where_clauses = []
+            wildcard_types = False
            if types is not None:
-                where_clause = "AND (%s)" % (
-                    " OR ".join(["(type = ? AND state_key = ?)"] * len(types)),
-                )
+                for typ in types:
+                    if typ[1] is None:
+                        where_clauses.append("(type = ?)")
+                        # append, not extend: typ[0] is a string, and extend()
+                        # would add it one character at a time
+                        where_args.append(typ[0])
+                        wildcard_types = True
+                    else:
+                        where_clauses.append("(type = ? AND state_key = ?)")
+                        where_args.extend([typ[0], typ[1]])
+
+                where_clause = "AND (%s)" % (" OR ".join(where_clauses))
            else:
                where_clause = ""
@@ -279,7 +291,7 @@ class StateGroupWorkerStore(SQLBaseStore):
                # after we finish deduping state, which requires this func)
                args = [next_group]
                if types:
-                    args.extend(i for typ in types for i in typ)
+                    args.extend(where_args)

                txn.execute(
                    "SELECT type, state_key, event_id FROM state_groups_state"

@@ -292,9 +304,17 @@ class StateGroupWorkerStore(SQLBaseStore):
                    if (typ, state_key) not in results[group]
                )

-                # If the lengths match then we must have all the types,
-                # so no need to go walk further down the tree.
-                if types is not None and len(results[group]) == len(types):
+                # If the number of entries in the (type,state_key)->event_id
+                # dict matches the number of (type,state_key) pairs we were
+                # searching for, then we must have found them all, so no need
+                # to go walk further down the tree... UNLESS our types filter
+                # contained wildcards (i.e. Nones), in which case we have to
+                # do an exhaustive search
+                if (
+                    types is not None and
+                    not wildcard_types and
+                    len(results[group]) == len(types)
+                ):
                    break

                next_group = self._simple_select_one_onecol_txn(
Some files were not shown because too many files have changed in this diff.