Merge branch 'develop' into allow_auto_join_rooms

Krombel 2018-03-28 14:45:28 +02:00
commit 6152e253d8
53 changed files with 264 additions and 111 deletions


@@ -1,11 +1,89 @@

-Unreleased
-==========
Changes in synapse v0.27.2 (2018-03-26)
=======================================

-synctl no longer starts the main synapse when using ``-a`` option with workers.
-A new worker file should be added with ``worker_app: synapse.app.homeserver``.
Bug fixes:

* Fix bug which broke TCP replication between workers (PR #3015)

Changes in synapse v0.27.1 (2018-03-26)
=======================================
Meta release as v0.27.0 temporarily pointed to the wrong commit
Changes in synapse v0.27.0 (2018-03-26)
=======================================
No changes since v0.27.0-rc2
Changes in synapse v0.27.0-rc2 (2018-03-19)
===========================================
Pulls in v0.26.1
Bug fixes:
* Fix bug introduced in v0.27.0-rc1 that causes much increased memory usage in state cache (PR #3005)
Changes in synapse v0.26.1 (2018-03-15)
=======================================
Bug fixes:
* Fix bug where an invalid event caused server to stop functioning correctly,
due to parsing and serializing bugs in ujson library (PR #3008)
Changes in synapse v0.27.0-rc1 (2018-03-14)
===========================================
The common case for running Synapse is not to run separate workers, but for those that do, be aware that synctl no longer starts the main synapse when using ``-a`` option with workers. A new worker file should be added with ``worker_app: synapse.app.homeserver``.
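Such a worker file can be minimal; a sketch along these lines (the file name and location are illustrative, only the ``worker_app`` value is prescribed by this note)::

    # e.g. workers/homeserver.yaml -- hypothetical path, passed via synctl's -a option
    worker_app: synapse.app.homeserver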
This release also begins the process of renaming a number of the metrics
reported to prometheus. See `docs/metrics-howto.rst <docs/metrics-howto.rst#block-and-response-metrics-renamed-for-0-27-0>`_.
Note that the v0.28.0 release will remove the deprecated metric names.
Features:
* Add ability for ASes to override message send time (PR #2754)
* Add support for custom storage providers for media repository (PR #2867, #2777, #2783, #2789, #2791, #2804, #2812, #2814, #2857, #2868, #2767)
* Add purge API features, see `docs/admin_api/purge_history_api.rst <docs/admin_api/purge_history_api.rst>`_ for full details (PR #2858, #2867, #2882, #2946, #2962, #2943)
* Add support for whitelisting 3PIDs that users can register. (PR #2813)
* Add ``/room/{id}/event/{id}`` API (PR #2766)
* Add an admin API to get all the media in a room (PR #2818) Thanks to @turt2live!
* Add ``federation_domain_whitelist`` option (PR #2820, #2821)
Changes:
* Continue to factor out processing from main process and into worker processes. See updated `docs/workers.rst <docs/workers.rst>`_ (PR #2892 - #2904, #2913, #2920 - #2926, #2947, #2847, #2854, #2872, #2873, #2874, #2928, #2929, #2934, #2856, #2976 - #2984, #2987 - #2989, #2991 - #2993, #2995, #2784)
* Ensure state cache is used when persisting events (PR #2864, #2871, #2802, #2835, #2836, #2841, #2842, #2849)
* Change the default config to bind on both IPv4 and IPv6 on all platforms (PR #2435) Thanks to @silkeh!
* No longer require a specific version of saml2 (PR #2695) Thanks to @okurz!
* Remove ``verbosity``/``log_file`` from generated config (PR #2755)
* Add and improve metrics and logging (PR #2770, #2778, #2785, #2786, #2787, #2793, #2794, #2795, #2809, #2810, #2833, #2834, #2844, #2965, #2927, #2975, #2790, #2796, #2838)
* When using synctl with workers, don't start the main synapse automatically (PR #2774)
* Minor performance improvements (PR #2773, #2792)
* Use a connection pool for non-federation outbound connections (PR #2817)
* Make it possible to run unit tests against postgres (PR #2829)
* Update pynacl dependency to 1.2.1 or higher (PR #2888) Thanks to @bachp!
* Remove ability for AS users to call /events and /sync (PR #2948)
* Use bcrypt.checkpw (PR #2949) Thanks to @krombel!
Bug fixes:
* Fix broken ``ldap_config`` config option (PR #2683) Thanks to @seckrv!
* Fix error message when user is not allowed to unban (PR #2761) Thanks to @turt2live!
* Fix publicised groups GET API (singular) over federation (PR #2772)
* Fix user directory when using ``user_directory_search_all_users`` config option (PR #2803, #2831)
* Fix error on ``/publicRooms`` when no rooms exist (PR #2827)
* Fix bug in quarantine_media (PR #2837)
* Fix url_previews when no Content-Type is returned from URL (PR #2845)
* Fix rare race in sync API when joining room (PR #2944)
* Fix slow event search, switch back from GIST to GIN indexes (PR #2769, #2848)
Changes in synapse v0.26.0 (2018-01-05)


@@ -30,8 +30,12 @@ use github's pull request workflow to review the contribution, and either ask

you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.

-We use Jenkins for continuous integration (http://matrix.org/jenkins), and
-typically all pull requests get automatically tested: if your change breaks
-the build, Jenkins will yell about it in #matrix-dev:matrix.org so please
-lurk there and keep an eye open.
We use `Jenkins <http://matrix.org/jenkins>`_ and
`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an administrator to start them. If your change
breaks the build, this will be shown in github, so please keep an eye on the
pull request for feedback.

Code style
~~~~~~~~~~


@@ -354,6 +354,10 @@ https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one

Fedora
------

Synapse is in the Fedora repositories as ``matrix-synapse``::

    sudo dnf install matrix-synapse

Oleg Girko provides Fedora RPMs at
https://obs.infoserver.lv/project/monitor/matrix-synapse
@@ -890,6 +894,17 @@ This should end with a 'PASSED' result::

    PASSED (successes=143)

Running the Integration Tests
=============================
Synapse is accompanied by `SyTest <https://github.com/matrix-org/sytest>`_,
a Matrix homeserver integration testing suite, which uses HTTP requests to
access the API as a Matrix client would. It is able to run Synapse directly from
the source tree, so installation of the server is not required.
Testing with SyTest is recommended for verifying that changes related to the
Client-Server API are functioning correctly. See the `installation instructions
<https://github.com/matrix-org/sytest#installing>`_ for details.
Building Internal API Documentation
===================================


@@ -16,9 +16,11 @@ including an ``access_token`` of a server admin.

By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
-deleted, and room state data before the cutoff is always removed).
deleted.)

Room state data (such as joins, leaves, topic) is always preserved.

-To delete local events as well, set ``delete_local_events`` in the body:
To delete local message events as well, set ``delete_local_events`` in the body:

.. code:: json


@@ -230,7 +230,7 @@ file. For example::

``synapse.app.event_creator``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Handles non-state event creation. It can handle REST endpoints matching:
Handles non-state event creation. It can handle REST endpoints matching::

    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send


@@ -16,4 +16,4 @@

""" This is a reference implementation of a Matrix home server.
"""

-__version__ = "0.26.0"
__version__ = "0.27.2"


@@ -17,7 +17,7 @@ from synapse.storage.presence import UserPresenceState

from synapse.types import UserID, RoomID
from twisted.internet import defer

-import ujson as json
import simplejson as json
import jsonschema
from jsonschema import FormatChecker


@@ -36,7 +36,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn

from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.appservice")

@@ -64,7 +64,7 @@ class AppserviceServer(HomeServer):

        if name == "metrics":
            resources[METRICS_PREFIX] = MetricsResource(self)

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -44,7 +44,7 @@ from synapse.util.logcontext import LoggingContext

from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.client_reader")

@@ -88,7 +88,7 @@ class ClientReaderServer(HomeServer):

                "/_matrix/client/api/v1": resource,
            })

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -52,7 +52,7 @@ from synapse.util.logcontext import LoggingContext

from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.event_creator")

@@ -104,7 +104,7 @@ class EventCreatorServer(HomeServer):

                "/_matrix/client/api/v1": resource,
            })

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -41,7 +41,7 @@ from synapse.util.logcontext import LoggingContext

from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.federation_reader")

@@ -77,7 +77,7 @@ class FederationReaderServer(HomeServer):

                FEDERATION_PREFIX: TransportLayerServer(self),
            })

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -42,7 +42,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn

from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.federation_sender")

@@ -91,7 +91,7 @@ class FederationSenderServer(HomeServer):

        if name == "metrics":
            resources[METRICS_PREFIX] = MetricsResource(self)

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -44,7 +44,7 @@ from synapse.util.logcontext import LoggingContext

from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.frontend_proxy")

@@ -142,7 +142,7 @@ class FrontendProxyServer(HomeServer):

                "/_matrix/client/api/v1": resource,
            })

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -56,7 +56,7 @@ from synapse.util.rlimit import change_resource_limit

from synapse.util.versionstring import get_version_string
from twisted.application import service
from twisted.internet import defer, reactor
-from twisted.web.resource import EncodingResourceWrapper, Resource
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File

@@ -126,7 +126,7 @@ class SynapseHomeServer(HomeServer):

        if WEB_CLIENT_PREFIX in resources:
            root_resource = RootRedirect(WEB_CLIENT_PREFIX)
        else:
-           root_resource = Resource()
            root_resource = NoResource()

        root_resource = create_resource_tree(resources, root_resource)


@@ -43,7 +43,7 @@ from synapse.util.logcontext import LoggingContext

from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.media_repository")

@@ -84,7 +84,7 @@ class MediaRepositoryServer(HomeServer):

                ),
            })

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -37,7 +37,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn

from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.pusher")

@@ -94,7 +94,7 @@ class PusherServer(HomeServer):

        if name == "metrics":
            resources[METRICS_PREFIX] = MetricsResource(self)

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -56,7 +56,7 @@ from synapse.util.manhole import manhole

from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.synchrotron")

@@ -269,7 +269,7 @@ class SynchrotronServer(HomeServer):

                "/_matrix/client/api/v1": resource,
            })

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -43,7 +43,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn

from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

logger = logging.getLogger("synapse.app.user_dir")

@@ -116,7 +116,7 @@ class UserDirectoryServer(HomeServer):

                "/_matrix/client/api/v1": resource,
            })

-       root_resource = create_resource_tree(resources, Resource())
        root_resource = create_resource_tree(resources, NoResource())

        _base.listen_tcp(
            bind_addresses,


@@ -77,7 +77,9 @@ class RegistrationConfig(Config):

        # Set the number of bcrypt rounds used to generate password hash.
        # Larger numbers increase the work factor needed to generate the hash.
-       # The default number of rounds is 12.
        # The default number is 12 (which equates to 2^12 rounds).
        # N.B. that increasing this will exponentially increase the time required
        # to register or login - e.g. 24 => 2^24 rounds which will take >20 mins.
        bcrypt_rounds: 12

        # Allows users to register as guests without a password/email/etc, and
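To see the exponential cost directly, one can hash at a given work factor with the ``bcrypt`` library (the password below is a made-up example)::

    import bcrypt

    password = b"correct horse battery staple"  # hypothetical example

    # gensalt(rounds=N) embeds the work factor in the salt; hashing then
    # performs 2^N rounds, so each increment doubles the time taken.
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    # checkpw re-derives the hash using the work factor stored inside it.
    assert bcrypt.checkpw(password, hashed)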


@@ -13,7 +13,7 @@

# See the License for the specific language governing permissions and
# limitations under the License.

-import ujson as json
import simplejson as json
import logging

from canonicaljson import encode_canonical_json


@@ -38,7 +38,7 @@ from canonicaljson import encode_canonical_json

import logging
import random
-import ujson
import simplejson

logger = logging.getLogger(__name__)

@@ -678,8 +678,8 @@ class EventCreationHandler(object):

        # Ensure that we can round trip before trying to persist in db
        try:
-           dump = ujson.dumps(unfreeze(event.content))
-           ujson.loads(dump)
            dump = simplejson.dumps(unfreeze(event.content))
            simplejson.loads(dump)
        except Exception:
            logger.exception("Failed to encode content: %r", event.content)
            raise


@@ -24,7 +24,7 @@ from synapse.api.errors import (

from synapse.http.client import CaptchaServerHttpClient
from synapse import types
from synapse.types import UserID, create_requester, RoomID, RoomAlias
-from synapse.util.async import run_on_reactor
from synapse.util.async import run_on_reactor, Linearizer
from synapse.util.threepids import check_3pid_allowed

from ._base import BaseHandler

@@ -46,6 +46,10 @@ class RegistrationHandler(BaseHandler):

        self.macaroon_gen = hs.get_macaroon_generator()

        self._generate_user_id_linearizer = Linearizer(
            name="_generate_user_id_linearizer",
        )

    @defer.inlineCallbacks
    def check_username(self, localpart, guest_access_token=None,
                       assigned_user_id=None):

@@ -351,6 +355,8 @@ class RegistrationHandler(BaseHandler):

    @defer.inlineCallbacks
    def _generate_user_id(self, reseed=False):
        if reseed or self._next_generated_user_id is None:
            with (yield self._generate_user_id_linearizer.queue(())):
                if reseed or self._next_generated_user_id is None:
                    self._next_generated_user_id = (
                        yield self.store.find_next_generated_user_id_localpart()
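The shape here is classic double-checked initialisation: a cheap unsynchronised check, then the same check repeated once the linearizer has serialised access, so only the caller that wins the race recomputes. A standalone sketch of the same pattern with a plain thread lock (names are illustrative, not Synapse's API)::

    import threading

    _lock = threading.Lock()
    _next_user_id = None


    def _find_next_user_id():
        # Stand-in for store.find_next_generated_user_id_localpart().
        return 1


    def generate_user_id(reseed=False):
        global _next_user_id
        if reseed or _next_user_id is None:          # fast path, no lock taken
            with _lock:                              # serialise, like Linearizer.queue
                if reseed or _next_user_id is None:  # re-check: another caller
                    _next_user_id = _find_next_user_id()  # may have won the race
        return _next_user_id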


@@ -37,7 +37,7 @@ from twisted.web.util import redirectTo

import collections
import logging
import urllib
-import ujson
import simplejson

logger = logging.getLogger(__name__)

@@ -461,8 +461,7 @@ def respond_with_json(request, code, json_object, send_cors=False,

    if canonical_json or synapse.events.USE_FROZEN_DICTS:
        json_bytes = encode_canonical_json(json_object)
    else:
-       # ujson doesn't like frozen_dicts.
-       json_bytes = ujson.dumps(json_object, ensure_ascii=False)
        json_bytes = simplejson.dumps(json_object)

    return respond_with_json_bytes(
        request, code, json_bytes,

@@ -489,6 +488,7 @@ def respond_with_json_bytes(request, code, json_bytes, send_cors=False,

    request.setHeader(b"Content-Type", b"application/json")
    request.setHeader(b"Server", version_string)
    request.setHeader(b"Content-Length", b"%d" % (len(json_bytes),))
    request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")

    if send_cors:
        set_cors_headers(request)


@@ -34,7 +34,6 @@ REQUIREMENTS = {

    "bcrypt": ["bcrypt>=3.1.0"],
    "pillow": ["PIL"],
    "pydenticon": ["pydenticon"],
-   "ujson": ["ujson"],
    "blist": ["blist"],
    "pysaml2>=3.0.0": ["saml2>=3.0.0"],
    "pymacaroons-pynacl": ["pymacaroons"],


@@ -19,7 +19,7 @@ allowed to be sent by which side.

"""

import logging
-import ujson as json
import simplejson

logger = logging.getLogger(__name__)

@@ -100,14 +100,14 @@ class RdataCommand(Command):

        return cls(
            stream_name,
            None if token == "batch" else int(token),
-           json.loads(row_json)
            simplejson.loads(row_json)
        )

    def to_line(self):
        return " ".join((
            self.stream_name,
            str(self.token) if self.token is not None else "batch",
-           json.dumps(self.row),
            simplejson.dumps(self.row, namedtuple_as_object=False),
        ))

@@ -298,10 +298,12 @@ class InvalidateCacheCommand(Command):

    def from_line(cls, line):
        cache_func, keys_json = line.split(" ", 1)

-       return cls(cache_func, json.loads(keys_json))
        return cls(cache_func, simplejson.loads(keys_json))

    def to_line(self):
-       return " ".join((self.cache_func, json.dumps(self.keys)))
        return " ".join((
            self.cache_func, simplejson.dumps(self.keys, namedtuple_as_object=False)
        ))


class UserIpCommand(Command):

@@ -325,14 +327,14 @@ class UserIpCommand(Command):

    def from_line(cls, line):
        user_id, jsn = line.split(" ", 1)

-       access_token, ip, user_agent, device_id, last_seen = json.loads(jsn)
        access_token, ip, user_agent, device_id, last_seen = simplejson.loads(jsn)

        return cls(
            user_id, access_token, ip, user_agent, device_id, last_seen
        )

    def to_line(self):
-       return self.user_id + " " + json.dumps((
        return self.user_id + " " + simplejson.dumps((
            self.access_token, self.ip, self.user_agent, self.device_id,
            self.last_seen,
        ))
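``namedtuple_as_object=False`` matters here because ``simplejson``, unlike ``ujson``, serialises namedtuples as JSON objects by default, which would silently change the replication wire format. For instance::

    import simplejson
    from collections import namedtuple

    Row = namedtuple("Row", ("token", "name"))

    simplejson.dumps(Row(1, "x"))
    # default: '{"token": 1, "name": "x"}', an object, not an array

    simplejson.dumps(Row(1, "x"), namedtuple_as_object=False)
    # '[1, "x"]', the array form the protocol expects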


@@ -30,7 +30,7 @@ from synapse.http.servlet import (

import logging
import urllib
-import ujson as json
import simplejson as json

logger = logging.getLogger(__name__)


@@ -33,7 +33,7 @@ from ._base import set_timeline_upper_limit

import itertools
import logging
-import ujson as json
import simplejson as json

logger = logging.getLogger(__name__)


@@ -23,7 +23,7 @@ import re

import shutil
import sys
import traceback
-import ujson as json
import simplejson as json
import urlparse

from twisted.web.server import NOT_DONE_YET


@@ -132,7 +132,7 @@ class StateHandler(object):

        state_map = yield self.store.get_events(state.values(), get_prev_content=False)
        state = {
-           key: state_map[e_id] for key, e_id in state.items() if e_id in state_map
            key: state_map[e_id] for key, e_id in state.iteritems() if e_id in state_map
        }

        defer.returnValue(state)

@@ -378,7 +378,7 @@ class StateHandler(object):

        new_state = resolve_events_with_state_map(state_set_ids, state_map)

        new_state = {
-           key: state_map[ev_id] for key, ev_id in new_state.items()
            key: state_map[ev_id] for key, ev_id in new_state.iteritems()
        }

        return new_state

@@ -458,15 +458,15 @@ class StateResolutionHandler(object):

            # build a map from state key to the event_ids which set that state.
            # dict[(str, str), set[str])
            state = {}
-           for st in state_groups_ids.values():
-               for key, e_id in st.items():
            for st in state_groups_ids.itervalues():
                for key, e_id in st.iteritems():
                    state.setdefault(key, set()).add(e_id)

            # build a map from state key to the event_ids which set that state,
            # including only those where there are state keys in conflict.
            conflicted_state = {
                k: list(v)
-               for k, v in state.items()
                for k, v in state.iteritems()
                if len(v) > 1
            }

@@ -480,7 +480,7 @@ class StateResolutionHandler(object):

                )
            else:
                new_state = {
-                   key: e_ids.pop() for key, e_ids in state.items()
                    key: e_ids.pop() for key, e_ids in state.iteritems()
                }

            # if the new state matches any of the input state groups, we can

@@ -488,8 +488,8 @@ class StateResolutionHandler(object):

            # which will be used as a cache key for future resolutions, but
            # not get persisted.
            state_group = None
-           new_state_event_ids = frozenset(new_state.values())
-           for sg, events in state_groups_ids.items():
            new_state_event_ids = frozenset(new_state.itervalues())
            for sg, events in state_groups_ids.iteritems():
                if new_state_event_ids == frozenset(e_id for e_id in events):
                    state_group = sg
                    break

@@ -702,7 +702,7 @@ def _resolve_with_state(unconflicted_state_ids, conflicted_state_ds, auth_event_

    auth_events = {
        key: state_map[ev_id]
-       for key, ev_id in auth_event_ids.items()
        for key, ev_id in auth_event_ids.iteritems()
        if ev_id in state_map
    }

@@ -740,7 +740,7 @@ def _resolve_state_events(conflicted_state, auth_events):

    auth_events.update(resolved_state)

-   for key, events in conflicted_state.items():
    for key, events in conflicted_state.iteritems():
        if key[0] == EventTypes.JoinRules:
            logger.debug("Resolving conflicted join rules %r", events)
            resolved_state[key] = _resolve_auth_events(

@@ -750,7 +750,7 @@ def _resolve_state_events(conflicted_state, auth_events):

    auth_events.update(resolved_state)

-   for key, events in conflicted_state.items():
    for key, events in conflicted_state.iteritems():
        if key[0] == EventTypes.Member:
            logger.debug("Resolving conflicted member lists %r", events)
            resolved_state[key] = _resolve_auth_events(

@@ -760,7 +760,7 @@ def _resolve_state_events(conflicted_state, auth_events):

    auth_events.update(resolved_state)

-   for key, events in conflicted_state.items():
    for key, events in conflicted_state.iteritems():
        if key not in resolved_state:
            logger.debug("Resolving conflicted state %r:%r", key, events)
            resolved_state[key] = _resolve_normal_events(
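These are Python 2 micro-optimisations (Synapse was Python 2 only at this point): ``items()``/``values()`` allocate whole new lists, while ``iteritems()``/``itervalues()`` yield lazily. In toy form::

    d = {"a": 1, "b": 2}

    pairs = d.items()      # Python 2: builds a new list of (key, value) tuples
    lazy = d.iteritems()   # Python 2: returns an iterator; no intermediate list

    for key, value in lazy:
        pass               # same results, minus the O(n) copy up front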


@@ -23,7 +23,7 @@ from synapse.util.caches.stream_change_cache import StreamChangeCache

from synapse.util.caches.descriptors import cached, cachedList, cachedInlineCallbacks

import abc
-import ujson as json
import simplejson as json
import logging

logger = logging.getLogger(__name__)


@@ -19,7 +19,7 @@ from . import engines

from twisted.internet import defer

-import ujson as json
import simplejson as json
import logging

logger = logging.getLogger(__name__)


@@ -14,7 +14,7 @@

# limitations under the License.

import logging
-import ujson
import simplejson

from twisted.internet import defer

@@ -85,7 +85,7 @@ class DeviceInboxStore(BackgroundUpdateStore):

            )
            rows = []
            for destination, edu in remote_messages_by_destination.items():
-               edu_json = ujson.dumps(edu)
                edu_json = simplejson.dumps(edu)
                rows.append((destination, stream_id, now_ms, edu_json))

            txn.executemany(sql, rows)

@@ -177,7 +177,7 @@ class DeviceInboxStore(BackgroundUpdateStore):

                " WHERE user_id = ?"
            )
            txn.execute(sql, (user_id,))
-           message_json = ujson.dumps(messages_by_device["*"])
            message_json = simplejson.dumps(messages_by_device["*"])
            for row in txn:
                # Add the message for all devices for this user on this
                # server.

@@ -199,7 +199,7 @@ class DeviceInboxStore(BackgroundUpdateStore):

                    # Only insert into the local inbox if the device exists on
                    # this server
                    device = row[0]
-                   message_json = ujson.dumps(messages_by_device[device])
                    message_json = simplejson.dumps(messages_by_device[device])
                    messages_json_for_user[device] = message_json

            if messages_json_for_user:

@@ -253,7 +253,7 @@ class DeviceInboxStore(BackgroundUpdateStore):

            messages = []
            for row in txn:
                stream_pos = row[0]
-               messages.append(ujson.loads(row[1]))
                messages.append(simplejson.loads(row[1]))
            if len(messages) < limit:
                stream_pos = current_stream_id
            return (messages, stream_pos)

@@ -389,7 +389,7 @@ class DeviceInboxStore(BackgroundUpdateStore):

            messages = []
            for row in txn:
                stream_pos = row[0]
-               messages.append(ujson.loads(row[1]))
                messages.append(simplejson.loads(row[1]))
            if len(messages) < limit:
                stream_pos = current_stream_id
            return (messages, stream_pos)


@@ -13,7 +13,7 @@

# See the License for the specific language governing permissions and
# limitations under the License.

import logging
-import ujson as json
import simplejson as json

from twisted.internet import defer


@@ -17,7 +17,7 @@ from twisted.internet import defer

from synapse.util.caches.descriptors import cached

from canonicaljson import encode_canonical_json
-import ujson as json
import simplejson as json

from ._base import SQLBaseStore


@@ -22,7 +22,7 @@ from synapse.types import RoomStreamToken

from .stream import lower_bound

import logging
-import ujson as json
import simplejson as json

logger = logging.getLogger(__name__)


@@ -38,7 +38,7 @@ from functools import wraps

import synapse.metrics

import logging
-import ujson as json
import simplejson as json

# these are only included to make the type annotations work
from synapse.events import EventBase  # noqa: F401

@@ -53,10 +53,25 @@ event_counter = metrics.register_counter(

    "persisted_events_sep", labels=["type", "origin_type", "origin_entity"]
)

# The number of times we are recalculating the current state
state_delta_counter = metrics.register_counter(
    "state_delta",
)

# The number of times we are recalculating state when there is only a
# single forward extremity
state_delta_single_event_counter = metrics.register_counter(
    "state_delta_single_event",
)

# The number of times we are recalculating state when we could have reasonably
# calculated the delta when we calculated the state for an event we were
# persisting.
state_delta_reuse_delta_counter = metrics.register_counter(
    "state_delta_reuse_delta",
)


def encode_json(json_object):
    if USE_FROZEN_DICTS:
-       # ujson doesn't like frozen_dicts
        return encode_canonical_json(json_object)
    else:
        return json.dumps(json_object, ensure_ascii=False)

@@ -369,7 +384,8 @@ class EventsStore(EventsWorkerStore):

                        room_id, ev_ctx_rm, latest_event_ids
                    )

-                   if new_latest_event_ids == set(latest_event_ids):
                    latest_event_ids = set(latest_event_ids)
                    if new_latest_event_ids == latest_event_ids:
                        # No change in extremities, so no change in state
                        continue

@@ -390,6 +406,26 @@ class EventsStore(EventsWorkerStore):

                    if all_single_prev_not_state:
                        continue

                    state_delta_counter.inc()
                    if len(new_latest_event_ids) == 1:
                        state_delta_single_event_counter.inc()

                        # This is a fairly handwavey check to see if we could
                        # have guessed what the delta would have been when
                        # processing one of these events.
                        # What we're interested in is if the latest extremities
                        # were the same when we created the event as they are
                        # now. When this server creates a new event (as opposed
                        # to receiving it over federation) it will use the
                        # forward extremities as the prev_events, so we can
                        # guess this by looking at the prev_events and checking
                        # if they match the current forward extremities.
                        for ev, _ in ev_ctx_rm:
                            prev_event_ids = set(e for e, _ in ev.prev_events)
                            if latest_event_ids == prev_event_ids:
                                state_delta_reuse_delta_counter.inc()
                                break

                    logger.info(
                        "Calculating state delta for room %s", room_id,
                    )


@@ -28,7 +28,7 @@ from synapse.api.errors import SynapseError

from collections import namedtuple

import logging
-import ujson as json
import simplejson as json

# these are only included to make the type annotations work
from synapse.events import EventBase  # noqa: F401


@@ -19,7 +19,7 @@ from synapse.api.errors import SynapseError

from ._base import SQLBaseStore

-import ujson as json
import simplejson as json


# The category ID for the "default" category. We don't store as null in the


@@ -23,7 +23,7 @@ from twisted.internet import defer

import abc
import logging
-import ujson as json
import simplejson as json

logger = logging.getLogger(__name__)


@@ -460,14 +460,12 @@ class RegistrationStore(RegistrationWorkerStore,

        """
        def _find_next_generated_user_id(txn):
            txn.execute("SELECT name FROM users")
-           rows = self.cursor_to_dict(txn)

            regex = re.compile("^@(\d+):")

            found = set()

-           for r in rows:
-               user_id = r["name"]
            for user_id, in txn:
                match = regex.search(user_id)
                if match:
                    found.add(int(match.group(1)))
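Iterating the cursor directly yields plain row tuples, skipping the per-row dict that ``cursor_to_dict`` would build; the trailing comma in ``for user_id,`` unpacks the single selected column. The same pattern in a self-contained sqlite3 sketch::

    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('@1234:example.com')")

    regex = re.compile(r"^@(\d+):")
    found = set()
    # Each row arrives as a 1-tuple; "user_id," unpacks it in place.
    for user_id, in conn.execute("SELECT name FROM users"):
        match = regex.search(user_id)
        if match:
            found.add(int(match.group(1)))

    print(found)  # {1234}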


@@ -22,7 +22,7 @@ from synapse.util.caches.descriptors import cached, cachedInlineCallbacks

import collections
import logging
-import ujson as json
import simplejson as json
import re

logger = logging.getLogger(__name__)


@@ -28,7 +28,7 @@ from synapse.api.constants import Membership, EventTypes

from synapse.types import get_domain_from_id

import logging
-import ujson as json
import simplejson as json

logger = logging.getLogger(__name__)


@@ -17,7 +17,7 @@ import logging

from synapse.storage.prepare_database import get_statements
from synapse.storage.engines import PostgresEngine, Sqlite3Engine

-import ujson
import simplejson

logger = logging.getLogger(__name__)

@@ -66,7 +66,7 @@ def run_create(cur, database_engine, *args, **kwargs):

        "max_stream_id_exclusive": max_stream_id + 1,
        "rows_inserted": 0,
    }
-   progress_json = ujson.dumps(progress)
    progress_json = simplejson.dumps(progress)

    sql = (
        "INSERT into background_updates (update_name, progress_json)"


@@ -16,7 +16,7 @@ import logging

from synapse.storage.prepare_database import get_statements

-import ujson
import simplejson

logger = logging.getLogger(__name__)

@@ -45,7 +45,7 @@ def run_create(cur, database_engine, *args, **kwargs):

        "max_stream_id_exclusive": max_stream_id + 1,
        "rows_inserted": 0,
    }
-   progress_json = ujson.dumps(progress)
    progress_json = simplejson.dumps(progress)

    sql = (
        "INSERT into background_updates (update_name, progress_json)"


@@ -16,7 +16,7 @@ from synapse.storage.engines import PostgresEngine

from synapse.storage.prepare_database import get_statements

import logging
-import ujson
import simplejson

logger = logging.getLogger(__name__)

@@ -49,7 +49,7 @@ def run_create(cur, database_engine, *args, **kwargs):

        "rows_inserted": 0,
        "have_added_indexes": False,
    }
-   progress_json = ujson.dumps(progress)
    progress_json = simplejson.dumps(progress)

    sql = (
        "INSERT into background_updates (update_name, progress_json)"


@@ -15,7 +15,7 @@

from synapse.storage.prepare_database import get_statements

import logging
-import ujson
import simplejson

logger = logging.getLogger(__name__)

@@ -44,7 +44,7 @@ def run_create(cur, database_engine, *args, **kwargs):

        "max_stream_id_exclusive": max_stream_id + 1,
        "rows_inserted": 0,
    }
-   progress_json = ujson.dumps(progress)
    progress_json = simplejson.dumps(progress)

    sql = (
        "INSERT into background_updates (update_name, progress_json)"


@@ -16,7 +16,7 @@

from collections import namedtuple
import logging
import re
-import ujson as json
import simplejson as json

from twisted.internet import defer


@@ -19,7 +19,7 @@ from synapse.storage.account_data import AccountDataWorkerStore

from synapse.util.caches.descriptors import cached
from twisted.internet import defer

-import ujson as json
import simplejson as json
import logging

logger = logging.getLogger(__name__)


@@ -23,7 +23,7 @@ from canonicaljson import encode_canonical_json

from collections import namedtuple

import logging
-import ujson as json
import simplejson as json

logger = logging.getLogger(__name__)


@@ -667,7 +667,7 @@ class UserDirectoryStore(SQLBaseStore):

        # The array of numbers are the weights for the various part of the
        # search: (domain, _, display name, localpart)
        sql = """
-           SELECT d.user_id, display_name, avatar_url
            SELECT d.user_id AS user_id, display_name, avatar_url
            FROM user_directory_search
            INNER JOIN user_directory AS d USING (user_id)
            %s

@@ -702,7 +702,7 @@ class UserDirectoryStore(SQLBaseStore):

        search_query = _parse_query_sqlite(search_term)

        sql = """
-           SELECT d.user_id, display_name, avatar_url
            SELECT d.user_id AS user_id, display_name, avatar_url
            FROM user_directory_search
            INNER JOIN user_directory AS d USING (user_id)
            %s


@@ -132,9 +132,13 @@ class DictionaryCache(object):

            self._update_or_insert(key, value, known_absent)

    def _update_or_insert(self, key, value, known_absent):
-       entry = self.cache.setdefault(key, DictionaryEntry(False, set(), {}))
        # We pop and reinsert as we need to tell the cache the size may have
        # changed
        entry = self.cache.pop(key, DictionaryEntry(False, set(), {}))
        entry.value.update(value)
        entry.known_absent.update(known_absent)
        self.cache[key] = entry

    def _insert(self, key, value, known_absent):
        self.cache[key] = DictionaryEntry(True, known_absent, value)
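The pop-and-reinsert dance is needed because the underlying sized cache only measures an entry when it is inserted or removed; mutating a stored value in place leaves the recorded total stale. A toy illustration (the ``SizedCache`` class is hypothetical, standing in for ``LruCache`` with a ``size_callback``)::

    class SizedCache(object):
        """Toy cache: entry sizes are measured only on insert and pop."""

        def __init__(self, size_callback):
            self._data = {}
            self._size = size_callback
            self.total_size = 0

        def __getitem__(self, key):
            return self._data[key]

        def __setitem__(self, key, value):
            self.pop(key)
            self._data[key] = value
            self.total_size += self._size(value)

        def pop(self, key):
            value = self._data.pop(key, None)
            if value is not None:
                self.total_size -= self._size(value)
            return value


    # Mutating a cached dict in place never updates total_size:
    c1 = SizedCache(len)
    c1["k"] = {"a": 1}
    c1["k"].update({"b": 2})
    assert c1.total_size == 1   # stale: the entry really holds 2 items

    # Pop first, then mutate and reinsert, and the accounting stays right:
    c2 = SizedCache(len)
    c2["k"] = {"a": 1}
    entry = c2.pop("k")
    entry.update({"b": 2})
    c2["k"] = entry
    assert c2.total_size == 2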


@@ -154,11 +154,18 @@ class LruCache(object):

    def cache_set(key, value, callbacks=[]):
        node = cache.get(key, None)
        if node is not None:
-           if value != node.value:
            # We sometimes store large objects, e.g. dicts, which cause
            # the inequality check to take a long time. So let's only do
            # the check if we have some callbacks to call.
            if node.callbacks and value != node.value:
                for cb in node.callbacks:
                    cb()
                node.callbacks.clear()

            # We don't bother to protect this by value != node.value as
            # generally size_callback will be cheap compared with equality
            # checks. (For example, taking the size of two dicts is quicker
            # than comparing them for equality.)
            if size_callback:
                cached_cache_len[0] -= size_callback(node.value)
                cached_cache_len[0] += size_callback(value)
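Both tweaks exploit the same cost asymmetry: comparing two large structures for equality walks their entire contents, while a typical ``size_callback`` such as ``len`` is constant-time::

    big_old = {i: "x" * 100 for i in range(100000)}
    big_new = dict(big_old)

    big_old == big_new   # deep O(n) comparison of every key and value
    len(big_new)         # O(1): the sort of work a size_callback does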


@@ -12,7 +12,7 @@

# See the License for the specific language governing permissions and
# limitations under the License.

-from twisted.web.resource import Resource
from twisted.web.resource import NoResource

import logging

@@ -45,7 +45,7 @@ def create_resource_tree(desired_tree, root_resource):

        for path_seg in full_path.split('/')[1:-1]:
            if path_seg not in last_resource.listNames():
                # resource doesn't exist, so make a "dummy resource"
-               child_resource = Resource()
                child_resource = NoResource()
                last_resource.putChild(path_seg, child_resource)
                res_id = _resource_id(last_resource, path_seg)
                resource_mappings[res_id] = child_resource
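The practical difference is what a client sees when it requests a path that only exists as an intermediate node: ``NoResource`` renders a proper 404 "No Such Resource" page for anything not explicitly wired up, where a bare ``Resource`` placeholder has nothing useful to render. A minimal standalone Twisted sketch (the port and paths are illustrative)::

    from twisted.internet import reactor
    from twisted.web import server
    from twisted.web.resource import NoResource, Resource


    class Hello(Resource):
        isLeaf = True

        def render_GET(self, request):
            return b"hello\n"


    # Explicit children are served normally; every other path falls
    # through to the NoResource root and renders a 404 error page.
    root = NoResource()
    root.putChild(b"hello", Hello())

    reactor.listenTCP(8080, server.Site(root))
    reactor.run()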