mirror of https://mau.dev/maunium/synapse.git
synced 2024-10-01 01:36:05 -04:00

Reference Matrix Home Server

This commit is contained in:
commit 4f475c7697

177	LICENSE	Normal file
@@ -0,0 +1,177 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS
3	MANIFEST.in	Normal file
@@ -0,0 +1,3 @@
recursive-include docs *
recursive-include tests *.py
recursive-include synapse/persistence/schema *.sql
126	README.rst	Normal file
@@ -0,0 +1,126 @@
Installation
============

[TODO(kegan): I also needed libffi-dev, which I don't think is included in build-essential.]

First, the dependencies need to be installed. Start by installing 'python-dev'
and the various tools of the compiler toolchain:

Installing prerequisites on ubuntu::

    $ sudo apt-get install build-essential python-dev

Installing prerequisites on Mac OS X::

    $ xcode-select --install

The homeserver has a number of external dependencies that are easiest
to install by making setup.py do so, in --user mode::

    $ python setup.py develop --user

This will run a process of downloading and installing into your
user's .local/lib directory all of the required dependencies that are
missing.

Once this is done, you may wish to run the homeserver's unit tests, to
check that everything is installed as it should be::

    $ python setup.py test

This should end with a 'PASSED' result::

    Ran 143 tests in 0.601s

    PASSED (successes=143)


Running The Home Server
=======================

In order for other home servers to send messages to your server, they will need
to know its host name. You have two choices here, which will influence the form
of your user IDs:

1) Use the machine's own hostname as available on public DNS in the form of its
   A or AAAA records. This is easier to set up initially, perhaps for testing,
   but lacks the flexibility of SRV.

2) Set up a SRV record for your domain name. This requires you create a SRV
   record in DNS, but gives the flexibility to run the server on your own
   choice of TCP port, on a machine that might not be the same name as the
   domain name.

For the first form, simply pass the required hostname (of the machine) as the
--host parameter::

    $ python synapse/app/homeserver.py --host machine.my.domain.name

For the second form, first create your SRV record and publish it in DNS. This
needs to be named _matrix._tcp.YOURDOMAIN, and point at at least one hostname
and port where the server is running. (At the current time we only support a
single server, but we may at some future point support multiple servers, for
backup failover or load-balancing purposes). The DNS record would then look
something like::

    _matrix._tcp IN SRV 10 0 8448 machine.my.domain.name.
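Read back from DNS, a record like the one above carries four space-separated fields: priority, weight, port, and target host. As a quick illustrative sketch (not code from this repository), pulling those fields apart looks like:

```python
def parse_srv(rdata):
    """Split SRV record data 'priority weight port target' into typed fields."""
    priority, weight, port, target = rdata.split()
    # DNS names are often written with a trailing dot; strip it for display.
    return int(priority), int(weight), int(port), target.rstrip(".")

print(parse_srv("10 0 8448 machine.my.domain.name."))
# -> (10, 0, 8448, 'machine.my.domain.name')
```

The port field (8448 here) is what lets the server listen somewhere other than the default HTTPS port, as described above.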

At this point, you should then run the homeserver with the hostname of this
SRV record, as that is the name other machines will expect it to have::

    $ python synapse/app/homeserver.py --host my.domain.name --port 8448

You may additionally want to pass one or more "-v" options, in order to
increase the verbosity of logging output; at least for initial testing.


Running The Web Client
======================

At the present time, the web client is not directly served by the homeserver's
HTTP server. To serve this in a form the web browser can reach, arrange for the
'webclient' sub-directory to be made available by any sort of HTTP server that
can serve static files. For example, python's SimpleHTTPServer will suffice::

    $ cd webclient
    $ python -m SimpleHTTPServer

You can now point your browser at http://localhost:8000/ to find the client.

If this is the first time you have used the client from that browser (it uses
HTML5 local storage to remember its config), you will need to log in to your
account. If you don't yet have an account, because you've just started the
homeserver for the first time, then you'll need to register one.

Registering A New Account
-------------------------

Your new user name will be formed partly from the hostname your server is
running as, and partly from a localpart you specify when you create the
account. Your name will take the form of::

    @localpart:my.domain.here

(pronounced "at localpart on my dot domain dot here")
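As a small illustration of that convention (a hypothetical helper, not code from this repository), the full ID is simply the localpart and the server's host name joined together:

```python
def make_user_id(localpart, server_name):
    # "@" + localpart + ":" + the hostname the homeserver runs as
    return "@%s:%s" % (localpart, server_name)

print(make_user_id("localpart", "my.domain.here"))
# -> @localpart:my.domain.here
```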

Specify your desired localpart in the topmost box of the "Register for an
account" form, and click the "Register" button.

Logging In To An Existing Account
---------------------------------

[[TODO(paul): It seems the current web client still requests an access_token -
I suspect this part will need updating before we can point people at how to
perform e.g. user+password or 3PID authenticated login]]


Building Documentation
======================

Before building documentation, install sphinx and sphinxcontrib-napoleon::

    $ pip install sphinx
    $ pip install sphinxcontrib-napoleon

Building documentation::

    $ python setup.py build_sphinx
699	cmdclient/console.py	Executable file
@@ -0,0 +1,699 @@
|
|||||||
|
#! /usr/bin/env python
|
||||||
|
""" Starts a synapse client console. """
|
||||||
|
|
||||||
|
from twisted.internet import reactor, defer, threads
|
||||||
|
from http import TwistedHttpClient
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import cmd
|
||||||
|
import getpass
|
||||||
|
import json
|
||||||
|
import shlex
|
||||||
|
import sys
|
||||||
|
import time
|
||||||
|
import urllib
|
||||||
|
import urlparse
|
||||||
|
|
||||||
|
import nacl.signing
|
||||||
|
import nacl.encoding
|
||||||
|
|
||||||
|
from syutil.crypto.jsonsign import verify_signed_json, SignatureVerifyException
|
||||||
|
|
||||||
|
CONFIG_JSON = "cmdclient_config.json"
|
||||||
|
|
||||||
|
TRUSTED_ID_SERVERS = [
|
||||||
|
'localhost:8001'
|
||||||
|
]
|
||||||
|
|
||||||
|
class SynapseCmd(cmd.Cmd):
|
||||||
|
|
||||||
|
"""Basic synapse command-line processor.
|
||||||
|
|
||||||
|
This processes commands from the user and calls the relevant HTTP methods.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, http_client, server_url, identity_server_url, username, token):
|
||||||
|
cmd.Cmd.__init__(self)
|
||||||
|
self.http_client = http_client
|
||||||
|
self.http_client.verbose = True
|
||||||
|
self.config = {
|
||||||
|
"url": server_url,
|
||||||
|
"identityServerUrl": identity_server_url,
|
||||||
|
"user": username,
|
||||||
|
"token": token,
|
||||||
|
"verbose": "on",
|
||||||
|
"complete_usernames": "on",
|
||||||
|
"send_delivery_receipts": "on"
|
||||||
|
}
|
||||||
|
self.path_prefix = "/matrix/client/api/v1"
|
||||||
|
self.event_stream_token = "START"
|
||||||
|
self.prompt = ">>> "
|
||||||
|
|
||||||
|
def do_EOF(self, line): # allows CTRL+D quitting
|
||||||
|
return True
|
||||||
|
|
||||||
|
def emptyline(self):
|
||||||
|
pass # else it repeats the previous command
|
||||||
|
|
||||||
|
def _usr(self):
|
||||||
|
return self.config["user"]
|
||||||
|
|
||||||
|
def _tok(self):
|
||||||
|
return self.config["token"]
|
||||||
|
|
||||||
|
def _url(self):
|
||||||
|
return self.config["url"] + self.path_prefix
|
||||||
|
|
||||||
|
def _identityServerUrl(self):
|
||||||
|
return self.config["identityServerUrl"]
|
||||||
|
|
||||||
|
def _is_on(self, config_name):
|
||||||
|
if config_name in self.config:
|
||||||
|
return self.config[config_name] == "on"
|
||||||
|
return False
|
||||||
|
|
||||||
|
def _domain(self):
|
||||||
|
return self.config["user"].split(":")[1]
|
||||||
|
|
||||||
|
def do_config(self, line):
|
||||||
|
""" Show the config for this client: "config"
|
||||||
|
Edit a key value mapping: "config key value" e.g. "config token 1234"
|
||||||
|
Config variables:
|
||||||
|
user: The username to auth with.
|
||||||
|
token: The access token to auth with.
|
||||||
|
url: The url of the server.
|
||||||
|
verbose: [on|off] The verbosity of requests/responses.
|
||||||
|
complete_usernames: [on|off] Auto complete partial usernames by
|
||||||
|
assuming they are on the same homeserver as you.
|
||||||
|
E.g. name >> @name:yourhost
|
||||||
|
send_delivery_receipts: [on|off] Automatically send receipts to
|
||||||
|
messages when performing a 'stream' command.
|
||||||
|
Additional key/values can be added and can be substituted into requests
|
||||||
|
by using $. E.g. 'config roomid room1' then 'raw get /rooms/$roomid'.
|
||||||
|
"""
|
||||||
|
if len(line) == 0:
|
||||||
|
print json.dumps(self.config, indent=4)
|
||||||
|
return
|
||||||
|
|
||||||
|
try:
|
||||||
|
args = self._parse(line, ["key", "val"], force_keys=True)
|
||||||
|
|
||||||
|
# make sure restricted config values are checked
|
||||||
|
config_rules = [ # key, valid_values
|
||||||
|
("verbose", ["on", "off"]),
|
||||||
|
("complete_usernames", ["on", "off"]),
|
||||||
|
("send_delivery_receipts", ["on", "off"])
|
||||||
|
]
|
||||||
|
for key, valid_vals in config_rules:
|
||||||
|
if key == args["key"] and args["val"] not in valid_vals:
|
||||||
|
print "%s value must be one of %s" % (args["key"],
|
||||||
|
valid_vals)
|
||||||
|
return
|
||||||
|
|
||||||
|
# toggle the http client verbosity
|
||||||
|
if args["key"] == "verbose":
|
||||||
|
self.http_client.verbose = "on" == args["val"]
|
||||||
|
|
||||||
|
# assign the new config
|
||||||
|
self.config[args["key"]] = args["val"]
|
||||||
|
print json.dumps(self.config, indent=4)
|
||||||
|
|
||||||
|
save_config(self.config)
|
||||||
|
except Exception as e:
|
||||||
|
print e
|
||||||
|
|
||||||
|
def do_register(self, line):
|
||||||
|
"""Registers for a new account: "register <userid> <noupdate>"
|
||||||
|
<userid> : The desired user ID
|
||||||
|
<noupdate> : Do not automatically clobber config values.
|
||||||
|
"""
|
||||||
|
args = self._parse(line, ["userid", "noupdate"])
|
||||||
|
path = "/register"
|
||||||
|
|
||||||
|
password = None
|
||||||
|
pwd = None
|
||||||
|
pwd2 = "_"
|
||||||
|
while pwd != pwd2:
|
||||||
|
pwd = getpass.getpass("(Optional) Type a password for this user: ")
|
||||||
|
if len(pwd) == 0:
|
||||||
|
print "Not using a password for this user."
|
||||||
|
break
|
||||||
|
pwd2 = getpass.getpass("Retype the password: ")
|
||||||
|
if pwd != pwd2:
|
||||||
|
print "Password mismatch."
|
||||||
|
else:
|
||||||
|
password = pwd
|
||||||
|
|
||||||
|
body = {}
|
||||||
|
if "userid" in args:
|
||||||
|
body["user_id"] = args["userid"]
|
||||||
|
if password:
|
||||||
|
body["password"] = password
|
||||||
|
|
||||||
|
reactor.callFromThread(self._do_register, "POST", path, body,
|
||||||
|
"noupdate" not in args)
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def _do_register(self, method, path, data, update_config):
|
||||||
|
url = self._url() + path
|
||||||
|
json_res = yield self.http_client.do_request(method, url, data=data)
|
||||||
|
print json.dumps(json_res, indent=4)
|
||||||
|
if update_config and "user_id" in json_res:
|
||||||
|
self.config["user"] = json_res["user_id"]
|
||||||
|
self.config["token"] = json_res["access_token"]
|
||||||
|
save_config(self.config)
|
||||||
|
|
||||||
|
def do_login(self, line):
|
||||||
|
"""Login as a specific user: "login @bob:localhost"
|
||||||
|
You MAY be prompted for a password, or instructed to visit a URL.
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
args = self._parse(line, ["user_id"], force_keys=True)
|
||||||
|
can_login = threads.blockingCallFromThread(
|
||||||
|
reactor,
|
||||||
|
self._check_can_login)
|
||||||
|
if can_login:
|
||||||
|
p = getpass.getpass("Enter your password: ")
|
||||||
|
user = args["user_id"]
|
||||||
|
if self._is_on("complete_usernames") and not user.startswith("@"):
|
||||||
|
user = "@" + user + ":" + self._domain()
|
||||||
|
|
||||||
|
reactor.callFromThread(self._do_login, user, p)
|
||||||
|
print " got %s " % p
|
||||||
|
except Exception as e:
|
||||||
|
print e
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def _do_login(self, user, password):
|
||||||
|
path = "/login"
|
||||||
|
data = {
|
||||||
|
"user": user,
|
||||||
|
"password": password,
|
||||||
|
"type": "m.login.password"
|
||||||
|
}
|
||||||
|
url = self._url() + path
|
||||||
|
json_res = yield self.http_client.do_request("POST", url, data=data)
|
||||||
|
print json_res
|
||||||
|
|
||||||
|
if "access_token" in json_res:
|
||||||
|
self.config["user"] = user
|
||||||
|
self.config["token"] = json_res["access_token"]
|
||||||
|
save_config(self.config)
|
||||||
|
print "Login successful."
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def _check_can_login(self):
|
||||||
|
path = "/login"
|
||||||
|
# ALWAYS check that the home server can handle the login request before
|
||||||
|
# submitting!
|
||||||
|
url = self._url() + path
|
||||||
|
json_res = yield self.http_client.do_request("GET", url)
|
||||||
|
print json_res
|
||||||
|
|
||||||
|
if ("type" not in json_res or "m.login.password" != json_res["type"] or
|
||||||
|
"stages" in json_res):
|
||||||
|
fallback_url = self._url() + "/login/fallback"
|
||||||
|
print ("Unable to login via the command line client. Please visit "
|
||||||
|
"%s to login." % fallback_url)
|
||||||
|
defer.returnValue(False)
|
||||||
|
defer.returnValue(True)
|
||||||
|
|
||||||
|
def do_3pidrequest(self, line):
|
||||||
|
"""Requests the association of a third party identifier
|
||||||
|
<medium> The medium of the identifer (currently only 'email')
|
||||||
|
<address> The address of the identifer (ie. the email address)
|
||||||
|
"""
|
||||||
|
args = self._parse(line, ['medium', 'address'])
|
||||||
|
|
||||||
|
if not args['medium'] == 'email':
|
||||||
|
print "Only email is supported currently"
|
||||||
|
return
|
||||||
|
|
||||||
|
postArgs = {'email': args['address'], 'clientSecret': '____'}
|
||||||
|
|
||||||
|
reactor.callFromThread(self._do_3pidrequest, postArgs)
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def _do_3pidrequest(self, args):
|
||||||
|
url = self._identityServerUrl()+"/matrix/identity/api/v1/validate/email/requestToken"
|
||||||
|
|
||||||
|
json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False,
|
||||||
|
headers={'Content-Type': ['application/x-www-form-urlencoded']})
|
||||||
|
print json_res
|
||||||
|
if 'tokenId' in json_res:
|
||||||
|
print "Token ID %s sent" % (json_res['tokenId'])
|
||||||
|
|
||||||
|
def do_3pidvalidate(self, line):
|
||||||
|
"""Validate and associate a third party ID
|
||||||
|
<medium> The medium of the identifer (currently only 'email')
|
||||||
|
<tokenId> The identifier iof the token given in 3pidrequest
|
||||||
|
<token> The token sent to your third party identifier address
|
||||||
|
"""
|
||||||
|
args = self._parse(line, ['medium', 'tokenId', 'token'])
|
||||||
|
|
||||||
|
if not args['medium'] == 'email':
|
||||||
|
print "Only email is supported currently"
|
||||||
|
return
|
||||||
|
|
||||||
|
postArgs = { 'tokenId' : args['tokenId'], 'token' : args['token'] }
|
||||||
|
postArgs['mxId'] = self.config["user"]
|
||||||
|
|
||||||
|
reactor.callFromThread(self._do_3pidvalidate, postArgs)
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def _do_3pidvalidate(self, args):
|
||||||
|
url = self._identityServerUrl()+"/matrix/identity/api/v1/validate/email/submitToken"
|
||||||
|
|
||||||
|
json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False,
|
||||||
|
headers={'Content-Type': ['application/x-www-form-urlencoded']})
|
||||||
|
print json_res
|
||||||
|
|
||||||
|
def do_join(self, line):
|
||||||
|
"""Joins a room: "join <roomid>" """
|
||||||
|
try:
|
||||||
|
args = self._parse(line, ["roomid"], force_keys=True)
|
||||||
|
self._do_membership_change(args["roomid"], "join", self._usr())
|
||||||
|
except Exception as e:
|
||||||
|
print e
|
||||||
|
|
||||||
|
def do_joinalias(self, line):
|
||||||
|
try:
|
||||||
|
args = self._parse(line, ["roomname"], force_keys=True)
|
||||||
|
path = "/join/%s" % urllib.quote(args["roomname"])
|
||||||
|
reactor.callFromThread(self._run_and_pprint, "PUT", path, {})
|
||||||
|
except Exception as e:
|
||||||
|
print e
|
||||||
|
|
||||||
|
def do_topic(self, line):
|
||||||
|
""""topic [set|get] <roomid> [<newtopic>]"
|
||||||
|
Set the topic for a room: topic set <roomid> <newtopic>
|
||||||
|
Get the topic for a room: topic get <roomid>
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
args = self._parse(line, ["action", "roomid", "topic"])
|
||||||
|
if "action" not in args or "roomid" not in args:
|
||||||
|
print "Must specify set|get and a room ID."
|
||||||
|
return
|
||||||
|
if args["action"].lower() not in ["set", "get"]:
|
||||||
|
print "Must specify set|get, not %s" % args["action"]
|
||||||
|
return
|
||||||
|
|
||||||
|
path = "/rooms/%s/topic" % urllib.quote(args["roomid"])
|
||||||
|
|
||||||
|
if args["action"].lower() == "set":
|
||||||
|
if "topic" not in args:
|
||||||
|
print "Must specify a new topic."
|
||||||
|
return
|
||||||
|
body = {
|
||||||
|
"topic": args["topic"]
|
||||||
|
}
|
||||||
|
reactor.callFromThread(self._run_and_pprint, "PUT", path, body)
|
||||||
|
elif args["action"].lower() == "get":
|
||||||
|
reactor.callFromThread(self._run_and_pprint, "GET", path)
|
||||||
|
except Exception as e:
|
||||||
|
print e
|
||||||
|
|
||||||
|
def do_invite(self, line):
|
||||||
|
"""Invite a user to a room: "invite <userid> <roomid>" """
|
||||||
|
try:
|
||||||
|
args = self._parse(line, ["userid", "roomid"], force_keys=True)
|
||||||
|
|
||||||
|
user_id = args["userid"]
|
||||||
|
|
||||||
|
reactor.callFromThread(self._do_invite, args["roomid"], user_id)
|
||||||
|
except Exception as e:
|
||||||
|
print e
|
||||||
|
|
||||||
|
@defer.inlineCallbacks
|
||||||
|
def _do_invite(self, roomid, userstring):
|
||||||
|
if (not userstring.startswith('@') and
|
||||||
|
self._is_on("complete_usernames")):
|
||||||
|
url = self._identityServerUrl()+"/matrix/identity/api/v1/lookup"
|
||||||
|
|
||||||
|
json_res = yield self.http_client.do_request("GET", url, qparams={'medium':'email','address':userstring})
|
||||||
|
|
||||||
|
mxid = None
|
||||||
|
|
||||||
|
if 'mxid' in json_res and 'signatures' in json_res:
|
||||||
|
url = self._identityServerUrl()+"/matrix/identity/api/v1/pubkey/ed25519"
|
||||||
|
|
||||||
|
pubKey = None
|
||||||
|
pubKeyObj = yield self.http_client.do_request("GET", url)
|
||||||
|
if 'public_key' in pubKeyObj:
|
||||||
|
pubKey = nacl.signing.VerifyKey(pubKeyObj['public_key'], encoder=nacl.encoding.HexEncoder)
|
||||||
|
else:
|
||||||
|
print "No public key found in pubkey response!"
|
||||||
|
|
||||||
|
sigValid = False
|
||||||
|
|
||||||
|
if pubKey:
|
||||||
|
for signame in json_res['signatures']:
|
||||||
|
if signame not in TRUSTED_ID_SERVERS:
|
||||||
|
print "Ignoring signature from untrusted server %s" % (signame)
|
||||||
|
else:
|
||||||
|
try:
|
||||||
|
verify_signed_json(json_res, signame, pubKey)
|
||||||
|
sigValid = True
|
||||||
|
print "Mapping %s -> %s correctly signed by %s" % (userstring, json_res['mxid'], signame)
|
||||||
|
break
|
||||||
|
except SignatureVerifyException as e:
|
||||||
|
print "Invalid signature from %s" % (signame)
|
||||||
|
print e
|
||||||
|
|
||||||
|
if sigValid:
|
||||||
|
print "Resolved 3pid %s to %s" % (userstring, json_res['mxid'])
|
||||||
|
mxid = json_res['mxid']
|
||||||
|
else:
|
||||||
|
print "Got association for %s but couldn't verify signature" % (userstring)
|
||||||
|
|
||||||
|
if not mxid:
|
||||||
|
mxid = "@" + userstring + ":" + self._domain()
|
||||||
|
|
||||||
|
self._do_membership_change(roomid, "invite", mxid)
|
||||||
|
|
||||||
|
    def do_leave(self, line):
        """Leaves a room: "leave <roomid>" """
        try:
            args = self._parse(line, ["roomid"], force_keys=True)
            path = ("/rooms/%s/members/%s/state" %
                    (urllib.quote(args["roomid"]), self._usr()))
            reactor.callFromThread(self._run_and_pprint, "DELETE", path)
        except Exception as e:
            print e

    def do_send(self, line):
        """Sends a message. "send <roomid> <body>" """
        args = self._parse(line, ["roomid", "body"])
        msg_id = "m%s" % int(time.time())
        path = "/rooms/%s/messages/%s/%s" % (urllib.quote(args["roomid"]),
                                             self._usr(),
                                             msg_id)
        body_json = {
            "msgtype": "m.text",
            "body": args["body"]
        }
        reactor.callFromThread(self._run_and_pprint, "PUT", path, body_json)

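do_send builds the REST path from a quoted room ID, the sender, and a timestamp-derived message ID. A sketch of that path construction (Python 3; the helper name is hypothetical):

```python
import time
import urllib.parse

def message_path(room_id, user_id, msg_id=None):
    # Mirror do_send: percent-encode the room ID and derive a message ID
    # from the current time if none is supplied.
    if msg_id is None:
        msg_id = "m%s" % int(time.time())
    return "/rooms/%s/messages/%s/%s" % (
        urllib.parse.quote(room_id), user_id, msg_id)
```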
    def do_list(self, line):
        """List data about a room.
        "list members <roomid> [query]" - List all the members in this room.
        "list messages <roomid> [query]" - List all the messages in this room.

        Where [query] will be directly applied as query parameters, allowing
        you to use the pagination API. E.g. the last 3 messages in this room:
        "list messages <roomid> from=END&to=START&limit=3"
        """
        args = self._parse(line, ["type", "roomid", "qp"])
        if "type" not in args or "roomid" not in args:
            print "Must specify type and room ID."
            return
        if args["type"] not in ["members", "messages"]:
            print "Unrecognised type: %s" % args["type"]
            return
        room_id = args["roomid"]
        path = "/rooms/%s/%s/list" % (urllib.quote(room_id), args["type"])

        qp = {"access_token": self._tok()}
        if "qp" in args:
            for key_value_str in args["qp"].split("&"):
                try:
                    key_value = key_value_str.split("=")
                    qp[key_value[0]] = key_value[1]
                except:
                    print "Bad query param: %s" % key_value
                    return

        reactor.callFromThread(self._run_and_pprint, "GET", path,
                               query_params=qp)

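The [query] argument is split on "&" and overlaid on top of the access token, which is how "from=END&to=START&limit=3" reaches the pagination API. A sketch of that parsing step (Python 3, function name hypothetical):

```python
def parse_query_params(qp_str, access_token):
    # Mirror do_list: start with the access token, then overlay any
    # "key=value" pairs joined by "&" from the command line.
    qp = {"access_token": access_token}
    for key_value_str in qp_str.split("&"):
        key, _, value = key_value_str.partition("=")
        qp[key] = value
    return qp
```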
    def do_create(self, line):
        """Creates a room.
        "create [public|private] <roomname>" - Create a room <roomname> with the
                                               specified visibility.
        "create <roomname>" - Create a room <roomname> with default visibility.
        "create [public|private]" - Create a room with specified visibility.
        "create" - Create a room with default visibility.
        """
        args = self._parse(line, ["vis", "roomname"])
        # fixup args depending on which were set
        body = {}
        if "vis" in args and args["vis"] in ["public", "private"]:
            body["visibility"] = args["vis"]

        if "roomname" in args:
            room_name = args["roomname"]
            body["room_alias_name"] = room_name
        elif "vis" in args and args["vis"] not in ["public", "private"]:
            room_name = args["vis"]
            body["room_alias_name"] = room_name

        reactor.callFromThread(self._run_and_pprint, "POST", "/rooms", body)

    def do_raw(self, line):
        """Directly send a JSON object: "raw <method> <path> <data> <notoken>"
        <method>: Required. One of "PUT", "GET", "POST", "xPUT", "xGET",
        "xPOST". Methods with 'x' prefixed will not automatically append the
        access token.
        <path>: Required. E.g. "/events"
        <data>: Optional. E.g. "{ "msgtype":"custom.text", "body":"abc123"}"
        """
        args = self._parse(line, ["method", "path", "data"])
        # sanity check
        if "method" not in args or "path" not in args:
            print "Must specify path and method."
            return

        args["method"] = args["method"].upper()
        valid_methods = ["PUT", "GET", "POST", "DELETE",
                         "XPUT", "XGET", "XPOST", "XDELETE"]
        if args["method"] not in valid_methods:
            print "Unsupported method: %s" % args["method"]
            return

        if "data" not in args:
            args["data"] = None
        else:
            try:
                args["data"] = json.loads(args["data"])
            except Exception as e:
                print "Data is not valid JSON. %s" % e
                return

        qp = {"access_token": self._tok()}
        if args["method"].startswith("X"):
            qp = {}  # remove access token
            args["method"] = args["method"][1:]  # snip the X
        else:
            # append any query params the user has set
            try:
                parsed_url = urlparse.urlparse(args["path"])
                qp.update(urlparse.parse_qs(parsed_url.query))
                args["path"] = parsed_url.path
            except:
                pass

        reactor.callFromThread(self._run_and_pprint, args["method"],
                               args["path"],
                               args["data"],
                               query_params=qp)

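The 'x'-prefix convention above is a small dispatch rule: strip the leading X and drop the access token. A stand-alone sketch of that rule (Python 3, helper name hypothetical):

```python
def resolve_method(method, access_token):
    # Mirror do_raw: methods prefixed with "X" skip the access token;
    # otherwise the token is sent as a query parameter.
    method = method.upper()
    if method.startswith("X"):
        return method[1:], {}
    return method, {"access_token": access_token}
```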
    def do_stream(self, line):
        """Stream data from the server: "stream <longpoll timeout ms>" """
        args = self._parse(line, ["timeout"])
        timeout = 5000
        if "timeout" in args:
            try:
                timeout = int(args["timeout"])
            except ValueError:
                print "Timeout must be in milliseconds."
                return
        reactor.callFromThread(self._do_event_stream, timeout)

    @defer.inlineCallbacks
    def _do_event_stream(self, timeout):
        res = yield self.http_client.get_json(
            self._url() + "/events",
            {
                "access_token": self._tok(),
                "timeout": str(timeout),
                "from": self.event_stream_token
            })
        print json.dumps(res, indent=4)

        if "chunk" in res:
            for event in res["chunk"]:
                if (event["type"] == "m.room.message" and
                        self._is_on("send_delivery_receipts") and
                        event["user_id"] != self._usr()):  # not sent by us
                    self._send_receipt(event, "d")

        # update the position in the stream
        if "end" in res:
            self.event_stream_token = res["end"]

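_do_event_stream long-polls /events with a "from" token and advances that token from the response's "end" field, so each poll resumes where the last one stopped. A sketch of that token-advancing loop with a stubbed fetch function (Python 3; `fetch` stands in for the HTTP GET and is an assumption of this sketch):

```python
def drain_events(fetch, token="END"):
    # `fetch(from_token)` stands in for GET /events; it returns a dict
    # with "chunk" (events) and "end" (next token), or None when this
    # sketch has nothing more to poll.
    events = []
    while True:
        res = fetch(token)
        if res is None:
            break
        events.extend(res.get("chunk", []))
        # update the position in the stream
        if "end" in res:
            token = res["end"]
    return events, token

# Two canned long-poll responses for demonstration.
responses = iter([
    {"chunk": [{"type": "m.room.message"}], "end": "t1"},
    {"chunk": [], "end": "t2"},
])

def fake_fetch(token):
    return next(responses, None)
```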
    def _send_receipt(self, event, feedback_type):
        path = ("/rooms/%s/messages/%s/%s/feedback/%s/%s" %
                (urllib.quote(event["room_id"]), event["user_id"], event["msg_id"],
                 self._usr(), feedback_type))
        data = {}
        reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data,
                               alt_text="Sent receipt for %s" % event["msg_id"])

    def _do_membership_change(self, roomid, membership, userid):
        path = "/rooms/%s/members/%s/state" % (urllib.quote(roomid), userid)
        data = {
            "membership": membership
        }
        reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data)

    def do_displayname(self, line):
        """Get or set my displayname: "displayname [new_name]" """
        args = self._parse(line, ["name"])
        path = "/profile/%s/displayname" % (self.config["user"])

        if "name" in args:
            data = {"displayname": args["name"]}
            reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data)
        else:
            reactor.callFromThread(self._run_and_pprint, "GET", path)

    def _do_presence_state(self, state, line):
        args = self._parse(line, ["msgstring"])
        path = "/presence/%s/status" % (self.config["user"])
        data = {"state": state}
        if "msgstring" in args:
            data["status_msg"] = args["msgstring"]

        reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data)

    def do_offline(self, line):
        """Set my presence state to OFFLINE"""
        self._do_presence_state(0, line)

    def do_away(self, line):
        """Set my presence state to AWAY"""
        self._do_presence_state(1, line)

    def do_online(self, line):
        """Set my presence state to ONLINE"""
        self._do_presence_state(2, line)

    def _parse(self, line, keys, force_keys=False):
        """ Parses the given line.

        Args:
            line : The line to parse
            keys : A list of keys to map onto the args
            force_keys : True to enforce that the line has a value for every key
        Returns:
            A dict of key:arg
        """
        line_args = shlex.split(line)
        if force_keys and len(line_args) != len(keys):
            raise IndexError("Must specify all args: %s" % keys)

        # do $ substitutions
        for i, arg in enumerate(line_args):
            for config_key in self.config:
                if ("$" + config_key) in arg:
                    arg = arg.replace("$" + config_key,
                                      self.config[config_key])
            line_args[i] = arg

        return dict(zip(keys, line_args))

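_parse shell-splits the line, substitutes any "$key" occurrences from the saved config, and zips the tokens positionally onto the expected keys. A stand-alone sketch of the same steps (Python 3, function name hypothetical):

```python
import shlex

def parse_line(line, keys, config):
    # Mirror _parse: shell-split the line, substitute "$key" occurrences
    # from the config dict, then zip positionally onto the given keys.
    line_args = shlex.split(line)
    for i, arg in enumerate(line_args):
        for config_key, value in config.items():
            if ("$" + config_key) in arg:
                arg = arg.replace("$" + config_key, value)
        line_args[i] = arg
    return dict(zip(keys, line_args))
```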
    @defer.inlineCallbacks
    def _run_and_pprint(self, method, path, data=None,
                        query_params={"access_token": None}, alt_text=None):
        """ Runs an HTTP request and pretty prints the output.

        Args:
            method: HTTP method
            path: Relative path
            data: Raw JSON data if any
            query_params: dict of query parameters to add to the url
        """
        url = self._url() + path
        if "access_token" in query_params:
            query_params["access_token"] = self._tok()

        json_res = yield self.http_client.do_request(method, url,
                                                     data=data,
                                                     qparams=query_params)
        if alt_text:
            print alt_text
        else:
            print json.dumps(json_res, indent=4)


def save_config(config):
    with open(CONFIG_JSON, 'w') as out:
        json.dump(config, out)


def main(server_url, identity_server_url, username, token, config_path):
    print "Synapse command line client"
    print "==========================="
    print "Server: %s" % server_url
    print "Type 'help' to get started."
    print "Close this console with CTRL+C then CTRL+D."
    if not username or not token:
        print "- 'register <username>' - Register an account"
        print "- 'stream' - Connect to the event stream"
        print "- 'create <roomid>' - Create a room"
        print "- 'send <roomid> <message>' - Send a message"
    http_client = TwistedHttpClient()

    # the command line client
    syn_cmd = SynapseCmd(http_client, server_url, identity_server_url, username, token)

    # load synapse.json config from a previous session
    global CONFIG_JSON
    CONFIG_JSON = config_path  # bit cheeky, but just overwrite the global
    try:
        with open(config_path, 'r') as config:
            syn_cmd.config = json.load(config)
            try:
                http_client.verbose = "on" == syn_cmd.config["verbose"]
            except:
                pass
            print "Loaded config from %s" % config_path
    except:
        pass

    # Twisted-specific: Runs the command processor in Twisted's event loop
    # to maintain a single thread for both commands and event processing.
    # If using another HTTP client, just call syn_cmd.cmdloop()
    reactor.callInThread(syn_cmd.cmdloop)
    reactor.run()


if __name__ == '__main__':
    parser = argparse.ArgumentParser("Starts a synapse client.")
    parser.add_argument(
        "-s", "--server", dest="server", default="http://localhost:8080",
        help="The URL of the home server to talk to.")
    parser.add_argument(
        "-i", "--identity-server", dest="identityserver", default="http://localhost:8090",
        help="The URL of the identity server to talk to.")
    parser.add_argument(
        "-u", "--username", dest="username",
        help="Your username on the server.")
    parser.add_argument(
        "-t", "--token", dest="token",
        help="Your access token.")
    parser.add_argument(
        "-c", "--config", dest="config", default=CONFIG_JSON,
        help="The location of the config.json file to read from.")
    args = parser.parse_args()

    if not args.server:
        print "You must supply a server URL to communicate with."
        parser.print_help()
        sys.exit(1)

    server = args.server
    if not server.startswith("http://"):
        server = "http://" + args.server

    main(server, args.identityserver, args.username, args.token, args.config)
203
cmdclient/http.py
Normal file
@@ -0,0 +1,203 @@
# -*- coding: utf-8 -*-
from twisted.web.client import Agent, readBody
from twisted.web.http_headers import Headers
from twisted.internet import defer, reactor

from pprint import pformat

import json
import urllib


class HttpClient(object):
    """ Interface for talking json over http
    """

    def put_json(self, url, data):
        """ Sends the specified json data using PUT

        Args:
            url (str): The URL to PUT data to.
            data (dict): A dict containing the data that will be used as
                the request body. This will be encoded as JSON.

        Returns:
            Deferred: Succeeds when we get *any* HTTP response.

            The result of the deferred is a tuple of `(code, response)`,
            where `response` is a dict representing the decoded JSON body.
        """
        pass

    def get_json(self, url, args=None):
        """ Gets some json from the given host homeserver and path

        Args:
            url (str): The URL to GET data from.
            args (dict): A dictionary used to create query strings, defaults to
                None.
                **Note**: The value of each key is assumed to be an iterable
                and *not* a string.

        Returns:
            Deferred: Succeeds when we get *any* HTTP response.

            The result of the deferred is a tuple of `(code, response)`,
            where `response` is a dict representing the decoded JSON body.
        """
        pass


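The note on get_json's `args` (each value an iterable, not a string) matches urllib's sequence-aware urlencode mode: a list value expands into repeated "key=value" pairs. A sketch in Python 3, where the old `urllib.urlencode(args, True)` becomes `urllib.parse.urlencode(args, doseq=True)`:

```python
import urllib.parse

def build_url(url, args):
    # doseq=True expands iterable values into repeated "key=value"
    # pairs, matching urllib.urlencode(args, True) in the Python 2 code.
    return "%s?%s" % (url, urllib.parse.urlencode(args, doseq=True))
```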
class TwistedHttpClient(HttpClient):
    """ Wrapper around the twisted HTTP client api.

    Attributes:
        agent (twisted.web.client.Agent): The twisted Agent used to send the
            requests.
    """

    def __init__(self):
        self.agent = Agent(reactor)

    @defer.inlineCallbacks
    def put_json(self, url, data):
        response = yield self._create_put_request(
            url,
            data,
            headers_dict={"Content-Type": ["application/json"]}
        )
        body = yield readBody(response)
        defer.returnValue((response.code, body))

    @defer.inlineCallbacks
    def get_json(self, url, args=None):
        if args:
            # generates a list of strings of form "k=v".
            qs = urllib.urlencode(args, True)
            url = "%s?%s" % (url, qs)
        response = yield self._create_get_request(url)
        body = yield readBody(response)
        defer.returnValue(json.loads(body))

    def _create_put_request(self, url, json_data, headers_dict={}):
        """ Wrapper of _create_request to issue a PUT request
        """

        if "Content-Type" not in headers_dict:
            # defer.error does not exist; fail the returned Deferred instead
            return defer.fail(
                RuntimeError("Must include Content-Type header for PUTs"))

        return self._create_request(
            "PUT",
            url,
            producer=_JsonProducer(json_data),
            headers_dict=headers_dict
        )

    def _create_get_request(self, url, headers_dict={}):
        """ Wrapper of _create_request to issue a GET request
        """
        return self._create_request(
            "GET",
            url,
            headers_dict=headers_dict
        )

    @defer.inlineCallbacks
    def do_request(self, method, url, data=None, qparams=None, jsonreq=True, headers={}):
        if qparams:
            url = "%s?%s" % (url, urllib.urlencode(qparams, True))

        if jsonreq:
            prod = _JsonProducer(data)
            headers['Content-Type'] = ["application/json"]
        else:
            prod = _RawProducer(data)

        if method in ["POST", "PUT"]:
            response = yield self._create_request(method, url,
                                                  producer=prod,
                                                  headers_dict=headers)
        else:
            response = yield self._create_request(method, url)

        body = yield readBody(response)
        defer.returnValue(json.loads(body))

    @defer.inlineCallbacks
    def _create_request(self, method, url, producer=None, headers_dict={}):
        """ Creates and sends a request to the given url
        """
        headers_dict["User-Agent"] = ["Synapse Cmd Client"]

        retries_left = 5
        print "%s to %s with headers %s" % (method, url, headers_dict)
        if self.verbose and producer:
            if "password" in producer.data:
                temp = producer.data["password"]
                producer.data["password"] = "[REDACTED]"
                print json.dumps(producer.data, indent=4)
                producer.data["password"] = temp
            else:
                print json.dumps(producer.data, indent=4)

        while True:
            try:
                response = yield self.agent.request(
                    method,
                    url.encode("UTF8"),
                    Headers(headers_dict),
                    producer
                )
                break
            except Exception as e:
                print "uh oh: %s" % e
                if retries_left:
                    yield self.sleep(2 ** (5 - retries_left))
                    retries_left -= 1
                else:
                    raise e

        if self.verbose:
            print "Status %s %s" % (response.code, response.phrase)
            print pformat(list(response.headers.getAllRawHeaders()))
        defer.returnValue(response)

    def sleep(self, seconds):
        d = defer.Deferred()
        reactor.callLater(seconds, d.callback, seconds)
        return d

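The retry loop above sleeps `2 ** (5 - retries_left)` seconds between attempts, i.e. an exponential backoff of 1, 2, 4, 8, 16 seconds over five retries. A sketch that computes that schedule (Python 3, function name hypothetical):

```python
def backoff_schedule(max_retries=5):
    # First retry waits 2**0 = 1s, doubling up to 2**(max_retries - 1),
    # matching the 2 ** (5 - retries_left) expression in _create_request.
    delays = []
    retries_left = max_retries
    while retries_left:
        delays.append(2 ** (max_retries - retries_left))
        retries_left -= 1
    return delays
```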
class _RawProducer(object):
    def __init__(self, data):
        self.data = data
        self.body = data
        self.length = len(self.body)

    def startProducing(self, consumer):
        consumer.write(self.body)
        return defer.succeed(None)

    def pauseProducing(self):
        pass

    def stopProducing(self):
        pass


class _JsonProducer(object):
    """ Used by the twisted http client to create the HTTP body from json
    """
    def __init__(self, jsn):
        self.data = jsn
        self.body = json.dumps(jsn).encode("utf8")
        self.length = len(self.body)

    def startProducing(self, consumer):
        consumer.write(self.body)
        return defer.succeed(None)

    def pauseProducing(self):
        pass

    def stopProducing(self):
        pass
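_JsonProducer encodes the dict once up front so `length` can be advertised as the request's Content-Length while `startProducing` streams the bytes to a consumer. A minimal stand-alone sketch of the same idea with a list-backed consumer (Python 3; class names are hypothetical, not Twisted's):

```python
import json

class JsonBody(object):
    # Same idea as _JsonProducer: encode once, expose body and length.
    def __init__(self, jsn):
        self.body = json.dumps(jsn).encode("utf8")
        self.length = len(self.body)

    def startProducing(self, consumer):
        consumer.write(self.body)

chunks = []

class ListConsumer(object):
    # Collects written chunks instead of sending them over a socket.
    def write(self, data):
        chunks.append(data)

producer = JsonBody({"msgtype": "m.text", "body": "hi"})
producer.startProducing(ListConsumer())
```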
16
database-save.sh
Executable file
@@ -0,0 +1,16 @@
#!/bin/sh

# This script will write a dump file of local user state if you want to splat
# your entire server database and start again but preserve the identity of
# local users and their access tokens.
#
# To restore it, use
#
#  $ sqlite3 homeserver.db < table-save.sql

sqlite3 homeserver.db <<'EOF' >table-save.sql
.dump users
.dump access_tokens
.dump presence
.dump profiles
EOF
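The script dumps four tables via sqlite3's `.dump`. The same selective dump can be sketched in Python with the stdlib sqlite3 module (this is a simplified stand-in: it emits only INSERT statements, not the CREATE TABLE schema that `.dump` also produces):

```python
import sqlite3

TABLES = ["users", "access_tokens", "presence", "profiles"]

def dump_tables(conn, tables):
    # Emit INSERT statements for the given tables, similar in spirit to
    # ".dump users" etc.; schema statements are omitted in this sketch.
    lines = []
    for table in tables:
        for row in conn.execute("SELECT * FROM %s" % table):
            values = ", ".join(repr(v) for v in row)
            lines.append("INSERT INTO %s VALUES(%s);" % (table, values))
    return lines
```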
22
demo/README
Normal file
@@ -0,0 +1,22 @@
Requires you to have done:
    python setup.py develop


The demo start.sh will start three synapse servers on ports 8080, 8081 and 8082, with host names localhost:$port. This can be easily changed to `hostname`:$port in start.sh if required.
It will also start a web server on port 8000 pointed at the webclient.

stop.sh will stop the synapse servers and the webclient.

clean.sh will delete the databases and log files.

To start a completely new set of servers, run:

    ./demo/stop.sh; ./demo/clean.sh && ./demo/start.sh

Logs and sqlitedb will be stored in demo/808{0,1,2}.{log,db}



Also note that when joining a public room on a different HS via "#foo:bar.net", you are (in the current implementation) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name.
16
demo/clean.sh
Executable file
@@ -0,0 +1,16 @@
#!/bin/bash

set -e

DIR="$( cd "$( dirname "$0" )" && pwd )"

PID_FILE="$DIR/servers.pid"

if [ -f "$PID_FILE" ]; then
    echo "servers.pid exists!"
    exit 1
fi

find "$DIR" -name "*.log" -delete
find "$DIR" -name "*.db" -delete
24
demo/start.sh
Executable file
@@ -0,0 +1,24 @@
#!/bin/bash

DIR="$( cd "$( dirname "$0" )" && pwd )"

CWD=$(pwd)

cd "$DIR/.."

for port in "8080" "8081" "8082"; do
    echo "Starting server on port $port... "

    python -m synapse.app.homeserver \
        -p "$port" \
        -H "localhost:$port" \
        -f "$DIR/$port.log" \
        -d "$DIR/$port.db" \
        -vv \
        -D --pid-file "$DIR/$port.pid"
done

echo "Starting webclient on port 8000..."
python "demo/webserver.py" -p 8000 -P "$DIR/webserver.pid" "webclient"

cd "$CWD"
14
demo/stop.sh
Executable file
@@ -0,0 +1,14 @@
#!/bin/bash

DIR="$( cd "$( dirname "$0" )" && pwd )"

FILES=$(find "$DIR" -name "*.pid" -type f)

for pid_file in $FILES; do
    pid=$(cat "$pid_file")
    if [[ $pid ]]; then
        echo "Killing $pid_file with $pid"
        kill $pid
    fi
done

39
demo/webserver.py
Normal file
@@ -0,0 +1,39 @@
import argparse
import BaseHTTPServer
import os
import SimpleHTTPServer

from daemonize import Daemonize


def setup():
    parser = argparse.ArgumentParser()
    parser.add_argument("directory")
    parser.add_argument("-p", "--port", dest="port", type=int, default=8080)
    parser.add_argument('-P', "--pid-file", dest="pid", default="web.pid")
    args = parser.parse_args()

    # Get absolute path to directory to serve, as daemonize changes to '/'
    os.chdir(args.directory)
    dr = os.getcwd()

    httpd = BaseHTTPServer.HTTPServer(
        ('', args.port),
        SimpleHTTPServer.SimpleHTTPRequestHandler
    )

    def run():
        os.chdir(dr)
        httpd.serve_forever()

    daemon = Daemonize(
        app="synapse-webclient",
        pid=args.pid,
        action=run,
        auto_close_fds=False,
    )

    daemon.start()

if __name__ == '__main__':
    setup()
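demo/webserver.py uses Python 2's BaseHTTPServer/SimpleHTTPServer pair. The same static file server (minus the daemonizing) can be sketched with the Python 3 stdlib equivalents (requires Python 3.7+ for ThreadingHTTPServer and the `directory` parameter):

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

def serve_directory(directory, port=0):
    # Python 3 stand-in for BaseHTTPServer.HTTPServer plus
    # SimpleHTTPServer.SimpleHTTPRequestHandler; port 0 lets the OS
    # pick a free port.
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    httpd = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

# Serve a scratch directory and fetch a file back from it.
docroot = tempfile.mkdtemp()
with open(os.path.join(docroot, "index.html"), "wb") as f:
    f.write(b"hello")
httpd = serve_directory(docroot)
port = httpd.server_address[1]
body = urllib.request.urlopen(
    "http://127.0.0.1:%d/index.html" % port).read()
httpd.shutdown()
```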
1264
docs/client-server/specification
Normal file
File diff suppressed because it is too large
92
docs/client-server/urls
Normal file
@@ -0,0 +1,92 @@
=========================
Client-Server URL Summary
=========================

A brief overview of the URL scheme involved in the Synapse Client-Server API.


URLs
====

Fetch events:
  GET /events

Registering an account
  POST /register

Unregistering an account
  POST /unregister

Rooms
-----

Creating a room by ID
  PUT /rooms/$roomid

Creating an anonymous room
  POST /rooms

Room topic
  GET /rooms/$roomid/topic
  PUT /rooms/$roomid/topic

List rooms
  GET /rooms/list

Invite/Join/Leave
  GET /rooms/$roomid/members/$userid/state
  PUT /rooms/$roomid/members/$userid/state
  DELETE /rooms/$roomid/members/$userid/state

List members
  GET /rooms/$roomid/members/list

Sending/reading messages
  PUT /rooms/$roomid/messages/$sender/$msgid

Feedback
  GET /rooms/$roomid/messages/$sender/$msgid/feedback/$feedbackuser/$feedback
  PUT /rooms/$roomid/messages/$sender/$msgid/feedback/$feedbackuser/$feedback

Paginating messages
  GET /rooms/$roomid/messages/list

Profiles
--------

Display name
  GET /profile/$userid/displayname
  PUT /profile/$userid/displayname

Avatar URL
  GET /profile/$userid/avatar_url
  PUT /profile/$userid/avatar_url

Metadata
  GET /profile/$userid/metadata
  POST /profile/$userid/metadata

Presence
--------

My state or status message
  GET /presence/$userid/status
  PUT /presence/$userid/status
  also 'GET' for fetching others

TODO(paul): per-device idle time, device type; similar to above

My presence list
  GET /presence_list/$myuserid
  POST /presence_list/$myuserid
    body is JSON-encoded dict of keys:
      invite: list of UserID strings to invite
      drop: list of UserID strings to remove
  TODO(paul): define other ops: accept, group management, ordering?

Presence polling start/stop
  POST /presence_list/$myuserid?op=start
  POST /presence_list/$myuserid?op=stop

Presence invite
  POST /presence_list/$myuserid/invite/$targetuserid
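The presence-list POST body described above is a JSON dict with "invite" and "drop" lists of user ID strings. A sketch building that body (Python 3, function name hypothetical):

```python
import json

def presence_list_body(invite=(), drop=()):
    # Body for POST /presence_list/$myuserid: lists of user ID strings
    # to invite to, or remove from, the presence list.
    return json.dumps({"invite": list(invite), "drop": list(drop)})
```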
18
docs/code_style
Normal file
@@ -0,0 +1,18 @@
Basically, PEP8

- Max line width: 80 chars.
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes.
- Use parentheses instead of '\' for line continuation wherever possible (which is pretty much everywhere)
- There should be at most a single new line between:
  - statements
  - functions in a class
- There should be two new lines between:
  - definitions in a module (e.g., between different classes)
- There should be spaces where spaces should be and not where there shouldn't be:
  - a single space after a comma
  - a single space before and after for '=' when used as assignment
  - no spaces before and after for '=' for default values and keyword arguments.

Comments should follow the Google code style. This is so that we can generate documentation with sphinx (http://sphinxcontrib-napoleon.readthedocs.org/en/latest/)
43
docs/documentation_style
Normal file
@@ -0,0 +1,43 @@
===================
Documentation Style
===================

A brief single sentence to describe what this file contains; in this case a
description of the style to write documentation in.


Sections
========

Each section should be separated from the others by two blank lines. Headings
should be underlined using a row of equals signs (===). Paragraphs should be
separated by a single blank line, and wrap to no further than 80 columns.

[[TODO(username): if you want to leave some unanswered questions, notes for
further consideration, or other kinds of comment, use a TODO section. Make sure
to notate it with your name so we know who to ask about it!]]

Subsections
-----------

If required, subsections can use a row of dashes to underline their header. A
single blank line between subsections of a single section.


Bullet Lists
============

* Bullet lists can use asterisks with a single space either side.

* Another blank line between list elements.


Definition Lists
================

Terms:
  Start in the first column, ending with a colon

Definitions:
  Take a two space indent, following immediately from the term without a blank
  line before it, but having a blank line afterwards.
249
docs/model/presence
Normal file
@@ -0,0 +1,249 @@
========
Presence
========

A description of presence information and visibility between users.


Overview
========

Each user has the concept of Presence information. This encodes a sense of the
"availability" of that user, suitable for display on other users' clients.


Presence Information
====================

The basic piece of presence information is an enumeration of a small set of
states, such as "free to chat", "online", "busy", or "offline". The default
state, unless the user changes it, is "online". Lower states suggest some amount
of decreased availability from normal, which might have some client-side effect
like muting notification sounds, and suggests to other users not to bother them
unless it is urgent. Equally, the "free to chat" state exists to let the user
announce their general willingness to receive messages more so than the default.

Home servers should also allow a user to set their state as "hidden" - a state
which behaves as offline, but allows the user to see the client state anyway and
generally interact with client features such as reading message history or
accessing contacts in the address book.

This basic state field applies to the user as a whole, regardless of how many
client devices they have connected. The home server should synchronise this
status choice among multiple devices to ensure the user gets a consistent
experience.
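The small set of states described above could be sketched as an enumeration. The numeric ordering below is an illustrative assumption, chosen so that "lower" values correspond to decreased availability; the state names themselves come from this document:

```python
from enum import IntEnum

class PresenceState(IntEnum):
    OFFLINE = 0
    HIDDEN = 1        # behaves as OFFLINE to everyone else
    BUSY = 2
    ONLINE = 3        # the default state
    FREE_TO_CHAT = 4

def visible_state(state: PresenceState) -> PresenceState:
    """What other users see: HIDDEN is indistinguishable from OFFLINE."""
    return PresenceState.OFFLINE if state is PresenceState.HIDDEN else state

print(visible_state(PresenceState.HIDDEN).name)  # OFFLINE
```

Using an ordered enumeration makes "lower states suggest decreased availability" directly comparable, e.g. `PresenceState.BUSY < PresenceState.ONLINE`.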


Idle Time
---------

As well as the basic state field, the presence information can also show a sense
of an "idle timer". This should be maintained individually by the user's
clients, and the home server can take the highest reported time as the one to
report. Likely this should be presented in fairly coarse granularity; possibly
being limited to letting the home server automatically switch from a "free to
chat" or "online" mode into "idle".

When a user is offline, the Home Server can still report when the user was last
seen online, again perhaps in a somewhat coarse manner.
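The aggregation rule above (take the highest time any device reports, presented coarsely) might look like the following sketch; the 5-minute bucket size is an arbitrary illustrative choice, as the document does not fix a granularity:

```python
def reported_idle_time(device_idle_secs, granularity=300):
    """Aggregate per-device idle times (in seconds) for one user.

    The HS takes the highest idle time reported by any of the user's
    devices, rounded down to a deliberately coarse granularity.
    Returns None when no device has reported any idleness.
    """
    if not device_idle_secs:
        return None
    longest = max(device_idle_secs.values())
    return longest - (longest % granularity)

reported_idle_time({"phone": 620, "laptop": 45})  # -> 600
```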


Device Type
-----------

Client devices that may limit the user experience somewhat (such as "mobile"
devices with limited ability to type on a real keyboard or read large amounts of
text) should report this to the home server, as this is also useful information
to report as "presence" if the user cannot be expected to provide a good typed
response to messages.


Presence List
=============

Each user's home server stores a "presence list" for that user. This stores a
list of other user IDs the user has chosen to add to it (remembering any ACL
Pointer if appropriate).

To be added to a contact list, the user being added must grant permission. Once
granted, both users' HSes store this information, as it allows the user who
has added the contact some more abilities; see below. Since such subscriptions
are likely to be bidirectional, HSes may wish to automatically accept requests
when a reverse subscription already exists.

As a convenience, presence lists should support the ability to collect users
into groups, which could allow things like inviting the entire group to a new
("ad-hoc") chat room, or easy interaction with the profile information ACL
implementation of the HS.


Presence and Permissions
========================

For a viewing user to be allowed to see the presence information of a target
user, either:

* The target user has allowed the viewing user to add them to their presence
  list, or

* The two users share at least one room in common

In the latter case, this allows clients to display some minimal sense of
presence information in a user list for a room.

Home servers can also use the user's choice of presence state as a signal for
how to handle new private one-to-one chat message requests. For example, it
might decide:

  "free to chat": accept anything
  "online": accept from anyone in my address book list
  "busy": accept from anyone in this "important people" group in my address
    book list
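The two visibility rules and the per-state acceptance policy could be sketched as below. The data structures (`presence_lists`, `room_members`, the `ctx` dict) are illustrative assumptions, not part of any defined API:

```python
def may_view_presence(target, viewer, presence_lists, room_members):
    """Visibility rule: either the target granted the viewer a presence-list
    subscription, or the two users share at least one room."""
    if target in presence_lists.get(viewer, set()):
        return True
    return any(
        target in members and viewer in members
        for members in room_members.values()
    )

# Per-state policy for new one-to-one chat requests, as a simple mapping.
ACCEPT_POLICY = {
    "free_to_chat": lambda sender, ctx: True,
    "online": lambda sender, ctx: sender in ctx["address_book"],
    "busy": lambda sender, ctx: sender in ctx["important_people"],
}
```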


API Efficiency
==============

A simple implementation of presence messaging has the ability to cause a large
amount of Internet traffic relating to presence updates. In order to minimise
the impact of such a feature, the following observations can be made:

* There is no point in a Home Server polling status for peers in a user's
  presence list if the user has no clients connected that care about it.

* It is highly likely that most presence subscriptions will be symmetric - a
  given user watching another is likely to in turn be watched by that user.

* It is likely that most subscription pairings will be between users who share
  at least one Room in common, and so their Home Servers are actively
  exchanging message PDUs or transactions relating to that Room.

* Presence update messages do not need realtime guarantees. It is acceptable to
  delay delivery of updates for some small amount of time (10 seconds to a
  minute).

The general model of presence information is that of an HS registering its
interest in receiving presence status updates from other HSes, which then
promise to send them when required. Rather than actively polling for the
current state all the time, HSes can rely on its relative stability to only
push updates when required.

A Home Server should not rely on the long-term validity of this presence
information, however, as this would not cover such cases as a user's server
crashing and thus failing to inform their peers that users it used to host are
no longer available online. Therefore, each promise of future updates should
carry with it a timeout value (whether explicit in the message, or implicit as
some defined default in the protocol), after which the receiving HS should
consider the information potentially stale and request it again.

However, because of the likelihood that two home servers are exchanging messages
relating to chat traffic in a room common to both of them, the ongoing receipt
of these messages can be taken by each server as an implicit notification that
the sending server is still up and running, and therefore that no status changes
have happened; because if they had, the server would have sent them. A second,
larger timeout should be applied to this implicit inference, however, to protect
against implementation bugs or other reasons that the presence state cache may
become invalid; eventually the HS should re-enquire the current state of users
and update them with its own.

The following workflows can therefore be used to handle presence updates:

1 When a user first appears online their HS sends a message to each other HS
  containing at least one user to be watched; each message carrying both a
  notification of the sender's new online status, and a request to obtain and
  watch the target users' presence information. This message implicitly
  promises the sending HS will now push updates to the target HSes.

2 The target HSes then respond with a single message each, containing the
  current status of the requested user(s). These messages too implicitly
  promise the target HSes will themselves push updates to the sending HS.

  As these messages arrive at the sending user's HS they can be pushed to the
  user's client(s), possibly batched again to avoid sending too many small
  messages which add extra protocol overheads.

  At this point, all the user's clients now have the current presence status
  information for this moment in time, and the HSes have promised to send each
  other updates in future.

3 The HS maintains two watchdog timers per peer HS it is exchanging presence
  information with. The first timer should have a relatively small expiry
  (perhaps 1 minute), and the second timer should have a much longer time
  (perhaps 1 hour).

4 Any time any kind of message is received from a peer HS, the short-term
  presence timer associated with it is reset.

5 Whenever either of these timers expires, an HS should push a status reminder
  to the target HS whose timer has now expired, and request again from that
  server the status of the subscribed users.

6 On receipt of one of these presence status reminders, an HS can reset both
  of its presence watchdog timers.

To avoid bursts of traffic, implementations should attempt to stagger the expiry
of the longer-term watchdog timers for different peer HSes.
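Steps 3-6 of this workflow, including the staggering of the long-term timer, can be sketched for a single peer HS as below. The class shape and jitter range are illustrative assumptions; only the two expiry values come from the text:

```python
import random
import time

class PeerPresenceWatchdog:
    """Two watchdog timers for one peer HS, per steps 3-6 above."""

    SHORT_EXPIRY = 60      # "perhaps 1 minute"
    LONG_EXPIRY = 3600     # "perhaps 1 hour"

    def __init__(self, now=time.monotonic):
        self.now = now
        # Jitter the long timer so peers' timers don't all expire in a burst
        # (the +/-10% range is an arbitrary illustrative choice).
        jitter = random.uniform(0.9, 1.1)
        self.long_deadline = self.now() + self.LONG_EXPIRY * jitter
        self.reset_short()

    def reset_short(self):
        # Step 4: any message from the peer resets the short-term timer.
        self.short_deadline = self.now() + self.SHORT_EXPIRY

    def on_status_reminder(self):
        # Step 6: a status reminder from the peer resets both timers.
        self.reset_short()
        self.long_deadline = self.now() + self.LONG_EXPIRY

    def needs_reminder(self):
        # Step 5: if either timer has expired, push a reminder and
        # re-request the subscribed users' state.
        return self.now() >= min(self.short_deadline, self.long_deadline)
```

Injecting the clock (`now`) keeps the timer logic testable without real waiting.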

When individual users actively change their status (either by explicit requests
from clients, or inferred changes due to idle timers or client timeouts), the HS
should batch up any status changes for some reasonable amount of time (10
seconds to a minute). This allows for reduced protocol overheads in the case of
multiple messages needing to be sent to the same peer HS, as is the likely
scenario in many cases, such as a given human user having multiple user
accounts.
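This batching behaviour might look like the following sketch, where changes destined for the same peer HS are held briefly and coalesced (keeping only the newest status per user). All names here are illustrative assumptions:

```python
class StatusChangeBatcher:
    """Hold status changes per peer HS for BATCH_DELAY seconds, coalescing
    multiple changes for the same user into the latest one."""

    BATCH_DELAY = 10  # "10 seconds to a minute"

    def __init__(self, send, now):
        self.send = send        # callable(peer, {user: status})
        self.now = now
        self.pending = {}       # peer -> {user: status}
        self.flush_at = {}      # peer -> flush deadline

    def queue(self, peer, user, status):
        batch = self.pending.setdefault(peer, {})
        if not batch:
            self.flush_at[peer] = self.now() + self.BATCH_DELAY
        batch[user] = status    # later changes overwrite earlier ones

    def tick(self):
        # Called periodically; flushes any batches whose delay has elapsed.
        for peer in list(self.pending):
            if self.now() >= self.flush_at[peer]:
                self.send(peer, self.pending.pop(peer))
                del self.flush_at[peer]
```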


API Requirements
================

The data model presented here puts the following requirements on the APIs:

Client-Server
-------------

Requests that a client can make to its Home Server

* get/set current presence state
  Basic enumeration + ability to set a custom piece of text

* report per-device idle time
  After some (configurable?) idle time the device should send a single message
  to set the idle duration. The HS can then infer a "start of idle" instant and
  use that to keep the device idleness up to date. At some later point the
  device can cancel this idleness.

* report per-device type
  Inform the server that this device is a "mobile" device, or perhaps some
  other to-be-defined category of reduced capability that could be presented to
  other users.

* start/stop presence polling for my presence list
  It is likely that these messages could be implicitly inferred by other
  messages, though having explicit control is always useful.

* get my presence list
  [implicit poll start?]
  It is possible that the HS doesn't yet have current presence information when
  the client requests this. There should be a "don't know" type too.

* add/remove a user to/from my presence list


Server-Server
-------------

Requests that Home Servers make to others

* request permission to add a user to presence list

* allow/deny a request to add to a presence list

* perform a combined presence state push and subscription request
  For each sending user ID, the message contains their new status.
  For each receiving user ID, the message should contain an indication of
  whether the sending server is also interested in receiving status from that
  user; either as an immediate update response now, or as a promise to send
  future updates.

Server to Client
----------------

[[TODO(paul): There also needs to be some way for a user's HS to push status
updates of the presence list to clients, but the general server-client event
model currently lacks a space to do that.]]
232	docs/model/profiles	Normal file
@@ -0,0 +1,232 @@
========
Profiles
========

A description of Synapse user profile metadata support.


Overview
========

Internally within Synapse users are referred to by an opaque ID, which consists
of some opaque localpart combined with the domain name of their home server.
Obviously this does not yield a very nice user experience; users would like to
see readable names for other users that are in some way meaningful to them.
Additionally, users like to be able to publish "profile" details to inform other
users of other information about them.

It is also conceivable that, since we are attempting to provide a
worldwide-applicable messaging system, users may wish to present different
subsets of information in their profile to different other people, from a
privacy and permissions perspective.

A Profile consists of a display name, an (optional?) avatar picture, and a set
of other metadata fields that the user may wish to publish (email address, phone
numbers, website URLs, etc...). We put no requirements on the display name other
than it being a valid Unicode string. Since it is likely that users will end up
having multiple accounts (perhaps by necessity of being hosted in multiple
places, perhaps by choice of wanting multiple distinct identities), it would be
useful for a metadata field type to exist that can refer to another Synapse User
ID, so that clients and HSes can make use of this information.

Metadata Fields
---------------

[[TODO(paul): Likely this list is incomplete; more fields can be defined as we
think of them. At the very least, any sort of supported ID for the 3rd Party ID
servers should be accounted for here.]]

* Synapse Directory Server username(s)

* Email address

* Phone number - classify "home"/"work"/"mobile"/custom?

* Twitter/Facebook/Google+/... social networks

* Location - keep this deliberately vague to allow people to choose how
  granular it is

* "Bio" information - date of birth, etc...

* Synapse User ID of another account

* Web URL

* Freeform description text
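One hypothetical shape for such a profile record is sketched below; the document deliberately leaves the exact field types open, so all names here are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MetadataField:
    kind: str            # e.g. "email", "phone.mobile", "synapse.user_id"
    value: str

@dataclass
class Profile:
    display_name: str                   # any valid Unicode string
    avatar_url: Optional[str] = None    # avatar is possibly optional
    metadata: List[MetadataField] = field(default_factory=list)

# A field of kind "synapse.user_id" lets one account refer to another, as
# suggested above for people with multiple accounts.
profile = Profile(display_name="Paul")
profile.metadata.append(MetadataField("synapse.user_id", "@paul:other.hs"))
```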


Visibility Permissions
======================

A home server implementation could offer the ability to set permissions on
limited visibility of those fields. When another user requests access to the
target user's profile, their own identity should form part of that request. The
HS implementation can then decide which fields to make available to the
requestor.

A particular implementation could allow the user to create one or more
ACLs; where each list is granted permission to see a given set of non-public
fields (compare to Google+ Circles) and contains a set of other people allowed
to use it. By giving these ACLs strong identities within the HS, they can be
referenced in communications with it, granting other users who encounter these
the "ACL Token" to use the details in that ACL.

If we further allow an ACL Token to be present on Room join requests or stored
by 3PID servers, then users of these ACLs gain the extra convenience of not
having to manually curate people in the access list; anyone in the room or with
knowledge of the 3rd Party ID is automatically granted access. Every HS and
client implementation would have to be aware of the existence of these ACL
Tokens, and include them in requests if present, but not every HS implementation
needs to actually provide the full permissions model. This can be used as a
distinguishing feature among competing implementations. However, servers MUST
NOT serve profile information from a cache if there is a chance that its limited
understanding could lead to information leakage.


Client Concerns of Multiple Accounts
====================================

Because a given person may want to have multiple Synapse User accounts, client
implementations should allow the use of multiple accounts simultaneously
(especially in the field of mobile phone clients, which generally don't support
running distinct instances of the same application). Where features like address
books, presence lists or rooms are presented, the client UI should remember to
make distinct which user account is in use for each.


Directory Servers
=================

Directory Servers can provide a forward mapping from human-readable names to
User IDs. These can provide a service similar to giving domain-namespaced names
for Rooms; in this case they can provide a way for a user to reference their
User ID in some external form (e.g. that can be printed on a business card).

The format for a Synapse user name will consist of a localpart specific to the
directory server, and the domain name of that directory server:

  @localname:some.domain.name

The localname is separated from the domain name using a colon, so as to ensure
the localname can still contain periods, as users may want this for similarity
to email addresses or the like, which typically can contain them. The format is
also visually quite distinct from email addresses, phone numbers, etc... so
hopefully reasonably "self-describing" when written on e.g. a business card
without surrounding context.
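Parsing this form is straightforward. Splitting at the *first* colon (so the localname may contain periods but not colons) is an assumption consistent with, though not stated by, the format above:

```python
def parse_user_name(name):
    """Split an "@localname:some.domain.name" form into (localname, domain)."""
    if not name.startswith("@"):
        raise ValueError("user names start with '@'")
    localname, sep, domain = name[1:].partition(":")
    if not sep or not localname or not domain:
        raise ValueError("expected @localname:domain.name")
    return localname, domain

parse_user_name("@john.smith:some.domain.name")
# -> ("john.smith", "some.domain.name")
```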

[[TODO(paul): we might have to think about this one - too close to email?
Twitter? Also it suggests a format scheme for room names of
#localname:domain.name, which I quite like]]

Directory server administrators should be able to make some kind of policy
decision on how these are allocated. Servers within some "closed" domain (such
as company-specific ones) may wish to verify the validity of a mapping using
their own internal mechanisms; "public" naming servers can operate on a FCFS
basis. There are overlapping concerns here with the idea of the 3rd party
identity servers as well, though in this specific case we are creating a new
namespace to allocate names into.

It would also be nice from a user experience perspective if the profile that a
given name links to can also declare that name as part of its metadata.
Furthermore, from a security and consistency perspective, it would be nice if
each end (the directory server and the user's home server) checked the validity
of the mapping in some way. This needs investigation from a security perspective
to ensure against spoofing.

One such model may be that the user starts by declaring to their home server
their intent to use a given user name; the home server then contacts the
directory service. At some point later (maybe immediately for "public open FCFS
servers", maybe after some kind of human intervention for verification) the DS
decides to honour this link, and includes it in its served output. It should
also tell the HS of this fact, so that the HS can present this as fact when
requested for the profile information. For efficiency, it may further wish to
provide the HS with a cryptographically-signed certificate as proof, so the HS
serving the profile can provide that too when asked, saving requesting HSes from
constantly having to contact the DS to verify this mapping. (Note: This is
similar to the security model often applied in DNS to verify PTR <-> A
bidirectional mappings.)


Identity Servers
================

The identity servers should support the concept of a 3PID being able to store an
ACL Token as well as the main User ID. It is, however, beyond scope to do any
kind of verification that any third-party IDs that the profile is claiming match
up to the 3PID mappings.


User Interface and Expectations Concerns
========================================

Given the weak "security" of some parts of this model as compared to what users
might expect, some care should be taken on how it is presented to users,
specifically in the naming or other wording of user interface components.

Most notably, mere knowledge of an ACL Pointer is enough to read the information
stored in it. It is possible that Home or Identity Servers could leak this
information, allowing others to see it. This is a security-vs-convenience
balancing choice on behalf of the user, who would choose, or not, to make use of
such a feature to publish their information.

Additionally, unless some form of strong end-to-end user-based encryption is
used, a user of ACLs for information privacy has to trust other home servers not
to lie about the identity of the user requesting access to the Profile.


API Requirements
================

The data model presented here puts the following requirements on the APIs:

Client-Server
-------------

Requests that a client can make to its Home Server

* get/set my Display Name
  This should return/take a simple "text/plain" field

* get/set my Avatar URL
  The avatar image data itself is not stored by this API; we'll just store a
  URL to let the clients fetch it. Optionally HSes could integrate this with
  their generic content attachment storage service, allowing a user to upload
  their profile Avatar and update the URL to point to it.

* get/add/remove my metadata fields
  Also we need to actually define types of metadata

* get another user's Display Name / Avatar / metadata fields

[[TODO(paul): At some later stage we should consider the API for:

* get/set ACL permissions on my metadata fields

* manage my ACL tokens
]]


Server-Server
-------------

Requests that Home Servers make to others

* get a user's Display Name / Avatar

* get a user's full profile - name/avatar + MD fields
  This request must allow for specifying the User ID of the requesting user,
  for permissions purposes. It also needs to take into account any ACL Tokens
  the requestor has.

* push a change of Display Name to observers (overlaps with the presence API)


Room Event PDU Types
--------------------

Events that are pushed from Home Servers to other Home Servers or clients.

* user Display Name change

* user Avatar change
  [[TODO(paul): should the avatar image itself be stored in all the room
  histories? maybe this event should just be a hint to clients that they should
  re-fetch the avatar image]]
113	docs/model/room-join-workflow	Normal file
@@ -0,0 +1,113 @@
==================
Room Join Workflow
==================

An outline of the workflows required when a user joins a room.


Discovery
=========

To join a room, a user has to discover the room by some mechanism in order to
obtain the (opaque) Room ID and a candidate list of likely home servers that
contain it.

Sending an Invitation
---------------------

The most direct way a user discovers the existence of a room is from an
invitation from some other user who is a member of that room.

The inviter's HS sets the membership status of the invitee to "invited" in the
"m.members" state key by sending a state update PDU. The HS then broadcasts this
PDU among the existing members in the usual way. An invitation message is also
sent to the invited user, containing the Room ID and the PDU ID of this
invitation state change, and potentially a list of some other home servers to
use to accept the invite. The user's client can then choose to display it in
some way to alert the user.

[[TODO(paul): At present, no API has been designed or described to actually send
that invite to the invited user. Likely it will be some facet of the larger
user-user API required for presence, profile management, etc...]]
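The two artefacts produced by this step could be sketched as plain dicts. Only "m.members", the Room ID, the PDU ID, and the candidate server list come from the text; the concrete key names are illustrative assumptions, since no wire format is defined here:

```python
def make_invite(room_id, inviter, invitee, pdu_id, candidate_servers):
    # State update PDU broadcast among the existing members.
    state_update_pdu = {
        "pdu_id": pdu_id,
        "room_id": room_id,
        "state_key": "m.members",
        "target_user": invitee,
        "membership": "invited",
        "sender": inviter,
    }
    # Invitation message sent to the invited user: the Room ID, the PDU ID
    # of the state change, and candidate servers for accepting the invite.
    invitation_message = {
        "room_id": room_id,
        "invite_pdu_id": pdu_id,
        "candidate_servers": candidate_servers,
    }
    return state_update_pdu, invitation_message
```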

Directory Service
-----------------

Alternatively, the user may discover the channel via a directory service, either
by performing a name lookup, or some kind of browse or search activity. However
this is performed, the end result is that the user's home server requests the
Room ID and candidate list from the directory service.

[[TODO(paul): At present, no API has been designed or described for this
directory service]]


Joining
=======

Once the ID and home servers are obtained, the user can then actually join the
room.

Accepting an Invite
-------------------

If a user has received and accepted an invitation to join a room, the invitee's
home server can now send an invite acceptance message to a chosen candidate
server from the list given in the invitation, citing also the PDU ID of the
invitation as "proof" of their invite. (This is required because, due to late
message propagation, it could be the case that the acceptance is received before
the invite by some servers.) If this message is allowed by the candidate server,
it generates a new PDU that updates the invitee's membership status to "joined",
referring back to the acceptance PDU, and broadcasts that as a state change in
the usual way. The newly-invited user is now a full member of the room, and
state propagation proceeds as usual.

Joining a Public Room
---------------------

If a user has discovered the existence of a room they wish to join but does not
have an active invitation, they can request to join it directly by sending a
join message to a candidate server on the list provided by the directory
service. As this list may be out of date, the HS should be prepared to retry
other candidates if the chosen one is no longer aware of the room, because it
no longer has any users as members in it.

Once a candidate server that is aware of the room has been found, it can
broadcast an update PDU to add the member into the "m.members" key, setting
their state directly to "joined" (i.e. bypassing the two-phase invite
semantics), remembering to include the new user's HS in that list.
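The retry-over-stale-candidates behaviour could be sketched as below; `send_join` and `UnknownRoomError` are hypothetical stand-ins for the real federation call and its failure mode:

```python
class UnknownRoomError(Exception):
    """Hypothetical: the candidate server is no longer aware of the room."""

def join_via_candidates(room_id, candidates, send_join):
    """Try candidate servers in turn, since the directory's list may be
    out of date, until one that still knows the room accepts the join."""
    last_error = None
    for server in candidates:
        try:
            return send_join(server, room_id)
        except UnknownRoomError as e:
            last_error = e      # stale candidate; try the next one
    raise last_error or UnknownRoomError(room_id)
```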

Knocking on a Semi-Public Room
------------------------------

If a user requests to join a room but the join mode of the room is "knock", the
join is not immediately allowed. Instead, if the user wishes to proceed, they
can instead post a "knock" message, which informs other members of the room that
the would-be joiner wishes to become a member and sets their membership value to
"knocked". If any of them wish to accept this, they can then send an invitation
in the usual way described above. Knowing that the user has already knocked and
expressed an interest in joining, the invited user's home server should
immediately accept that invitation on the user's behalf, and go on to join the
room in the usual way.

[[NOTE(Erik): Though this may confuse users who expect 'X has joined' to
actually be a user-initiated action, i.e. they may expect that 'X' is actually
looking at synapse right now?]]

[[NOTE(paul): Yes, a fair point. Maybe we should suggest HSes don't do that, and
just offer an invite to the user as normal]]
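Taken together, the workflows in this section imply a small membership state machine, sketched below as a transition table. The membership values ("invited", "joined", "knocked") come from the text; the transition labels are illustrative:

```python
# (old membership, new membership) -> which workflow performs it.
# None stands for "not yet a member".
ALLOWED_TRANSITIONS = {
    (None, "invited"): "invitation sent",
    (None, "joined"): "direct join of a public room",
    (None, "knocked"): "knock on a semi-public room",
    ("invited", "joined"): "accept invite",
    ("knocked", "invited"): "member answers a knock",
}

def check_transition(old, new):
    try:
        return ALLOWED_TRANSITIONS[(old, new)]
    except KeyError:
        raise ValueError(f"disallowed membership change: {old!r} -> {new!r}")
```

Note that a knock never moves directly to "joined": per the text it goes via an invitation, which the knocker's HS may then accept immediately.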

Private and Non-Existent Rooms
------------------------------

If a user requests to join a room but the room is either unknown by the home
server receiving the request, or is known but the join mode is "invite" and the
user has not been invited, the server must respond that the room does not exist.
This is to prevent leaking information about the existence and identity of
private rooms.


Outstanding Questions
=====================

* Do invitations or knocks time out and expire at some point? If so, when? Time
  is hard in distributed systems.
274	docs/model/rooms	Normal file
@@ -0,0 +1,274 @@
===========
Rooms Model
===========

A description of the general data model used to implement Rooms, and the
user-level visible effects and implications.


Overview
========

"Rooms" in Synapse are shared messaging channels over which all the participant
users can exchange messages. Rooms have an opaque persistent identity, a
globally-replicated set of state (consisting principally of a membership set of
users, and other management and miscellaneous metadata), and a message history.
|
||||||
|
|
||||||
|
|
||||||
|
Room Identity and Naming
|
||||||
|
========================
|
||||||
|
|
||||||
|
Rooms can be arbitrarily created by any user on any home server; at which point
|
||||||
|
the home server will sign the message that creates the channel, and the
|
||||||
|
fingerprint of this signature becomes the strong persistent identify of the
|
||||||
|
room. This now identifies the room to any home server in the network regardless
|
||||||
|
of its original origin. This allows the identify of the room to outlive any
|
||||||
|
particular server. Subject to appropriate permissions [to be discussed later],
|
||||||
|
any current member of a room can invite others to join it, can post messages
|
||||||
|
that become part of its history, and can change the persistent state of the room
|
||||||
|
(including its current set of permissions).
|
||||||
|
|
||||||
|
Home servers can provide a directory service, allowing a lookup from a
|
||||||
|
convenient human-readable form of room label to a room ID. This mapping is
|
||||||
|
scoped to the particular home server domain and so simply represents that server
|
||||||
|
administrator's opinion of what room should take that label; it does not have to
|
||||||
|
be globally replicated and does not form part of the stored state of that room.
|
||||||
|
|
||||||
|
This room name takes the form
|
||||||
|
|
||||||
|
#localname:some.domain.name
|
||||||
|
|
||||||
|
for similarity and consistency with user names on directories.
|
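A label of this form splits cleanly into a local part and a domain; as a minimal sketch (the helper name and regex are illustrative, not part of the specification):

```python
import re

# Illustrative only: split a "#localname:some.domain.name" room label
# into its two components.
ROOM_LABEL_RE = re.compile(r"^#(?P<localname>[^:]+):(?P<domain>.+)$")

def parse_room_label(label: str) -> tuple[str, str]:
    """Return (localname, domain) for a "#localname:domain" room label."""
    m = ROOM_LABEL_RE.match(label)
    if m is None:
        raise ValueError(f"not a room label: {label!r}")
    return m.group("localname"), m.group("domain")
```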

To join a room (and therefore to be allowed to inspect past history, post new
messages to it, and read its state), a user must become aware of the room's
fingerprint ID. There are two mechanisms to allow this:

* An invite message from someone else in the room

* A referral from a room directory service

As room IDs are opaque and ephemeral, they can serve as a mechanism to create
"ad-hoc" rooms deliberately unnamed, for small group-chats or even private
one-to-one message exchange.

Stored State and Permissions
============================

Every room has a globally-replicated set of stored state. This state is a set of
key/value or key/subkey/value pairs. The value of every (sub)key is a
JSON-representable object. The main key of a piece of stored state establishes
its meaning; some keys store sub-keys to allow a sub-structure within them [more
detail below]. Some keys have special meaning to Synapse, as they relate to
management details of the room itself, storing such details as user membership,
and permissions of users to alter the state of the room itself. Other keys may
store information to present to users, which the system does not directly rely
on. The key space itself is namespaced, allowing 3rd party extensions, subject
to suitable permission.

Permission management is based on the concept of "power-levels". Every user
within a room has an integer assigned, being their "power-level" within that
room. Along with its actual data value, each key (or subkey) also stores the
minimum power-level a user must have in order to write to that key, the
power-level of the last user who actually did write to it, and the PDU ID of
that state change.

To be accepted as valid, a change must NOT:

* Be made by a user having a power-level lower than required to write to the
  state key

* Alter the required power-level for that state key to a value higher than the
  user has

* Increase that user's own power-level

* Grant any other user a power-level higher than the level of the user making
  the change

[[TODO(paul): consider if relaxations should be allowed; e.g. is the current
outright-winner allowed to raise their own level, to allow for "inflation"?]]
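The four rules above can be checked mechanically; a minimal sketch in Python (function and parameter names are illustrative, not from the specification):

```python
# Illustrative sketch of the four validity rules for a proposed state change.
def change_is_valid(sender_id, sender_level, required_level,
                    new_required_level, granted_levels):
    """granted_levels maps user_id -> power-level the change would assign."""
    # Rule 1: sender must meet the key's required power-level.
    if sender_level < required_level:
        return False
    # Rule 2: may not raise the key's required level above the sender's own.
    if new_required_level > sender_level:
        return False
    for user_id, level in granted_levels.items():
        # Rule 3: the sender may not increase their own power-level.
        if user_id == sender_id and level > sender_level:
            return False
        # Rule 4: may not grant any user a level above the sender's own.
        if level > sender_level:
            return False
    return True
```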

Room State Keys
===============

[[TODO(paul): if this list gets too big it might become necessary to move it
into its own doc]]

The following keys have special semantics or meaning to Synapse itself:

m.member (has subkeys)
  Stores a sub-key for every Synapse User ID which is currently a member of
  this room. Its value gives the membership type ("knocked", "invited",
  "joined").

m.power_levels
  Stores a mapping from Synapse User IDs to their power-level in the room. If
  they are not present in this mapping, the default applies.

  The reason to store this as a single value rather than a value with subkeys
  is that updates to it are atomic, allowing a number of colliding-edit
  problems to be avoided.

m.default_level
  Gives the default power-level for members of the room that do not have one
  specified in their membership key.

m.invite_level
  If set, gives the minimum power-level required for members to invite others
  to join, or to accept knock requests from non-members requesting access. If
  absent, then invites are not allowed. An invitation involves setting the
  invitee's membership type to "invited", in addition to sending the invite
  message.

m.join_rules
  Encodes the rules on how non-members can join the room. Has the following
  possibilities:

  "public" - a non-member can join the room directly

  "knock" - a non-member cannot join the room, but can post a single "knock"
  message requesting access, which existing members may approve or deny

  "invite" - non-members cannot join the room without an invite from an
  existing member

  "private" - nobody who is not in the 'may_join' list or already a member
  may join by any mechanism

  In any of the first three modes, existing members with sufficient permission
  can send invites to non-members if allowed by the "m.invite_level" key. A
  "private" room is not allowed to have the "m.invite_level" set.

  A client may use the value of this key to hint at the user interface it
  should provide; in particular, a private chat with one other user might
  warrant specific handling in the client.

m.may_join
  A list of User IDs that are always allowed to join the room, regardless of
  any of the prevailing join rules and invite levels. These apply even to
  private rooms. These are stored in a single list with normal
  update-powerlevel permissions applied; users cannot arbitrarily remove
  themselves from the list.

m.add_state_level
  The power-level required for a user to be able to add new state keys.

m.public_history
  If set and true, anyone can request the history of the room, without needing
  to be a member of the room.

m.archive_servers
  For "public" rooms with public history, gives a list of home servers that
  should be included in message distribution to the room, even if no users on
  that server are present. These servers ensure that a public room can still
  persist even if no users are currently members of it. This list should be
  consulted by the directory servers as the candidate list they respond with.

The following keys are provided by Synapse for user benefit, but their value is
not otherwise used by Synapse.

m.name
  Stores a short human-readable name for the room, which clients can display
  to a user to assist in identifying which room is which.

  This name specifically is not the strong ID used by the message transport
  system to refer to the room, because it may be changed from time to time.

m.topic
  Stores the current human-readable topic.

Room Creation Templates
=======================

A client (or maybe home server?) could offer a few templates for the creation
of new rooms. For example, for a simple private one-to-one chat the channel
could assign the creator a power-level of 1, requiring a level of 1 to invite,
and needing an invite before members can join. An invite is then sent to the
other party, and if accepted and the other user joins, the creator's
power-level can be reduced to 0. This leaves a room with two participants in
it, with neither able to add more.

Rooms that Continue History
===========================

An option that could be considered for room creation is that when a new room is
created the creator could specify a PDU ID into an existing room, as the history
continuation point. This would be stored as an extra piece of meta-data on the
initial PDU of the room's creation. (It does not appear in the normal previous
PDU linkage.)

This would allow users in rooms to "fork" a room, if it is considered that the
conversations in the room no longer fit its original purpose and the users wish
to diverge. Existing permissions on the original room would of course continue
to apply for viewing that history. If both rooms are considered "public" we
might also want to define a message to post into the original room to represent
this fork point, and give a reference to the new room.

User Direct Message Rooms
=========================

There is no need to build a mechanism for directly sending messages between
users, because a room can handle this ability. To allow direct user-to-user
chat messaging we simply need to be able to create rooms with a specific set of
permissions to allow this direct messaging.

Between any given pair of user IDs that wish to exchange private messages,
there will exist a single shared Room, created lazily by either side. These
rooms will need a certain amount of special handling both in home servers and
in display on clients, but as much as possible should be treated by the lower
layers of code the same as other rooms.

Specifically, a client would likely offer a special menu choice associated with
another user (in room member lists, presence list, etc.) as "direct chat". That
would perform all the necessary steps to create the private chat room.
Receiving clients should display these in a special way too, as the room name
is not important; instead they should distinguish them on the Display Name of
the other party.

Home Servers will need a client-API option to request setting up a new
user-user chat room, which will then need special handling within the server.
It will create a new room with the following state:

  m.member: the proposing user
  m.join_rules: "private"
  m.may_join: both users
  m.power_levels: empty
  m.default_level: 0
  m.add_state_level: 0
  m.public_history: False
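Assuming a JSON-like representation of room state, the initial state listed above could be assembled as follows (a sketch; the helper name and the "joined" membership value for the proposer are illustrative assumptions):

```python
# Illustrative sketch of the initial state for a lazily created
# direct-chat room between two users.
def direct_chat_initial_state(proposer: str, other: str) -> dict:
    return {
        "m.member": {proposer: "joined"},   # only the proposing user at first
        "m.join_rules": "private",
        "m.may_join": [proposer, other],    # both users may always join
        "m.power_levels": {},               # no explicit levels...
        "m.default_level": 0,               # ...everyone defaults to 0
        "m.add_state_level": 0,
        "m.public_history": False,
    }
```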

Having created the room, it can send an invite message to the other user in the
normal way - the room permissions state that no users can be set to the invited
state, but because they're in the may_join list they'd be allowed to join
anyway.

In this arrangement there is now a room which both users may join but neither
has the power to invite any others. Both users now have the confidence that (at
least within the messaging system itself) their messages remain private and
cannot later be provably leaked to a third party. They can freely set the topic
or name if they choose and add or edit any other state of the room. The update
power-level of each of these fixed properties should be 1, to lock out the
users from being able to alter them.

Anti-Glare
==========

There exists the possibility of a race condition if two users who have no chat
history with each other simultaneously create a room and invite the other to
it. This is called a "glare" situation. There are two possible ideas for how to
resolve this:

* Each Home Server should persist the mapping of (user ID pair) to room ID, so
  that duplicate requests can be suppressed. On receipt of a room creation
  request that the HS thinks there already exists a room for, the invitation to
  join can be rejected if:

  a) the HS believes the sending user is already a member of the room (and
     maybe their HS has forgotten this fact), or

  b) the proposed room has a lexicographically-higher ID than the existing
     room (to resolve true race condition conflicts)

* The room ID for a private 1:1 chat has a special form, determined by
  concatenating the User IDs of both members in a deterministic order, such
  that it doesn't matter which side creates it first; the HSes can just ignore
  (or merge?) received PDUs that create the room twice.
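The second idea can be sketched as follows (illustrative only; the separator and ordering scheme are assumptions, not the specification's actual ID format):

```python
# Illustrative sketch: derive a 1:1 room ID deterministically from the pair
# of User IDs, so both sides compute the same ID whoever creates first.
def direct_room_id(user_a: str, user_b: str) -> str:
    # Sorting makes the result independent of argument order.
    first, second = sorted([user_a, user_b])
    return f"{first}&{second}"
```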
108
docs/model/third-party-id
Normal file
@@ -0,0 +1,108 @@
======================
Third Party Identities
======================

A description of how email addresses, mobile phone numbers and other third
party identifiers can be used to authenticate and discover users in Matrix.


Overview
========

New users need to authenticate their account. An email or SMS text message can
be a convenient form of authentication. Users already have email addresses
and phone numbers for contacts in their address book. They want to communicate
with those contacts in Matrix without manually exchanging a Matrix User ID with
them.

Third Party IDs
---------------

[[TODO(markjh): Describe the format of a 3PID]]


Third Party ID Associations
---------------------------

An Association is a binding between a Matrix User ID and a Third Party ID
(3PID). Each 3PID can be associated with one Matrix User ID at a time.

[[TODO(markjh): JSON format of the association.]]


Verification
------------

An Association must be verified by a trusted Verification Server. Email
addresses and phone numbers can be verified by sending a token to the address,
which a client can supply to the verifier to confirm ownership.
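A token round-trip of this kind might be sketched as follows (illustrative; the specification does not define this flow in detail, and the function names are assumptions):

```python
import secrets

pending: dict[str, str] = {}  # 3PID -> outstanding token

def start_verification(threepid: str) -> str:
    """Generate a token for a 3PID. In practice the token would be sent to
    the address out-of-band (email/SMS), not returned to the caller."""
    token = secrets.token_urlsafe(8)
    pending[threepid] = token
    return token

def confirm_ownership(threepid: str, supplied_token: str) -> bool:
    """The client echoes the token back to prove it controls the address."""
    expected = pending.get(threepid)
    # Constant-time compare to avoid leaking the token via timing.
    return expected is not None and secrets.compare_digest(expected, supplied_token)
```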

An email Verification Server may be capable of verifying all email 3PIDs or may
be restricted to verifying addresses for a particular domain. A phone number
Verification Server may be capable of verifying all phone numbers or may be
restricted to verifying numbers for a given country or phone prefix.

Verification Servers fulfil a similar role to Certificate Authorities in PKI,
so a similar level of vetting should be required before clients trust their
signatures.

A Verification Server may wish to check for existing Associations for a 3PID
before creating a new Association.

Discovery
---------

Users can discover Associations using a trusted Identity Server. Each
Association will be signed by the Identity Server. An Identity Server may store
the entire space of Associations or may delegate to other Identity Servers when
looking up Associations.

Each Association returned from an Identity Server must be signed by a
Verification Server. Clients should check these signatures.

Identity Servers fulfil a similar role to DNS servers.

Privacy
-------

A User may publish the association between their phone number and Matrix User
ID on the Identity Server without publishing the number in their Profile hosted
on their Home Server.

Identity Servers should refrain from publishing reverse mappings and should
take steps, such as rate limiting, to prevent attackers enumerating the space
of mappings.

Federation
==========

Delegation
----------

Verification Servers could delegate signing to another server by issuing a
certificate to that server allowing it to verify and sign a subset of 3PIDs on
its behalf. It would be necessary to provide a language for describing which
subset of 3PIDs that server had authority to validate. Alternatively it could
delegate the verification step to another server but sign the resulting
association itself.

The 3PID space will have a hierarchical structure like DNS so Identity Servers
can delegate lookups to other servers. An Identity Server should be prepared
to host or delegate any valid association within the subset of the 3PIDs it is
responsible for.

Multiple Root Verification Servers
----------------------------------

There can be multiple root Verification Servers and an Association could be
signed by multiple servers if different clients trust different subsets of
the verification servers.

Multiple Root Identity Servers
------------------------------

There can be multiple root Identity Servers. Clients will add each
Association to all root Identity Servers.

[[TODO(markjh): Describe how clients find the list of root Identity Servers]]
64
docs/protocol_examples
Normal file
@@ -0,0 +1,64 @@
PUT /send/abc/ HTTP/1.1
Host: ...
Content-Length: ...
Content-Type: application/json

{
    "origin": "localhost:5000",
    "pdus": [
        {
            "content": {},
            "context": "tng",
            "depth": 12,
            "is_state": false,
            "origin": "localhost:5000",
            "pdu_id": 1404381396854,
            "pdu_type": "feedback",
            "prev_pdus": [
                [
                    "1404381395883",
                    "localhost:6000"
                ]
            ],
            "ts": 1404381427581
        }
    ],
    "prev_ids": [
        "1404381396852"
    ],
    "ts": 1404381427823
}

HTTP/1.1 200 OK
...

======================================

GET /pull/-1/ HTTP/1.1
Host: ...
Content-Length: 0

HTTP/1.1 200 OK
Content-Length: ...
Content-Type: application/json

{
    origin: ...,
    prev_ids: ...,
    data: [
        {
            data_id: ...,
            prev_pdus: [...],
            depth: ...,
            ts: ...,
            context: ...,
            origin: ...,
            content: {
                ...
            }
        },
        ...,
    ]
}
53
docs/python_architecture
Normal file
@@ -0,0 +1,53 @@
= Server to Server =

== Server to Server Stack ==

To use the server to server stack, home servers should only need to interact
with the Messaging layer.

The server to server side of things is designed as 4 distinct layers:

1. Messaging Layer
2. PDU Layer
3. Transaction Layer
4. Transport Layer

Where the bottom (the transport layer) is what talks to the internet via HTTP,
and the top (the messaging layer) talks to the rest of the Home Server with a
domain specific API.

1. Messaging Layer

This is what the rest of the Home Server hits to send messages, join rooms,
etc. It also allows you to register callbacks for when it gets notified by
lower levels that e.g. a new message has been received.

It is responsible for serializing requests to send to the data layer, and for
parsing requests received from the data layer.

2. PDU Layer

This layer handles:
* duplicate pdu_ids - i.e., it makes sure we ignore them.
* responding to requests for a given pdu_id
* responding to requests for all metadata for a given context (i.e. room)
* handling incoming pagination requests

So it has to parse incoming messages to discover which are metadata and which
aren't, and has to correctly clobber existing metadata where appropriate.

For incoming PDUs, it has to check the PDUs it references to see if we have
missed any. If we have, go and ask someone (another home server) for it.

3. Transaction Layer

This layer makes incoming requests idempotent. I.e., it stores which
transaction ids we have seen and what our responses were. If we have already
seen a message with the given transaction id, we do not notify higher levels
but simply respond with the previous response.

The transaction_id is from "PUT /send/<tx_id>/".

It's also responsible for batching PDUs into a single transaction for sending
to remote destinations, so that we only ever have one transaction in flight to
a given destination at any one time.

This is also responsible for answering requests for things after a given set
of transactions, i.e., ask for everything after 'ver' X.
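The idempotency behaviour described above can be sketched as (class and method names are illustrative, not Synapse's actual API):

```python
# Illustrative sketch of the transaction layer's idempotency: remember each
# transaction id and its response, and replay the stored response when the
# same id is seen again instead of re-notifying upper layers.
class TransactionLayer:
    def __init__(self, handler):
        self._handler = handler   # callback into the PDU layer
        self._responses = {}      # transaction_id -> stored response

    def on_send(self, transaction_id, transaction):
        if transaction_id in self._responses:
            # Already processed: do not notify higher levels again.
            return self._responses[transaction_id]
        response = self._handler(transaction)
        self._responses[transaction_id] = response
        return response
```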

4. Transport Layer

This is responsible for starting an HTTP server and hitting the correct
callbacks on the Transaction layer, as well as sending both data and requests
for data.


== Persistence ==

We persist things in a single sqlite3 database. All database queries get run
on a separate, dedicated thread. This means that we only ever have one query
running at a time, making it a lot easier to do things in a safe manner.

The queries are located in the synapse.persistence.transactions module, and
the table information in the synapse.persistence.tables module.
59
docs/server-server/protocol-format
Normal file
@@ -0,0 +1,59 @@
Transaction
===========

Required keys:

============ =================== ===============================================
Key          Type                Description
============ =================== ===============================================
origin       String              DNS name of homeserver making this transaction.
ts           Integer             Timestamp in milliseconds on originating
                                 homeserver when this transaction started.
previous_ids List of Strings     List of transactions that were sent immediately
                                 prior to this transaction.
pdus         List of Objects     List of updates contained in this transaction.
============ =================== ===============================================


PDU
===

Required keys:

============ ================== ================================================
Key          Type               Description
============ ================== ================================================
context      String             Event context identifier.
origin       String             DNS name of homeserver that created this PDU.
pdu_id       String             Unique identifier for PDU within the context for
                                the originating homeserver.
ts           Integer            Timestamp in milliseconds on originating
                                homeserver when this PDU was created.
pdu_type     String             PDU event type.
prev_pdus    List of Pairs      The originating homeserver and PDU ids of the
             of Strings         most recent PDUs the homeserver was aware of for
                                this context when it made this PDU.
depth        Integer            The maximum depth of the previous PDUs plus one.
============ ================== ================================================
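The depth rule can be stated as a one-liner (a sketch; the depth assigned to a PDU with no predecessors is an assumption, as the table does not specify it):

```python
# Illustrative: a new PDU's "depth" is the maximum depth of the PDUs it
# references, plus one.
def new_pdu_depth(prev_pdu_depths: list[int]) -> int:
    # Assumption: a PDU with no predecessors (e.g. a room's first PDU)
    # gets depth 1.
    if not prev_pdu_depths:
        return 1
    return max(prev_pdu_depths) + 1
```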

Keys for state updates:

================== ============ ================================================
Key                Type         Description
================== ============ ================================================
is_state           Boolean      True if this PDU is updating state.
state_key          String       Optional key identifying the updated state
                                within the context.
power_level        Integer      The asserted power level of the user performing
                                the update.
min_update         Integer      The required power level needed to replace this
                                update.
prev_state_id      String       The PDU id of the update this replaces.
prev_state_origin  String       The homeserver of the update this replaces.
user               String       The user updating the state.
================== ============ ================================================
141
docs/server-server/security-threat-model
Normal file
@@ -0,0 +1,141 @@
Overview
========

Scope
-----

This document considers threats specific to the server-to-server federation
Synapse protocol.


Attacker
--------

It is assumed that the attacker can see and manipulate all network traffic
between any of the servers and may be in control of one or more homeservers
participating in the federation protocol.

Threat Model
============

Denial of Service
-----------------

The attacker could attempt to prevent delivery of messages to or from the
victim in order to:

* Disrupt the service or marketing campaign of a commercial competitor.
* Censor a discussion or censor a participant in a discussion.
* Perform general vandalism.

Threat: Resource Exhaustion
~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could cause the victim's server to exhaust a particular resource
(e.g. open TCP connections, CPU, memory, disk storage).

Threat: Unrecoverable Consistency Violations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could send messages which created an unrecoverable "split-brain"
state in the cluster such that the victim's servers could no longer derive a
consistent view of the chatroom state.

Threat: Bad History
~~~~~~~~~~~~~~~~~~~

An attacker could convince the victim to accept invalid messages which the
victim would then include in their view of the chatroom history. Other servers
in the chatroom would reject the invalid messages and potentially reject the
victim's messages as well since they depended on the invalid messages.

Threat: Block Network Traffic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could try to firewall traffic between the victim's server and some
or all of the other servers in the chatroom.

Threat: High Volume of Messages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could send large volumes of messages to a chatroom with the victim,
making the chatroom unusable.

Threat: Banning users without necessary authorisation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could attempt to ban a user from a chatroom without the necessary
authorisation.

Spoofing
--------

An attacker could try to send a message claiming to be from the victim without
the victim having sent the message in order to:

* Impersonate the victim while performing illicit activity.
* Obtain the privileges of the victim.

Threat: Altering Message Contents
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could try to alter the contents of an existing message from the
victim.

Threat: Fake Message "origin" Field
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could try to send a new message purporting to be from the victim
with a phony "origin" field.

Spamming
--------

The attacker could try to send a high volume of solicited or unsolicited
messages to the victim in order to:

* Find victims for scams.
* Market unwanted products.

Threat: Unsolicited Messages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could try to send messages to victims who do not wish to receive
them.

Threat: Abusive Messages
~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could send abusive or threatening messages to the victim.

Spying
------

The attacker could try to access message contents or metadata for messages sent
by the victim or to the victim that were not intended to reach the attacker in
order to:

* Gain sensitive personal or commercial information.
* Impersonate the victim using credentials contained in the messages
  (e.g. password reset messages).
* Discover who the victim was talking to and when.

Threat: Disclosure during Transmission
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could try to expose the message contents or metadata during
transmission between the servers.

Threat: Disclosure to Servers Outside Chatroom
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could try to convince servers within a chatroom to send messages to
a server it controls that was not authorised to be within the chatroom.

Threat: Disclosure to Servers Within Chatroom
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An attacker could take control of a server within a chatroom to expose message
contents or metadata for messages in that room.
177
docs/server-server/specification
Normal file

============================
Synapse Server-to-Server API
============================

A description of the protocol used to communicate between Synapse home servers;
also known as Federation.


Overview
========

The server-server API is a mechanism by which two home servers can exchange
Synapse event messages, both as a real-time push of current events, and as a
historic fetching mechanism to synchronise past history for clients to view. It
uses HTTP connections between each pair of servers involved as the underlying
transport. Messages are exchanged between servers in real-time by active pushing
from each server's HTTP client into the server of the other. Queries to fetch
historic data for the purpose of back-filling scrollback buffers and the like
can also be performed.

 { Synapse entities }                  { Synapse entities }
        ^  |                                  ^  |
 events |  |                                  |  | events
        |  V                                  |  V
 +------------------+                  +------------------+
 |                  |---( HTTP )------>|                  |
 |   Home Server    |                  |   Home Server    |
 |                  |<----( HTTP )-----|                  |
 +------------------+                  +------------------+


Transactions and PDUs
=====================

The communication between home servers is performed by a bidirectional exchange
of messages. These messages are called Transactions, and are encoded as JSON
objects with a dict as the top-level element, passed over HTTP. A Transaction is
meaningful only to the pair of home servers that exchanged it; they are not
globally-meaningful.

Each transaction has an opaque ID and timestamp (UNIX epoch time in
milliseconds) generated by its origin server, an origin and destination server
name, a list of "previous IDs", and a list of PDUs - the actual message payload
that the Transaction carries.

 {"transaction_id":"916d630ea616342b42e98a3be0b74113",
  "ts":1404835423000,
  "origin":"red",
  "destination":"blue",
  "prev_ids":["e1da392e61898be4d2009b9fecce5325"],
  "pdus":[...]}

The "previous IDs" field will contain a list of previous transaction IDs that
the origin server has sent to this destination. Its purpose is to act as a
sequence checking mechanism - the destination server can check whether it has
successfully received that Transaction, or ask for a retransmission if not.
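As a sketch of that sequence check (assuming only that transaction IDs are compared as opaque strings; ``find_missing`` is a hypothetical helper, not part of the codebase):

```python
def find_missing(prev_ids, received_ids):
    # Transaction IDs the origin claims to have sent to this destination,
    # but which the destination has no record of successfully receiving.
    seen = set(received_ids)
    return [txn_id for txn_id in prev_ids if txn_id not in seen]

# The destination can then ask the origin to retransmit the gaps.
missing = find_missing(
    ["e1da392e61898be4d2009b9fecce5325", "916d630ea616342b42e98a3be0b74113"],
    {"916d630ea616342b42e98a3be0b74113"},
)
```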

The "pdus" field of a transaction is a list, containing zero or more PDUs.[*]
Each PDU is itself a dict containing a number of keys, the exact details of
which will vary depending on the type of PDU.

(* Normally the PDU list will be non-empty, but the server should cope with
receiving an "empty" transaction, as this is useful for informing peers of other
transaction IDs they should be aware of. This effectively acts as a push
mechanism to encourage peers to continue to replicate content.)

All PDUs have an ID, a context, a declaration of their type, a list of other PDU
IDs that have been seen recently on that context (regardless of which origin
sent them), and a nested content field containing the actual event content.

[[TODO(paul): Update this structure so that 'pdu_id' is a two-element
[origin,ref] pair like the prev_pdus are]]

 {"pdu_id":"a4ecee13e2accdadf56c1025af232176",
  "context":"#example.green",
  "origin":"green",
  "ts":1404838188000,
  "pdu_type":"m.text",
  "prev_pdus":[["blue","99d16afbc857975916f1d73e49e52b65"]],
  "content":...
  "is_state":false}

In contrast to the transaction layer, it is important to note that the prev_pdus
field of a PDU refers to PDUs that any origin server has sent, rather than
previous IDs that this origin has sent. This list may refer to other PDUs sent
by the same origin as the current one, or other origins.

Because of the distributed nature of participants in a Synapse conversation, it
is impossible to establish a globally-consistent total ordering on the events.
However, by annotating each outbound PDU at its origin with IDs of other PDUs it
has received, a partial ordering can be constructed, allowing causality
relationships to be preserved. A client can then display these messages to the
end-user in some order consistent with their content and ensure that no message
that is semantically a reply to an earlier one is ever displayed before it.
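One way to recover such an order is a standard topological sort over the prev_pdus references. The sketch below assumes PDUs are keyed by (origin, pdu_id) pairs and simply ignores references to PDUs that have not been fetched yet; ``causal_order`` is an illustrative helper, not an API of the server:

```python
from collections import deque

def causal_order(pdus):
    # pdus maps (origin, pdu_id) -> pdu dict; each pdu has a "prev_pdus"
    # list of [origin, pdu_id] pairs. Returns refs in an order where every
    # PDU appears after all of its known predecessors (a partial order;
    # ties between causally unrelated PDUs are broken arbitrarily).
    indegree = {ref: 0 for ref in pdus}
    children = {ref: [] for ref in pdus}
    for ref, pdu in pdus.items():
        for origin, pdu_id in pdu["prev_pdus"]:
            parent = (origin, pdu_id)
            if parent in pdus:  # unknown predecessors are ignored
                indegree[ref] += 1
                children[parent].append(ref)
    queue = deque(sorted(ref for ref, d in indegree.items() if d == 0))
    ordered = []
    while queue:
        ref = queue.popleft()
        ordered.append(ref)
        for child in children[ref]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return ordered
```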

PDUs fall into two main categories: those that deliver Events, and those that
synchronise State. For PDUs that relate to State synchronisation, additional
keys exist to support this:

 {...,
  "is_state":true,
  "state_key":TODO
  "power_level":TODO
  "prev_state_id":TODO
  "prev_state_origin":TODO}

[[TODO(paul): At this point we should probably have a long description of how
State management works, with descriptions of clobbering rules, power levels, etc
etc... But some of that detail is rather up-in-the-air, on the whiteboard, and
so on. This part needs refining. And writing in its own document as the details
relate to the server/system as a whole, not specifically to server-server
federation.]]


Protocol URLs
=============

For active pushing of messages representing live activity "as it happens":

  PUT /send/:transaction_id/
    Body: JSON encoding of a single Transaction

    Response: [[TODO(paul): I don't actually understand what
    ReplicationLayer.on_transaction() is doing here, so I'm not sure what the
    response ought to be]]

The transaction_id path argument will override any ID given in the JSON body.
The destination name will be set to that of the receiving server itself. Each
embedded PDU in the transaction body will be processed.
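A minimal sketch of assembling such a push request (the peer host and port are hypothetical, and ``build_send_request`` is illustrative rather than part of the server):

```python
import json

def build_send_request(base_url, txn):
    # The transaction_id in the path is authoritative and overrides any
    # ID carried inside the JSON body.
    url = "%s/send/%s/" % (base_url, txn["transaction_id"])
    body = json.dumps(txn).encode("utf-8")
    return url, body

url, body = build_send_request(
    "http://blue:8080",  # hypothetical peer address
    {"transaction_id": "916d630ea616342b42e98a3be0b74113",
     "ts": 1404835423000,
     "origin": "red",
     "destination": "blue",
     "prev_ids": ["e1da392e61898be4d2009b9fecce5325"],
     "pdus": []},
)
# The body would then be PUT to the peer, e.g. via an HTTP client.
```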

To fetch a particular PDU:

  GET /pdu/:origin/:pdu_id/

    Response: JSON encoding of a single Transaction containing one PDU

Retrieves a given PDU from the server. The response will contain a single new
Transaction, inside which will be the requested PDU.


To fetch all the state of a given context:

  GET /state/:context/

    Response: JSON encoding of a single Transaction containing multiple PDUs

Retrieves a snapshot of the entire current state of the given context. The
response will contain a single Transaction, inside which will be a list of
PDUs that encode the state.


To paginate events on a given context:

  GET /paginate/:context/
    Query args: v, limit

    Response: JSON encoding of a single Transaction containing multiple PDUs

Retrieves a sliding-window history of previous PDUs that occurred on the
given context. Starting from the PDU ID(s) given in the "v" argument, the
PDUs that preceded it are retrieved, up to a total number given by the
"limit" argument. These are then returned in a new Transaction containing all
of the PDUs.
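For illustration, a pagination URL might be assembled as follows (the base URL is hypothetical; note that the context must be percent-encoded, since contexts such as "#example.green" begin with "#"):

```python
from urllib.parse import quote, urlencode

def paginate_url(base_url, context, versions, limit):
    # "v" may be given several times, once per starting PDU ID; "limit"
    # caps how many preceding PDUs are returned.
    query = urlencode([("v", v) for v in versions] + [("limit", str(limit))])
    return "%s/paginate/%s/?%s" % (base_url, quote(context, safe=""), query)
```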

To stream all the events:

  GET /pull/
    Query args: origin, v

    Response: JSON encoding of a single Transaction consisting of multiple PDUs

Retrieves all of the transactions later than any version given by the "v"
arguments. [[TODO(paul): I'm not sure what the "origin" argument does because
I think at some point in the code it's got swapped around.]]

271
docs/sphinx/conf.py
Normal file

# -*- coding: utf-8 -*-
#
# Synapse documentation build configuration file, created by
# sphinx-quickstart on Tue Jun 10 17:31:02 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys
import os

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'sphinx.ext.coverage',
    'sphinx.ext.ifconfig',
    'sphinxcontrib.napoleon',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Synapse'
copyright = u'2014, TNG'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = '1.0'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']

# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'Synapsedoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'Synapse.tex', u'Synapse Documentation',
     u'TNG', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'synapse', u'Synapse Documentation',
     [u'TNG'], 1)
]

# If true, show URL addresses after external links.
#man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', 'Synapse', u'Synapse Documentation',
     u'TNG', 'Synapse', 'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False


# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'http://docs.python.org/': None}

napoleon_include_special_with_doc = True
napoleon_use_ivar = True
20
docs/sphinx/index.rst
Normal file

.. Synapse documentation master file, created by
   sphinx-quickstart on Tue Jun 10 17:31:02 2014.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Synapse's documentation!
===================================

Contents:

.. toctree::
   synapse

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

7
docs/sphinx/modules.rst
Normal file

synapse
=======

.. toctree::
   :maxdepth: 4

   synapse
7
docs/sphinx/synapse.api.auth.rst
Normal file

synapse.api.auth module
=======================

.. automodule:: synapse.api.auth
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.constants.rst
Normal file

synapse.api.constants module
============================

.. automodule:: synapse.api.constants
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.dbobjects.rst
Normal file

synapse.api.dbobjects module
============================

.. automodule:: synapse.api.dbobjects
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.errors.rst
Normal file

synapse.api.errors module
=========================

.. automodule:: synapse.api.errors
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.event_stream.rst
Normal file

synapse.api.event_stream module
===============================

.. automodule:: synapse.api.event_stream
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.events.factory.rst
Normal file

synapse.api.events.factory module
=================================

.. automodule:: synapse.api.events.factory
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.events.room.rst
Normal file

synapse.api.events.room module
==============================

.. automodule:: synapse.api.events.room
    :members:
    :undoc-members:
    :show-inheritance:
18
docs/sphinx/synapse.api.events.rst
Normal file

synapse.api.events package
==========================

Submodules
----------

.. toctree::

   synapse.api.events.factory
   synapse.api.events.room

Module contents
---------------

.. automodule:: synapse.api.events
    :members:
    :undoc-members:
    :show-inheritance:
7
docs/sphinx/synapse.api.handlers.events.rst
Normal file

synapse.api.handlers.events module
==================================

.. automodule:: synapse.api.handlers.events
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.handlers.factory.rst
Normal file

synapse.api.handlers.factory module
===================================

.. automodule:: synapse.api.handlers.factory
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.handlers.federation.rst
Normal file

synapse.api.handlers.federation module
======================================

.. automodule:: synapse.api.handlers.federation
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.handlers.register.rst
Normal file

synapse.api.handlers.register module
====================================

.. automodule:: synapse.api.handlers.register
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.handlers.room.rst
Normal file

synapse.api.handlers.room module
================================

.. automodule:: synapse.api.handlers.room
    :members:
    :undoc-members:
    :show-inheritance:
21
docs/sphinx/synapse.api.handlers.rst
Normal file

synapse.api.handlers package
============================

Submodules
----------

.. toctree::

   synapse.api.handlers.events
   synapse.api.handlers.factory
   synapse.api.handlers.federation
   synapse.api.handlers.register
   synapse.api.handlers.room

Module contents
---------------

.. automodule:: synapse.api.handlers
    :members:
    :undoc-members:
    :show-inheritance:
7
docs/sphinx/synapse.api.notifier.rst
Normal file

synapse.api.notifier module
===========================

.. automodule:: synapse.api.notifier
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.register_events.rst
Normal file

synapse.api.register_events module
==================================

.. automodule:: synapse.api.register_events
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.room_events.rst
Normal file

synapse.api.room_events module
==============================

.. automodule:: synapse.api.room_events
    :members:
    :undoc-members:
    :show-inheritance:
30
docs/sphinx/synapse.api.rst
Normal file

synapse.api package
===================

Subpackages
-----------

.. toctree::

   synapse.api.events
   synapse.api.handlers
   synapse.api.streams

Submodules
----------

.. toctree::

   synapse.api.auth
   synapse.api.constants
   synapse.api.errors
   synapse.api.notifier
   synapse.api.storage

Module contents
---------------

.. automodule:: synapse.api
    :members:
    :undoc-members:
    :show-inheritance:
7
docs/sphinx/synapse.api.server.rst
Normal file

synapse.api.server module
=========================

.. automodule:: synapse.api.server
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.storage.rst
Normal file

synapse.api.storage module
==========================

.. automodule:: synapse.api.storage
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.stream.rst
Normal file

synapse.api.stream module
=========================

.. automodule:: synapse.api.stream
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.api.streams.event.rst
Normal file

synapse.api.streams.event module
================================

.. automodule:: synapse.api.streams.event
    :members:
    :undoc-members:
    :show-inheritance:
17
docs/sphinx/synapse.api.streams.rst
Normal file

synapse.api.streams package
===========================

Submodules
----------

.. toctree::

   synapse.api.streams.event

Module contents
---------------

.. automodule:: synapse.api.streams
    :members:
    :undoc-members:
    :show-inheritance:
7
docs/sphinx/synapse.app.homeserver.rst
Normal file

synapse.app.homeserver module
=============================

.. automodule:: synapse.app.homeserver
    :members:
    :undoc-members:
    :show-inheritance:

17
docs/sphinx/synapse.app.rst
Normal file

synapse.app package
===================

Submodules
----------

.. toctree::

   synapse.app.homeserver

Module contents
---------------

.. automodule:: synapse.app
    :members:
    :undoc-members:
    :show-inheritance:

10
docs/sphinx/synapse.db.rst
Normal file

synapse.db package
==================

Module contents
---------------

.. automodule:: synapse.db
    :members:
    :undoc-members:
    :show-inheritance:
7
docs/sphinx/synapse.federation.handler.rst
Normal file

synapse.federation.handler module
=================================

.. automodule:: synapse.federation.handler
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.federation.messaging.rst
Normal file

synapse.federation.messaging module
===================================

.. automodule:: synapse.federation.messaging
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.federation.pdu_codec.rst
Normal file

synapse.federation.pdu_codec module
===================================

.. automodule:: synapse.federation.pdu_codec
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.federation.persistence.rst
Normal file

synapse.federation.persistence module
=====================================

.. automodule:: synapse.federation.persistence
    :members:
    :undoc-members:
    :show-inheritance:

7
docs/sphinx/synapse.federation.replication.rst
Normal file

synapse.federation.replication module
=====================================

.. automodule:: synapse.federation.replication
    :members:
    :undoc-members:
    :show-inheritance:
22
docs/sphinx/synapse.federation.rst
Normal file
22
docs/sphinx/synapse.federation.rst
Normal file
@ -0,0 +1,22 @@
|
|||||||
|
synapse.federation package
|
||||||
|
==========================
|
||||||
|
|
||||||
|
Submodules
|
||||||
|
----------
|
||||||
|
|
||||||
|
.. toctree::
|
||||||
|
|
||||||
|
synapse.federation.handler
|
||||||
|
synapse.federation.pdu_codec
|
||||||
|
synapse.federation.persistence
|
||||||
|
synapse.federation.replication
|
||||||
|
synapse.federation.transport
|
||||||
|
synapse.federation.units
|
||||||
|
|
||||||
|
Module contents
|
||||||
|
---------------
|
||||||
|
|
||||||
|
.. automodule:: synapse.federation
|
||||||
|
:members:
|
||||||
|
:undoc-members:
|
||||||
|
:show-inheritance:
|
7
docs/sphinx/synapse.federation.transport.rst
Normal file
7
docs/sphinx/synapse.federation.transport.rst
Normal file
@ -0,0 +1,7 @@
|
|||||||
|
synapse.federation.transport module
|
||||||
|
===================================
|
||||||
|
|
||||||
|
.. automodule:: synapse.federation.transport
|
||||||
|
:members:
|
||||||
|
:undoc-members:
|
||||||
|
:show-inheritance:
|
7
docs/sphinx/synapse.federation.units.rst
Normal file
7
docs/sphinx/synapse.federation.units.rst
Normal file
@ -0,0 +1,7 @@
|
|||||||
|
synapse.federation.units module
|
||||||
|
===============================
|
||||||
|
|
||||||
|
.. automodule:: synapse.federation.units
|
||||||
|
:members:
|
||||||
|
:undoc-members:
|
||||||
|
:show-inheritance:
|
19	docs/sphinx/synapse.persistence.rst	Normal file
@@ -0,0 +1,19 @@
synapse.persistence package
===========================

Submodules
----------

.. toctree::

   synapse.persistence.service
   synapse.persistence.tables
   synapse.persistence.transactions

Module contents
---------------

.. automodule:: synapse.persistence
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.persistence.service.rst	Normal file
@@ -0,0 +1,7 @@
synapse.persistence.service module
==================================

.. automodule:: synapse.persistence.service
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.persistence.tables.rst	Normal file
@@ -0,0 +1,7 @@
synapse.persistence.tables module
=================================

.. automodule:: synapse.persistence.tables
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.persistence.transactions.rst	Normal file
@@ -0,0 +1,7 @@
synapse.persistence.transactions module
=======================================

.. automodule:: synapse.persistence.transactions
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.rest.base.rst	Normal file
@@ -0,0 +1,7 @@
synapse.rest.base module
========================

.. automodule:: synapse.rest.base
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.rest.events.rst	Normal file
@@ -0,0 +1,7 @@
synapse.rest.events module
==========================

.. automodule:: synapse.rest.events
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.rest.register.rst	Normal file
@@ -0,0 +1,7 @@
synapse.rest.register module
============================

.. automodule:: synapse.rest.register
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.rest.room.rst	Normal file
@@ -0,0 +1,7 @@
synapse.rest.room module
========================

.. automodule:: synapse.rest.room
    :members:
    :undoc-members:
    :show-inheritance:
20	docs/sphinx/synapse.rest.rst	Normal file
@@ -0,0 +1,20 @@
synapse.rest package
====================

Submodules
----------

.. toctree::

   synapse.rest.base
   synapse.rest.events
   synapse.rest.register
   synapse.rest.room

Module contents
---------------

.. automodule:: synapse.rest
    :members:
    :undoc-members:
    :show-inheritance:

30	docs/sphinx/synapse.rst	Normal file
@@ -0,0 +1,30 @@
synapse package
===============

Subpackages
-----------

.. toctree::

   synapse.api
   synapse.app
   synapse.federation
   synapse.persistence
   synapse.rest
   synapse.util

Submodules
----------

.. toctree::

   synapse.server
   synapse.state

Module contents
---------------

.. automodule:: synapse
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.server.rst	Normal file
@@ -0,0 +1,7 @@
synapse.server module
=====================

.. automodule:: synapse.server
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.state.rst	Normal file
@@ -0,0 +1,7 @@
synapse.state module
====================

.. automodule:: synapse.state
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.util.async.rst	Normal file
@@ -0,0 +1,7 @@
synapse.util.async module
=========================

.. automodule:: synapse.util.async
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.util.dbutils.rst	Normal file
@@ -0,0 +1,7 @@
synapse.util.dbutils module
===========================

.. automodule:: synapse.util.dbutils
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.util.http.rst	Normal file
@@ -0,0 +1,7 @@
synapse.util.http module
========================

.. automodule:: synapse.util.http
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.util.lockutils.rst	Normal file
@@ -0,0 +1,7 @@
synapse.util.lockutils module
=============================

.. automodule:: synapse.util.lockutils
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.util.logutils.rst	Normal file
@@ -0,0 +1,7 @@
synapse.util.logutils module
============================

.. automodule:: synapse.util.logutils
    :members:
    :undoc-members:
    :show-inheritance:

21	docs/sphinx/synapse.util.rst	Normal file
@@ -0,0 +1,21 @@
synapse.util package
====================

Submodules
----------

.. toctree::

   synapse.util.async
   synapse.util.http
   synapse.util.lockutils
   synapse.util.logutils
   synapse.util.stringutils

Module contents
---------------

.. automodule:: synapse.util
    :members:
    :undoc-members:
    :show-inheritance:

7	docs/sphinx/synapse.util.stringutils.rst	Normal file
@@ -0,0 +1,7 @@
synapse.util.stringutils module
===============================

.. automodule:: synapse.util.stringutils
    :members:
    :undoc-members:
    :show-inheritance:
86	docs/terminology	Normal file
@@ -0,0 +1,86 @@
===========
Terminology
===========

A list of definitions of specific terminology used among these documents.
These terms were originally taken from the server-server documentation, and may
not currently match the exact meanings used in other places; though as a
medium-term goal we should encourage the unification of this terminology.


Terms
=====

Context:
  A single human-level entity of interest (currently, a chat room)

EDU (Ephemeral Data Unit):
  A message that relates directly to a given pair of home servers that are
  exchanging it. EDUs are short-lived messages that relate only to a single
  pair of servers; they are not persisted for a long time and are not forwarded
  on to other servers. Because of this, they have no internal ID nor a chain of
  references to previous EDUs.

Event:
  A record of activity that records a single thing that happened to a context
  (currently, a chat room). These are the "chat messages" that Synapse makes
  available.
  [[NOTE(paul): The current server-server implementation calls these simply
  "messages" but the term is too ambiguous here; I've called them Events]]

Pagination:
  The process of synchronising historic state from one home server to another,
  to backfill the event storage so that scrollback can be presented to the
  client(s).

PDU (Persistent Data Unit):
  A message that relates to a single context, irrespective of the server that
  is communicating it. PDUs either encode a single Event, or a single State
  change. A PDU is referred to by its PDU ID: the pair of its origin server
  and a local reference from that server.

PDU ID:
  The pair of PDU Origin and PDU Reference, that together globally uniquely
  refers to a specific PDU.

PDU Origin:
  The name of the origin server that generated a given PDU. This may not be the
  server from which it has been received, due to the way PDUs are copied around
  from server to server. The origin always records the original server that
  created it.

PDU Reference:
  A local ID used to refer to a specific PDU from a given origin server. These
  references are opaque at the protocol level, but may optionally have some
  structured meaning within a given origin server or implementation.

Presence:
  The concept of whether a user is currently online, how available they declare
  they are, and so on. See also: doc/model/presence

Profile:
  A set of metadata about a user, such as a display name, provided for the
  benefit of other users. See also: doc/model/profiles

Room ID:
  An opaque string (of as-yet undecided format) that identifies a particular
  room and is used in PDUs referring to it.

Room Alias:
  A human-readable string of the form #name:some.domain that users can use as a
  pointer to identify a room; a Directory Server will map this to its Room ID.

State:
  A set of metadata maintained about a Context, which is replicated among the
  servers in addition to the history of Events.

User ID:
  A string of the form @localpart:domain.name that identifies a user for
  wire-protocol purposes. The localpart is meaningless outside of a particular
  home server. This takes a human-readable form that end-users can use directly
  if they so wish, avoiding the need for 3PIDs.

Transaction:
  A message which relates to the communication between a given pair of servers.
  A transaction contains possibly-empty lists of PDUs and EDUs.
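The PDU ID and User ID shapes defined above can be sketched directly in code. This is an illustrative sketch only; `PduId` and `parse_user_id` are hypothetical names, not Synapse's actual types:

```python
from typing import NamedTuple


class PduId(NamedTuple):
    """A PDU is globally identified by the (PDU Origin, PDU Reference) pair."""
    origin: str     # name of the server that created the PDU
    reference: str  # opaque, locally-assigned reference on that server


def parse_user_id(user_id):
    """Split an '@localpart:domain.name' User ID into (localpart, domain)."""
    if not user_id.startswith("@") or ":" not in user_id:
        raise ValueError("expected @localpart:domain.name form")
    localpart, domain = user_id[1:].split(":", 1)
    return localpart, domain


print(parse_user_id("@paul:matrix.org"))  # ('paul', 'matrix.org')
```

As the glossary notes, the localpart is only meaningful on its own home server; the domain is what routing cares about.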
11	docs/versioning	Normal file
@@ -0,0 +1,11 @@
Versioning is hard for paginating backwards because of the number of Home Servers involved.

The way we solve this is by doing versioning as an acyclic directed graph of PDUs. For pagination purposes, this is done on a per-context basis.
When we send a PDU we include all PDUs that have been received for that context that haven't been subsequently listed in a later PDU. The trivial case is a simple list of PDUs, e.g. A <- B <- C. However, if two servers send out a PDU at the same time, both B and C would point at A - a later PDU would then list both B and C.

Problems with opaque version strings:
- How do you do clustering without mandating that a cluster can only have one transaction in flight to a given remote home server at a time?
  If you have multiple transactions sent at once, then you might drop one transaction, receive another with a version that is later than the dropped transaction, at which point ARGH WE LOST A TRANSACTION.
- How do you do pagination? A version string defines a point in a stream w.r.t. a single home server, not a point in the context.

We only need to store the ends of the directed graph; we DO NOT need to do the whole one-table-of-nodes-and-one-of-edges thing.
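The "store only the ends of the graph" idea above can be sketched with a per-context set of ends (forward extremities). A minimal illustration under assumed names (`send_pdu`, `receive_pdu` are hypothetical helpers, not the real wire format):

```python
def send_pdu(ends, new_id):
    """Attach the current graph ends as previous-PDU references;
    the new PDU then becomes the only end."""
    prev = sorted(ends)
    ends.clear()
    ends.add(new_id)
    return prev


def receive_pdu(ends, pdu_id, prev_ids):
    """A received PDU supersedes every end it lists as a predecessor."""
    ends.difference_update(prev_ids)
    ends.add(pdu_id)


ends = {"A"}
receive_pdu(ends, "B", ["A"])  # two servers extend A concurrently...
receive_pdu(ends, "C", ["A"])
assert ends == {"B", "C"}      # ...so the context now has two ends
assert send_pdu(ends, "D") == ["B", "C"]  # D lists both, healing the fork
assert ends == {"D"}
```

Note how the concurrent-send case from the text (B and C both pointing at A) falls out naturally, with no per-server version strings involved.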
154	example/cursesio.py	Normal file
@@ -0,0 +1,154 @@
import curses
import curses.wrapper
from curses.ascii import isprint

from twisted.internet import reactor


class CursesStdIO():
    def __init__(self, stdscr, callback=None):
        self.statusText = "Synapse test app -"
        self.searchText = ''
        self.stdscr = stdscr

        self.logLine = ''

        self.callback = callback

        self._setup()

    def _setup(self):
        self.stdscr.nodelay(1)  # Make non blocking

        self.rows, self.cols = self.stdscr.getmaxyx()
        self.lines = []

        curses.use_default_colors()

        self.paintStatus(self.statusText)
        self.stdscr.refresh()

    def set_callback(self, callback):
        self.callback = callback

    def fileno(self):
        """ We want to select on FD 0 """
        return 0

    def connectionLost(self, reason):
        self.close()

    def print_line(self, text):
        """ add a line to the internal list of lines """

        self.lines.append(text)
        self.redraw()

    def print_log(self, text):
        self.logLine = text
        self.redraw()

    def redraw(self):
        """ method for redisplaying lines
            based on internal list of lines """

        self.stdscr.clear()
        self.paintStatus(self.statusText)
        i = 0
        index = len(self.lines) - 1
        while i < (self.rows - 3) and index >= 0:
            self.stdscr.addstr(self.rows - 3 - i, 0, self.lines[index],
                               curses.A_NORMAL)
            i = i + 1
            index = index - 1

        self.printLogLine(self.logLine)

        self.stdscr.refresh()

    def paintStatus(self, text):
        if len(text) > self.cols:
            raise RuntimeError("TextTooLongError")

        self.stdscr.addstr(
            self.rows - 2, 0,
            text + ' ' * (self.cols - len(text)),
            curses.A_STANDOUT)

    def printLogLine(self, text):
        self.stdscr.addstr(
            0, 0,
            text + ' ' * (self.cols - len(text)),
            curses.A_STANDOUT)

    def doRead(self):
        """ Input is ready! """
        curses.noecho()
        c = self.stdscr.getch()  # read a character

        if c == curses.KEY_BACKSPACE:
            self.searchText = self.searchText[:-1]

        elif c == curses.KEY_ENTER or c == 10:
            text = self.searchText
            self.searchText = ''

            self.print_line(">> %s" % text)

            try:
                if self.callback:
                    self.callback.on_line(text)
            except Exception as e:
                self.print_line(str(e))

            self.stdscr.refresh()

        elif isprint(c):
            if len(self.searchText) == self.cols - 2:
                return
            self.searchText = self.searchText + chr(c)

        self.stdscr.addstr(self.rows - 1, 0,
                           self.searchText + (' ' * (
                               self.cols - len(self.searchText) - 2)))

        self.paintStatus(self.statusText + ' %d' % len(self.searchText))
        self.stdscr.move(self.rows - 1, len(self.searchText))
        self.stdscr.refresh()

    def logPrefix(self):
        return "CursesStdIO"

    def close(self):
        """ clean up """

        curses.nocbreak()
        self.stdscr.keypad(0)
        curses.echo()
        curses.endwin()


class Callback(object):

    def __init__(self, stdio):
        self.stdio = stdio

    def on_line(self, text):
        self.stdio.print_line(text)


def main(stdscr):
    screen = CursesStdIO(stdscr)  # create Screen object

    callback = Callback(screen)

    screen.set_callback(callback)

    stdscr.refresh()
    reactor.addReader(screen)
    reactor.run()
    screen.close()


if __name__ == '__main__':
    curses.wrapper(main)
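cursesio.py plugs into Twisted because `reactor.addReader` accepts any object exposing `fileno()` plus a read callback. The same pattern can be demonstrated with the stdlib `selectors` module, no Twisted or curses required (a sketch; `PipeReader` is a hypothetical stand-in for `CursesStdIO`):

```python
import os
import selectors


class PipeReader:
    """Minimal reader in the CursesStdIO style: anything with a fileno()
    can be multiplexed by a select loop and called back when readable."""

    def __init__(self, fd):
        self.fd = fd
        self.received = []

    def fileno(self):
        return self.fd  # CursesStdIO returns 0 to select on stdin

    def do_read(self):
        self.received.append(os.read(self.fd, 1024))


r, w = os.pipe()
reader = PipeReader(r)

sel = selectors.DefaultSelector()
sel.register(reader, selectors.EVENT_READ)  # registers via reader.fileno()

os.write(w, b"hello")
for key, _events in sel.select(timeout=1):
    key.fileobj.do_read()  # dispatch, as the reactor does with doRead()

assert reader.received == [b"hello"]
```

This is why `CursesStdIO.fileno()` returns 0: the reactor then watches stdin and invokes `doRead()` whenever a keystroke is available, without blocking the event loop.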
380	example/test_messaging.py	Normal file
@@ -0,0 +1,380 @@
# -*- coding: utf-8 -*-

""" This is an example of using the server to server implementation to do a
basic chat style thing. It accepts commands from stdin and outputs to stdout.

It assumes that ucids are of the form <user>@<domain>, and uses <domain> as
the address of the remote home server to hit.

Usage:
    python test_messaging.py <port>

Currently assumes the local address is localhost:<port>

"""


from synapse.federation import (
    ReplicationHandler
)

from synapse.federation.units import Pdu

from synapse.util import origin_from_ucid

from synapse.app.homeserver import SynapseHomeServer

#from synapse.util.logutils import log_function

from twisted.internet import reactor, defer
from twisted.python import log

import argparse
import json
import logging
import os
import re

import cursesio
import curses.wrapper


logger = logging.getLogger("example")


def exception_errback(failure):
    logging.exception(failure)


class InputOutput(object):
    """ This is responsible for basic I/O so that a user can interact with
    the example app.
    """

    def __init__(self, screen, user):
        self.screen = screen
        self.user = user

    def set_home_server(self, server):
        self.server = server

    def on_line(self, line):
        """ This is where we process commands.
        """

        try:
            m = re.match(r"^join (\S+)$", line)
            if m:
                # The `sender` wants to join a room.
                room_name, = m.groups()
                self.print_line("%s joining %s" % (self.user, room_name))
                self.server.join_room(room_name, self.user, self.user)
                #self.print_line("OK.")
                return

            m = re.match(r"^invite (\S+) (\S+)$", line)
            if m:
                # `sender` wants to invite someone to a room
                room_name, invitee = m.groups()
                self.print_line("%s invited to %s" % (invitee, room_name))
                self.server.invite_to_room(room_name, self.user, invitee)
                #self.print_line("OK.")
                return

            m = re.match(r"^send (\S+) (.*)$", line)
            if m:
                # `sender` wants to message a room
                room_name, body = m.groups()
                self.print_line("%s send to %s" % (self.user, room_name))
                self.server.send_message(room_name, self.user, body)
                #self.print_line("OK.")
                return

            m = re.match(r"^paginate (\S+)$", line)
            if m:
                # we want to paginate a room
                room_name, = m.groups()
                self.print_line("paginate %s" % room_name)
                self.server.paginate(room_name)
                return

            self.print_line("Unrecognized command")

        except Exception as e:
            logger.exception(e)

    def print_line(self, text):
        self.screen.print_line(text)

    def print_log(self, text):
        self.screen.print_log(text)


class IOLoggerHandler(logging.Handler):

    def __init__(self, io):
        logging.Handler.__init__(self)
        self.io = io

    def emit(self, record):
        if record.levelno < logging.WARN:
            return

        msg = self.format(record)
        self.io.print_log(msg)


class Room(object):
    """ Used to store (in memory) the current membership state of a room, and
    which home servers we should send PDUs associated with the room to.
    """
    def __init__(self, room_name):
        self.room_name = room_name
        self.invited = set()
        self.participants = set()
        self.servers = set()

        self.oldest_server = None

        self.have_got_metadata = False

    def add_participant(self, participant):
        """ Someone has joined the room
        """
        self.participants.add(participant)
        self.invited.discard(participant)

        server = origin_from_ucid(participant)
        self.servers.add(server)

        if not self.oldest_server:
            self.oldest_server = server

    def add_invited(self, invitee):
        """ Someone has been invited to the room
        """
        self.invited.add(invitee)
        self.servers.add(origin_from_ucid(invitee))

class HomeServer(ReplicationHandler):
    """ A very basic home server implementation that allows people to join a
    room and then invite other people.
    """
    def __init__(self, server_name, replication_layer, output):
        self.server_name = server_name
        self.replication_layer = replication_layer
        self.replication_layer.set_handler(self)

        self.joined_rooms = {}

        self.output = output

    def on_receive_pdu(self, pdu):
        """ We just received a PDU
        """
        pdu_type = pdu.pdu_type

        if pdu_type == "sy.room.message":
            self._on_message(pdu)
        elif pdu_type == "sy.room.member" and "membership" in pdu.content:
            if pdu.content["membership"] == "join":
                self._on_join(pdu.context, pdu.state_key)
            elif pdu.content["membership"] == "invite":
                self._on_invite(pdu.origin, pdu.context, pdu.state_key)
        else:
            self.output.print_line("#%s (unrec) %s = %s" %
                (pdu.context, pdu.pdu_type, json.dumps(pdu.content))
            )

    #def on_state_change(self, pdu):
        ##self.output.print_line("#%s (state) %s *** %s" %
                ##(pdu.context, pdu.state_key, pdu.pdu_type)
            ##)

        #if "joinee" in pdu.content:
            #self._on_join(pdu.context, pdu.content["joinee"])
        #elif "invitee" in pdu.content:
            #self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"])

    def _on_message(self, pdu):
        """ We received a message
        """
        self.output.print_line("#%s %s %s" %
            (pdu.context, pdu.content["sender"], pdu.content["body"])
        )

    def _on_join(self, context, joinee):
        """ Someone has joined a room, either a remote user or a local user
        """
        room = self._get_or_create_room(context)
        room.add_participant(joinee)

        self.output.print_line("#%s %s %s" %
            (context, joinee, "*** JOINED")
        )

    def _on_invite(self, origin, context, invitee):
        """ Someone has been invited
        """
        room = self._get_or_create_room(context)
        room.add_invited(invitee)

        self.output.print_line("#%s %s %s" %
            (context, invitee, "*** INVITED")
        )

        if not room.have_got_metadata and origin != self.server_name:
            logger.debug("Get room state")
            self.replication_layer.get_state_for_context(origin, context)
            room.have_got_metadata = True

    @defer.inlineCallbacks
    def send_message(self, room_name, sender, body):
        """ Send a message to a room!
        """
        destinations = yield self.get_servers_for_context(room_name)

        try:
            yield self.replication_layer.send_pdu(
                Pdu.create_new(
                    context=room_name,
                    pdu_type="sy.room.message",
                    content={"sender": sender, "body": body},
                    origin=self.server_name,
                    destinations=destinations,
                )
            )
        except Exception as e:
            logger.exception(e)

    @defer.inlineCallbacks
    def join_room(self, room_name, sender, joinee):
        """ Join a room!
        """
        self._on_join(room_name, joinee)

        destinations = yield self.get_servers_for_context(room_name)

        try:
            pdu = Pdu.create_new(
                context=room_name,
                pdu_type="sy.room.member",
                is_state=True,
                state_key=joinee,
                content={"membership": "join"},
                origin=self.server_name,
                destinations=destinations,
            )
            yield self.replication_layer.send_pdu(pdu)
        except Exception as e:
            logger.exception(e)

    @defer.inlineCallbacks
    def invite_to_room(self, room_name, sender, invitee):
        """ Invite someone to a room!
        """
        self._on_invite(self.server_name, room_name, invitee)

        destinations = yield self.get_servers_for_context(room_name)

        try:
            yield self.replication_layer.send_pdu(
                Pdu.create_new(
                    context=room_name,
                    is_state=True,
                    pdu_type="sy.room.member",
                    state_key=invitee,
                    content={"membership": "invite"},
                    origin=self.server_name,
                    destinations=destinations,
                )
            )
        except Exception as e:
            logger.exception(e)

    def paginate(self, room_name, limit=5):
        room = self.joined_rooms.get(room_name)

        if not room:
            return

        dest = room.oldest_server

        return self.replication_layer.paginate(dest, room_name, limit)

    def _get_room_remote_servers(self, room_name):
        # default to an empty Room so an unknown room doesn't raise
        room = self.joined_rooms.setdefault(room_name, Room(room_name))
        return [i for i in room.servers]

    def _get_or_create_room(self, room_name):
        return self.joined_rooms.setdefault(room_name, Room(room_name))

    def get_servers_for_context(self, context):
        return defer.succeed(
            self.joined_rooms.setdefault(context, Room(context)).servers
        )


def main(stdscr):
    parser = argparse.ArgumentParser()
    parser.add_argument('user', type=str)
    parser.add_argument('-v', '--verbose', action='count')
    args = parser.parse_args()

    user = args.user
    server_name = origin_from_ucid(user)

    ## Set up logging ##

    root_logger = logging.getLogger()

    formatter = logging.Formatter('%(asctime)s - %(name)s - %(lineno)d - '
                                  '%(levelname)s - %(message)s')
    if not os.path.exists("logs"):
        os.makedirs("logs")
    fh = logging.FileHandler("logs/%s" % user)
    fh.setFormatter(formatter)

    root_logger.addHandler(fh)
    root_logger.setLevel(logging.DEBUG)

    # Hack: The only way to get it to stop logging to sys.stderr :(
    log.theLogPublisher.observers = []
    observer = log.PythonLoggingObserver()
    observer.start()

    ## Set up synapse server

    curses_stdio = cursesio.CursesStdIO(stdscr)
    input_output = InputOutput(curses_stdio, user)

    curses_stdio.set_callback(input_output)

    app_hs = SynapseHomeServer(server_name, db_name="dbs/%s" % user)
    replication = app_hs.get_replication_layer()

    hs = HomeServer(server_name, replication, curses_stdio)

    input_output.set_home_server(hs)

    ## Add input_output logger
    io_logger = IOLoggerHandler(input_output)
    io_logger.setFormatter(formatter)
    root_logger.addHandler(io_logger)

    ## Start! ##

    try:
        port = int(server_name.split(":")[1])
    except Exception:
        port = 12345

    app_hs.get_http_server().start_listening(port)

    reactor.addReader(curses_stdio)
|
||||||
|
|
||||||
|
reactor.run()
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
curses.wrapper(main)
|
137
graph/graph.py
Normal file
137
graph/graph.py
Normal file
@ -0,0 +1,137 @@
import sqlite3
import pydot
import cgi
import json
import datetime
import argparse
import urllib2


def make_name(pdu_id, origin):
    return "%s@%s" % (pdu_id, origin)


def make_graph(pdus, room, filename_prefix):
    pdu_map = {}
    node_map = {}

    origins = set()
    colors = set(("red", "green", "blue", "yellow", "purple"))

    for pdu in pdus:
        origins.add(pdu.get("origin"))

    color_map = {color: color for color in colors if color in origins}
    colors -= set(color_map.values())

    color_map[None] = "black"

    for o in origins:
        if o in color_map:
            continue
        try:
            c = colors.pop()
            color_map[o] = c
        except KeyError:  # no colours left in the set
            print "Ran out of colours!"
            color_map[o] = "black"

    graph = pydot.Dot(graph_name="Test")

    for pdu in pdus:
        name = make_name(pdu.get("pdu_id"), pdu.get("origin"))
        pdu_map[name] = pdu

        t = datetime.datetime.fromtimestamp(
            float(pdu["ts"]) / 1000
        ).strftime('%Y-%m-%d %H:%M:%S,%f')

        label = (
            "<"
            "<b>%(name)s </b><br/>"
            "Type: <b>%(type)s </b><br/>"
            "State key: <b>%(state_key)s </b><br/>"
            "Content: <b>%(content)s </b><br/>"
            "Time: <b>%(time)s </b><br/>"
            "Depth: <b>%(depth)s </b><br/>"
            ">"
        ) % {
            "name": name,
            "type": pdu.get("pdu_type"),
            "state_key": pdu.get("state_key"),
            "content": cgi.escape(json.dumps(pdu.get("content")), quote=True),
            "time": t,
            "depth": pdu.get("depth"),
        }

        node = pydot.Node(
            name=name,
            label=label,
            color=color_map[pdu.get("origin")]
        )
        node_map[name] = node
        graph.add_node(node)

    for pdu in pdus:
        start_name = make_name(pdu.get("pdu_id"), pdu.get("origin"))
        for i, o in pdu.get("prev_pdus", []):
            end_name = make_name(i, o)

            if end_name not in node_map:
                print "%s not in nodes" % end_name
                continue

            edge = pydot.Edge(node_map[start_name], node_map[end_name])
            graph.add_edge(edge)

        # Add prev_state edges, if they exist
        if pdu.get("prev_state_id") and pdu.get("prev_state_origin"):
            prev_state_name = make_name(
                pdu.get("prev_state_id"), pdu.get("prev_state_origin")
            )

            if prev_state_name in node_map:
                state_edge = pydot.Edge(
                    node_map[start_name], node_map[prev_state_name],
                    style='dotted'
                )
                graph.add_edge(state_edge)

    graph.write('%s.dot' % filename_prefix, format='raw', prog='dot')
    graph.write_png("%s.png" % filename_prefix, prog='dot')
    graph.write_svg("%s.svg" % filename_prefix, prog='dot')


def get_pdus(host, room):
    transaction = json.loads(
        urllib2.urlopen(
            "http://%s/context/%s/" % (host, room)
        ).read()
    )

    return transaction["pdus"]


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate a PDU graph for a given room by talking "
                    "to the given homeserver to get the list of PDUs. \n"
                    "Requires pydot."
    )
    parser.add_argument(
        "-p", "--prefix", dest="prefix",
        help="String to prefix output files with"
    )
    parser.add_argument('host')
    parser.add_argument('room')

    args = parser.parse_args()

    host = args.host
    room = args.room
    prefix = args.prefix if args.prefix else "%s_graph" % (room)

    pdus = get_pdus(host, room)

    make_graph(pdus, room, prefix)
10
setup.cfg
Normal file
10
setup.cfg
Normal file
@ -0,0 +1,10 @@
[build_sphinx]
source-dir = docs/sphinx
build-dir = docs/build
all_files = 1

[aliases]
test = trial

[trial]
test_suite = tests
40
setup.py
Normal file
40
setup.py
Normal file
@ -0,0 +1,40 @@
import os
from setuptools import setup, find_packages


# Utility function to read the README file.
# Used for the long_description. It's nice, because now 1) we have a top level
# README file and 2) it's easier to type in the README file than to put a raw
# string in below ...
def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname)).read()

setup(
    name="SynapseHomeServer",
    version="0.1",
    packages=find_packages(exclude=["tests"]),
    description="Reference Synapse Home Server",
    install_requires=[
        "syutil==0.0.1",
        "Twisted>=14.0.0",
        "service_identity>=1.0.0",
        "pyasn1",
        "pynacl",
        "daemonize",
        "py-bcrypt",
    ],
    dependency_links=[
        "git+ssh://git@git.openmarket.com/tng/syutil.git#egg=syutil-0.0.1",
    ],
    setup_requires=[
        "setuptools_trial",
        "setuptools>=1.0.0",  # Needs setuptools that supports git+ssh. It's not obvious when support for this was introduced.
        "mock"
    ],
    include_package_data=True,
    long_description=read("README.rst"),
    entry_points="""
    [console_scripts]
    synapse-homeserver=synapse.app.homeserver:run
    """
)
1
sphinx_api_docs.sh
Normal file
1
sphinx_api_docs.sh
Normal file
@ -0,0 +1 @@
sphinx-apidoc -o docs/sphinx/ synapse/ -ef
16
synapse/__init__.py
Normal file
16
synapse/__init__.py
Normal file
@ -0,0 +1,16 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" This is a reference implementation of a synapse home server.
"""
14
synapse/api/__init__.py
Normal file
14
synapse/api/__init__.py
Normal file
@ -0,0 +1,14 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
164
synapse/api/auth.py
Normal file
164
synapse/api/auth.py
Normal file
@ -0,0 +1,164 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module contains classes for authenticating the user."""
from twisted.internet import defer

from synapse.api.constants import Membership
from synapse.api.errors import AuthError, StoreError
from synapse.api.events.room import (RoomTopicEvent, RoomMemberEvent,
                                     MessageEvent, FeedbackEvent)

import logging

logger = logging.getLogger(__name__)


class Auth(object):

    def __init__(self, hs):
        self.hs = hs
        self.store = hs.get_datastore()

    @defer.inlineCallbacks
    def check(self, event, raises=False):
        """ Checks if this event is correctly authed.

        Returns:
            True if the auth checks pass.
        Raises:
            AuthError if there was a problem authorising this event. This will
            be raised only if raises=True.
        """
        try:
            if event.type in [RoomTopicEvent.TYPE, MessageEvent.TYPE,
                              FeedbackEvent.TYPE]:
                yield self.check_joined_room(event.room_id, event.user_id)
                defer.returnValue(True)
            elif event.type == RoomMemberEvent.TYPE:
                allowed = yield self.is_membership_change_allowed(event)
                defer.returnValue(allowed)
            else:
                raise AuthError(500, "Unknown event type %s" % event.type)
        except AuthError as e:
            logger.info("Event auth check failed on event %s with msg: %s",
                        event, e.msg)
            if raises:
                raise e
            defer.returnValue(False)

    @defer.inlineCallbacks
    def check_joined_room(self, room_id, user_id):
        try:
            member = yield self.store.get_room_member(
                room_id=room_id,
                user_id=user_id
            )
            if not member or member.membership != Membership.JOIN:
                raise AuthError(403, "User %s not in room %s" %
                                (user_id, room_id))
            defer.returnValue(member)
        except AttributeError:
            pass
        defer.returnValue(None)

    @defer.inlineCallbacks
    def is_membership_change_allowed(self, event):
        # does this room even exist
        room = yield self.store.get_room(event.room_id)
        if not room:
            raise AuthError(403, "Room does not exist")

        # get info about the caller
        try:
            caller = yield self.store.get_room_member(
                user_id=event.user_id,
                room_id=event.room_id)
        except StoreError:
            caller = None
        caller_in_room = caller and caller.membership == Membership.JOIN

        # get info about the target
        try:
            target = yield self.store.get_room_member(
                user_id=event.target_user_id,
                room_id=event.room_id)
        except StoreError:
            target = None
        target_in_room = target and target.membership == Membership.JOIN

        membership = event.content["membership"]

        if Membership.INVITE == membership:
            # Invites are valid iff caller is in the room and target isn't.
            if not caller_in_room:  # caller isn't joined
                raise AuthError(403, "You are not in room %s." % event.room_id)
            elif target_in_room:  # the target is already in the room.
                raise AuthError(403, "%s is already in the room." %
                                event.target_user_id)
        elif Membership.JOIN == membership:
            # Joins are valid iff caller == target and they were:
            # invited: They are accepting the invitation
            # joined: It's a NOOP
            if event.user_id != event.target_user_id:
                raise AuthError(403, "Cannot force another user to join.")
            elif room.is_public:
                pass  # anyone can join public rooms.
            elif (not caller or caller.membership not in
                    [Membership.INVITE, Membership.JOIN]):
                raise AuthError(403, "You are not invited to this room.")
        elif Membership.LEAVE == membership:
            if not caller_in_room:  # trying to leave a room you aren't joined
                raise AuthError(403, "You are not in room %s." % event.room_id)
            elif event.target_user_id != event.user_id:
                # trying to force another user to leave
                raise AuthError(403, "Cannot force %s to leave." %
                                event.target_user_id)
        else:
            raise AuthError(500, "Unknown membership %s" % membership)

        defer.returnValue(True)

    def get_user_by_req(self, request):
        """ Get a registered user's ID.

        Args:
            request - An HTTP request with an access_token query parameter.
        Returns:
            UserID : User ID object of the user making the request
        Raises:
            AuthError if no user by that token exists or the token is invalid.
        """
        # Can optionally look elsewhere in the request (e.g. headers)
        try:
            return self.get_user_by_token(request.args["access_token"][0])
        except KeyError:
            raise AuthError(403, "Missing access token.")

    @defer.inlineCallbacks
    def get_user_by_token(self, token):
        """ Get a registered user's ID.

        Args:
            token (str): The access token to get the user by.
        Returns:
            UserID : User ID object of the user who has that access token.
        Raises:
            AuthError if no user by that token exists or the token is invalid.
        """
        try:
            user_id = yield self.store.get_user_by_token(token=token)
            defer.returnValue(self.hs.parse_userid(user_id))
        except StoreError:
            raise AuthError(403, "Unrecognised access token.")
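The invite/join/leave branching in `is_membership_change_allowed` amounts to a small rule table. The sketch below is a hypothetical, store-free restatement of those checks as a pure function (the name `check_membership_change` and its boolean arguments are illustrative, not part of the module); it returns a reason string on failure instead of raising `AuthError`:

```python
# Hypothetical pure-function restatement of the membership rules above.
# Returns None when the change is allowed, else a short reason string.
def check_membership_change(membership, caller_in_room, target_in_room,
                            caller_is_target, room_is_public, caller_invited):
    if membership == "invite":
        # Invites are valid iff the caller is joined and the target is not.
        if not caller_in_room:
            return "caller not in room"
        if target_in_room:
            return "target already in room"
    elif membership == "join":
        # Joins are valid iff caller == target and the caller was invited,
        # already joined, or the room is public.
        if not caller_is_target:
            return "cannot force another user to join"
        if not (room_is_public or caller_in_room or caller_invited):
            return "not invited"
    elif membership == "leave":
        if not caller_in_room:
            return "caller not in room"
        if not caller_is_target:
            return "cannot force another user to leave"
    else:
        return "unknown membership"
    return None
```

Separating the rules from the datastore lookups like this makes them trivially testable without a running homeserver.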
42
synapse/api/constants.py
Normal file
42
synapse/api/constants.py
Normal file
@ -0,0 +1,42 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Contains constants from the specification."""


class Membership(object):

    """Represents the membership states of a user in a room."""
    INVITE = u"invite"
    JOIN = u"join"
    KNOCK = u"knock"
    LEAVE = u"leave"


class Feedback(object):

    """Represents the types of feedback a user can send in response to a
    message."""

    DELIVERED = u"d"
    READ = u"r"
    LIST = (DELIVERED, READ)


class PresenceState(object):
    """Represents the presence state of a user."""
    OFFLINE = 0
    BUSY = 1
    ONLINE = 2
    FREE_FOR_CHAT = 3
114
synapse/api/errors.py
Normal file
114
synapse/api/errors.py
Normal file
@ -0,0 +1,114 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Contains exceptions and error codes."""

import logging


class Codes(object):
    FORBIDDEN = "M_FORBIDDEN"
    BAD_JSON = "M_BAD_JSON"
    NOT_JSON = "M_NOT_JSON"
    USER_IN_USE = "M_USER_IN_USE"
    ROOM_IN_USE = "M_ROOM_IN_USE"
    BAD_PAGINATION = "M_BAD_PAGINATION"
    UNKNOWN = "M_UNKNOWN"
    NOT_FOUND = "M_NOT_FOUND"


class CodeMessageException(Exception):
    """An exception with integer code and message string attributes."""

    def __init__(self, code, msg):
        logging.error("%s: %s, %s", type(self).__name__, code, msg)
        super(CodeMessageException, self).__init__("%d: %s" % (code, msg))
        self.code = code
        self.msg = msg


class SynapseError(CodeMessageException):
    """A base error which can be caught for all synapse events."""
    def __init__(self, code, msg, errcode=""):
        """Constructs a synapse error.

        Args:
            code (int): The integer error code (typically an HTTP response
                code)
            msg (str): The human-readable error message.
            errcode (str): The error code, e.g. 'M_FORBIDDEN'.
        """
        super(SynapseError, self).__init__(code, msg)
        self.errcode = errcode


class RoomError(SynapseError):
    """An error raised when a room event fails."""
    pass


class RegistrationError(SynapseError):
    """An error raised when a registration event fails."""
    pass


class AuthError(SynapseError):
    """An error raised when there was a problem authorising an event."""

    def __init__(self, *args, **kwargs):
        if "errcode" not in kwargs:
            kwargs["errcode"] = Codes.FORBIDDEN
        super(AuthError, self).__init__(*args, **kwargs)


class EventStreamError(SynapseError):
    """An error raised when there is a problem with the event stream."""
    pass


class LoginError(SynapseError):
    """An error raised when there was a problem logging in."""
    pass


class StoreError(SynapseError):
    """An error raised when there was a problem storing some data."""
    pass


def cs_exception(exception):
    if isinstance(exception, SynapseError):
        return cs_error(
            exception.msg,
            Codes.UNKNOWN if not exception.errcode else exception.errcode)
    elif isinstance(exception, CodeMessageException):
        return cs_error(exception.msg)
    else:
        logging.error("Unknown exception type: %s", type(exception))


def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
    """ Utility method for constructing an error response for client-server
    interactions.

    Args:
        msg (str): The error message.
        code (int): The error code.
        kwargs : Additional keys to add to the response.
    Returns:
        A dict representing the error response JSON.
    """
    err = {"error": msg, "errcode": code}
    for key, value in kwargs.iteritems():
        err[key] = value
    return err
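To make the shape of the error dict concrete, here is a minimal standalone mirror of `cs_error` (reimplemented rather than imported, with `.items()` in place of the Python 2 `.iteritems()`; the `retry_after_ms` key is only an illustrative extra field):

```python
# Standalone mirror of cs_error: builds the client-server error response dict.
def cs_error(msg, code="M_UNKNOWN", **kwargs):
    err = {"error": msg, "errcode": code}
    for key, value in kwargs.items():  # .iteritems() in the Python 2 original
        err[key] = value
    return err

# Any extra keyword arguments are merged into the response verbatim.
resp = cs_error("You are not in room !abc.", code="M_FORBIDDEN",
                retry_after_ms=5000)
```

Every error path in the servlets can thus serialise to the same `{"error": ..., "errcode": ...}` JSON shape.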
152
synapse/api/events/__init__.py
Normal file
152
synapse/api/events/__init__.py
Normal file
@ -0,0 +1,152 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.errors import SynapseError, Codes
from synapse.util.jsonobject import JsonEncodedObject


class SynapseEvent(JsonEncodedObject):

    """Base class for Synapse events. These are JSON objects which must abide
    by a certain well-defined structure.
    """

    # Attributes that are currently assumed by the federation side:
    # Mandatory:
    #  - event_id
    #  - room_id
    #  - type
    #  - is_state
    #
    # Optional:
    #  - state_key (mandatory when is_state is True)
    #  - prev_events (these can be filled out by the federation layer itself.)
    #  - prev_state

    valid_keys = [
        "event_id",
        "type",
        "room_id",
        "user_id",  # sender/initiator
        "content",  # HTTP body, JSON
    ]

    internal_keys = [
        "is_state",
        "state_key",
        "prev_events",
        "prev_state",
        "depth",
        "destinations",
        "origin",
    ]

    required_keys = [
        "event_id",
        "room_id",
        "content",
    ]

    def __init__(self, raises=True, **kwargs):
        super(SynapseEvent, self).__init__(**kwargs)
        if "content" in kwargs:
            self.check_json(self.content, raises=raises)

    def get_content_template(self):
        """ Retrieve the JSON template for this event as a dict.

        The template must be a dict representing the JSON to match. Only
        required keys should be present. The values of the keys in the template
        are checked via type() to the values of the same keys in the actual
        event JSON.

        NB: If loading content via json.loads, you MUST define strings as
        unicode.

        For example:
            Content:
                {
                    "name": u"bob",
                    "age": 18,
                    "friends": [u"mike", u"jill"]
                }
            Template:
                {
                    "name": u"string",
                    "age": 0,
                    "friends": [u"string"]
                }
            The values "string" and 0 could be anything, so long as the types
            are the same as the content.
        """
        raise NotImplementedError("get_content_template not implemented.")

    def check_json(self, content, raises=True):
        """Checks the given JSON content abides by the rules of the template.

        Args:
            content : A JSON object to check.
            raises: True to raise a SynapseError if the check fails.
        Returns:
            True if the content passes the template. Returns False if the check
            fails and raises=False.
        Raises:
            SynapseError if the check fails and raises=True.
        """
        # recursively call to inspect each layer
        err_msg = self._check_json(content, self.get_content_template())
        if err_msg:
            if raises:
                raise SynapseError(400, err_msg, Codes.BAD_JSON)
            else:
                return False
        else:
            return True

    def _check_json(self, content, template):
        """Check content and template matches.

        If the template is a dict, each key in the dict will be validated with
        the content, else it will just compare the types of content and
        template. This basic type check is required because this function will
        be recursively called and could be called with just strs or ints.

        Args:
            content: The content to validate.
            template: The validation template.
        Returns:
            str: An error message if the validation fails, else None.
        """
        if type(content) != type(template):
            return "Mismatched types: %s" % template

        if type(template) == dict:
            for key in template:
                if key not in content:
                    return "Missing %s key" % key

                if type(content[key]) != type(template[key]):
                    return "Key %s is of the wrong type." % key

                if type(content[key]) == dict:
                    # we must go deeper
                    msg = self._check_json(content[key], template[key])
                    if msg:
                        return msg
                elif type(content[key]) == list:
                    # make sure each item type in content matches the template
                    for entry in content[key]:
                        msg = self._check_json(entry, template[key][0])
                        if msg:
                            return msg
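The recursive template check can be exercised on its own. The sketch below restates `_check_json` as a free function (a hypothetical helper name, not part of the module) using the `name`/`age`/`friends` example from the docstring above; in Python 3 the `u""` prefixes are unnecessary since all strings are unicode:

```python
# Standalone mirror of SynapseEvent._check_json: the *types* of template
# values are compared against the content, recursing into dicts and lists.
def check_json(content, template):
    if type(content) != type(template):
        return "Mismatched types: %s" % template
    if type(template) == dict:
        for key in template:
            if key not in content:
                return "Missing %s key" % key
            if type(content[key]) != type(template[key]):
                return "Key %s is of the wrong type." % key
            if type(content[key]) == dict:
                # we must go deeper
                msg = check_json(content[key], template[key])
                if msg:
                    return msg
            elif type(content[key]) == list:
                # each list entry must match the first template entry's type
                for entry in content[key]:
                    msg = check_json(entry, template[key][0])
                    if msg:
                        return msg
    return None  # no error: content matches the template

template = {"name": "string", "age": 0, "friends": ["string"]}
ok = check_json({"name": "bob", "age": 18, "friends": ["mike", "jill"]},
                template)
bad = check_json({"name": "bob", "age": "18", "friends": []}, template)
```

`ok` is `None` (the content matches), while `bad` reports that `age` has the wrong type, since a string was supplied where the template holds an int.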
50
synapse/api/events/factory.py
Normal file
50
synapse/api/events/factory.py
Normal file
@ -0,0 +1,50 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.events.room import (
    RoomTopicEvent, MessageEvent, RoomMemberEvent, FeedbackEvent,
    InviteJoinEvent, RoomConfigEvent
)

from synapse.util.stringutils import random_string


class EventFactory(object):

    _event_classes = [
        RoomTopicEvent,
        MessageEvent,
        RoomMemberEvent,
        FeedbackEvent,
        InviteJoinEvent,
        RoomConfigEvent
    ]

    def __init__(self):
        self._event_list = {}  # dict of TYPE to event class
        for event_class in EventFactory._event_classes:
            self._event_list[event_class.TYPE] = event_class

    def create_event(self, etype=None, **kwargs):
        kwargs["type"] = etype
        if "event_id" not in kwargs:
            kwargs["event_id"] = random_string(10)

        try:
            handler = self._event_list[etype]
        except KeyError:  # unknown event type
            # TODO allow custom event types.
            raise NotImplementedError("Unknown etype=%s" % etype)

        return handler(**kwargs)
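The TYPE-to-class dispatch in `EventFactory` can be shown in miniature. This sketch uses a stand-in event class and factory (`TopicEvent` and `MiniEventFactory` are illustrative names, not the real `synapse` classes) to demonstrate how `create_event` looks up the class by type and fills in a random `event_id`:

```python
import random
import string


class TopicEvent(object):  # stand-in for RoomTopicEvent
    TYPE = "m.room.topic"

    def __init__(self, **kwargs):
        # the real SynapseEvent validates content; here we just store kwargs
        self.__dict__.update(kwargs)


class MiniEventFactory(object):
    _event_classes = [TopicEvent]

    def __init__(self):
        # dict of TYPE to event class, as in the original
        self._event_list = {cls.TYPE: cls for cls in self._event_classes}

    def create_event(self, etype=None, **kwargs):
        kwargs["type"] = etype
        if "event_id" not in kwargs:
            # random 10-char ID, standing in for random_string(10)
            kwargs["event_id"] = "".join(
                random.choice(string.ascii_letters) for _ in range(10)
            )
        try:
            handler = self._event_list[etype]
        except KeyError:  # unknown event type
            raise NotImplementedError("Unknown etype=%s" % etype)
        return handler(**kwargs)


factory = MiniEventFactory()
event = factory.create_event(etype="m.room.topic", room_id="!abc",
                             content={"topic": "hi"})
```

Registering classes by their `TYPE` constant keeps event construction in one place, so callers never instantiate event classes directly.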
99
synapse/api/events/room.py
Normal file
99
synapse/api/events/room.py
Normal file
@ -0,0 +1,99 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from . import SynapseEvent


class RoomTopicEvent(SynapseEvent):
    TYPE = "m.room.topic"

    def __init__(self, **kwargs):
        kwargs["state_key"] = ""
        super(RoomTopicEvent, self).__init__(**kwargs)

    def get_content_template(self):
        return {"topic": u"string"}


class RoomMemberEvent(SynapseEvent):
    TYPE = "m.room.member"

    valid_keys = SynapseEvent.valid_keys + [
        "target_user_id",  # target
        "membership",      # action
    ]

    def __init__(self, **kwargs):
        if "target_user_id" in kwargs:
            kwargs["state_key"] = kwargs["target_user_id"]
        super(RoomMemberEvent, self).__init__(**kwargs)

    def get_content_template(self):
        return {"membership": u"string"}


class MessageEvent(SynapseEvent):
    TYPE = "m.room.message"

    valid_keys = SynapseEvent.valid_keys + [
        "msg_id",  # unique per room + user combo
    ]

    def __init__(self, **kwargs):
        super(MessageEvent, self).__init__(**kwargs)

    def get_content_template(self):
        return {"msgtype": u"string"}


class FeedbackEvent(SynapseEvent):
    TYPE = "m.room.message.feedback"

    valid_keys = SynapseEvent.valid_keys + [
        "msg_id",         # the message ID being acknowledged
        "msg_sender_id",  # person who is sending the feedback is 'user_id'
        "feedback_type",  # the type of feedback (delivery, read, etc)
    ]

    def __init__(self, **kwargs):
        super(FeedbackEvent, self).__init__(**kwargs)

    def get_content_template(self):
        return {}


class InviteJoinEvent(SynapseEvent):
    TYPE = "m.room.invite_join"

    valid_keys = SynapseEvent.valid_keys + [
        "target_user_id",
        "target_host",
    ]

    def __init__(self, **kwargs):
        super(InviteJoinEvent, self).__init__(**kwargs)

    def get_content_template(self):
        return {}


class RoomConfigEvent(SynapseEvent):
    TYPE = "m.room.config"

    def __init__(self, **kwargs):
        kwargs["state_key"] = ""
        super(RoomConfigEvent, self).__init__(**kwargs)

    def get_content_template(self):
        return {}
186    synapse/api/notifier.py    Normal file
@@ -0,0 +1,186 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.constants import Membership
from synapse.api.events.room import RoomMemberEvent

from twisted.internet import defer
from twisted.internet import reactor

import logging

logger = logging.getLogger(__name__)


class Notifier(object):

    def __init__(self, hs):
        self.store = hs.get_datastore()
        self.hs = hs
        self.stored_event_listeners = {}

    @defer.inlineCallbacks
    def on_new_room_event(self, event, store_id):
        """Called when there is a new room event which may potentially be sent
        down listening users' event streams.

        This function looks for interested *users* who may want to be notified
        for this event. This is different to users requesting from the event
        stream, which looks for interested *events* for this user.

        Args:
            event (SynapseEvent): The new event, which must have a room_id.
            store_id (int): The ID of this event after it was stored with the
                data store.
        """
        member_list = yield self.store.get_room_members(room_id=event.room_id,
                                                        membership="join")
        if not member_list:
            member_list = []

        member_list = [u.user_id for u in member_list]

        # invites MUST prod the person being invited, who won't be in the room.
        if (event.type == RoomMemberEvent.TYPE and
                event.content["membership"] == Membership.INVITE):
            member_list.append(event.target_user_id)

        for user_id in member_list:
            if user_id in self.stored_event_listeners:
                self._notify_and_callback(
                    user_id=user_id,
                    event_data=event.get_dict(),
                    stream_type=event.type,
                    store_id=store_id)

    def on_new_user_event(self, user_id, event_data, stream_type, store_id):
        if user_id in self.stored_event_listeners:
            self._notify_and_callback(
                user_id=user_id,
                event_data=event_data,
                stream_type=stream_type,
                store_id=store_id
            )

    def _notify_and_callback(self, user_id, event_data, stream_type, store_id):
        logger.debug(
            "Notifying %s of a new event.",
            user_id
        )

        stream_ids = list(self.stored_event_listeners[user_id])
        for stream_id in stream_ids:
            self._notify_and_callback_stream(user_id, stream_id, event_data,
                                             stream_type, store_id)

        if not self.stored_event_listeners[user_id]:
            del self.stored_event_listeners[user_id]

    def _notify_and_callback_stream(self, user_id, stream_id, event_data,
                                    stream_type, store_id):

        event_listener = self.stored_event_listeners[user_id].pop(stream_id)
        return_event_object = {
            k: event_listener[k] for k in ["start", "chunk", "end"]
        }

        # work out the new end token
        token = event_listener["start"]
        end = self._next_token(stream_type, store_id, token)
        return_event_object["end"] = end

        # add the event to the chunk
        chunk = event_listener["chunk"]
        chunk.append(event_data)

        # callback the defer. We know this can't have been resolved before as
        # we always remove the event_listener from the map before resolving.
        event_listener["defer"].callback(return_event_object)

    def _next_token(self, stream_type, store_id, current_token):
        stream_handler = self.hs.get_handlers().event_stream_handler
        return stream_handler.get_event_stream_token(
            stream_type,
            store_id,
            current_token
        )

    def store_events_for(self, user_id=None, stream_id=None, from_tok=None):
        """Store all incoming events for this user. This should be paired with
        get_events_for to return chunked data.

        Args:
            user_id (str): The user to monitor incoming events for.
            stream_id (object): The stream that is receiving events.
            from_tok (str): The token to monitor incoming events from.
        """
        event_listener = {
            "start": from_tok,
            "chunk": [],
            "end": from_tok,
            "defer": defer.Deferred(),
        }

        if user_id not in self.stored_event_listeners:
            self.stored_event_listeners[user_id] = {stream_id: event_listener}
        else:
            self.stored_event_listeners[user_id][stream_id] = event_listener

    def purge_events_for(self, user_id=None, stream_id=None):
        """Purges any stored events for this user.

        Args:
            user_id (str): The user to purge stored events for.
        """
        try:
            del self.stored_event_listeners[user_id][stream_id]
            if not self.stored_event_listeners[user_id]:
                del self.stored_event_listeners[user_id]
        except KeyError:
            pass

    def get_events_for(self, user_id=None, stream_id=None, timeout=0):
        """Retrieve stored events for this user, waiting if necessary.

        It is advisable to wrap this call in a maybeDeferred.

        Args:
            user_id (str): The user to get events for.
            timeout (int): The time in milliseconds to wait before giving up.
        Returns:
            A Deferred or a dict containing the chunk data, depending on if
            there was data to return yet. The Deferred callback may be None if
            there were no events before the timeout expired.
        """
        logger.debug("%s is listening for events.", user_id)

        if len(self.stored_event_listeners[user_id][stream_id]["chunk"]) > 0:
            logger.debug("%s returning existing chunk.", user_id)
            return self.stored_event_listeners[user_id][stream_id]

        reactor.callLater(
            (timeout / 1000.0), self._timeout, user_id, stream_id
        )
        return self.stored_event_listeners[user_id][stream_id]["defer"]

    def _timeout(self, user_id, stream_id):
        try:
            # We remove the event_listener from the map so that we can't
            # resolve the deferred twice.
            event_listeners = self.stored_event_listeners[user_id]
            event_listener = event_listeners.pop(stream_id)
            event_listener["defer"].callback(None)
            logger.debug("%s event listening timed out.", user_id)
        except KeyError:
            pass
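Notifier keeps a per-user map of per-stream listener dicts and pops each listener before resolving it, so a deferred can never fire twice. A Twisted-free sketch of that pop-then-resolve bookkeeping (class and method names are illustrative; the real code resolves a `defer.Deferred` rather than returning a list):

```python
class MiniNotifier(object):
    """Sketch of Notifier's listener bookkeeping (illustrative names)."""

    def __init__(self):
        # user_id -> {stream_id: listener dict}, mirroring
        # Notifier.stored_event_listeners.
        self.listeners = {}

    def store_events_for(self, user_id, stream_id, from_tok):
        # One pending listener per (user, stream), as in store_events_for.
        self.listeners.setdefault(user_id, {})[stream_id] = {
            "start": from_tok,
            "chunk": [],
        }

    def notify(self, user_id, event_data, new_end):
        # Pop each listener before resolving it, so it can fire only once;
        # the real code calls event_listener["defer"].callback(...) here.
        resolved = []
        for stream_id in list(self.listeners.get(user_id, {})):
            listener = self.listeners[user_id].pop(stream_id)
            listener["chunk"].append(event_data)
            resolved.append({
                "start": listener["start"],
                "chunk": listener["chunk"],
                "end": new_end,
            })
        # Drop the user's entry once every stream listener has resolved.
        if user_id in self.listeners and not self.listeners[user_id]:
            del self.listeners[user_id]
        return resolved


notifier = MiniNotifier()
notifier.store_events_for("@alice:example.com", stream_id=1, from_tok="0_0")
results = notifier.notify("@alice:example.com",
                          {"type": "m.room.message"}, "1_0")
print(results[0]["end"])                           # 1_0
print("@alice:example.com" in notifier.listeners)  # False
```

Popping before resolving is the whole trick: the timeout path and the notify path can race, but whichever pops the listener first is the only one that can resolve it.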
96    synapse/api/streams/__init__.py    Normal file
@@ -0,0 +1,96 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.errors import SynapseError


class PaginationConfig(object):

    """A configuration object which stores pagination parameters."""

    def __init__(self, from_tok=None, to_tok=None, limit=0):
        self.from_tok = from_tok
        self.to_tok = to_tok
        self.limit = limit

    @classmethod
    def from_request(cls, request, raise_invalid_params=True):
        params = {
            "from_tok": PaginationStream.TOK_START,
            "to_tok": PaginationStream.TOK_END,
            "limit": 0
        }

        query_param_mappings = [  # 3-tuple of qp_key, attribute, rules
            ("from", "from_tok", lambda x: type(x) == str),
            ("to", "to_tok", lambda x: type(x) == str),
            ("limit", "limit", lambda x: x.isdigit())
        ]

        for qp, attr, is_valid in query_param_mappings:
            if qp in request.args:
                if is_valid(request.args[qp][0]):
                    params[attr] = request.args[qp][0]
                elif raise_invalid_params:
                    raise SynapseError(400, "%s parameter is invalid." % qp)

        return PaginationConfig(**params)


class PaginationStream(object):

    """ An interface for streaming data as chunks. """

    TOK_START = "START"
    TOK_END = "END"

    def get_chunk(self, config=None):
        """ Return the next chunk in the stream.

        Args:
            config (PaginationConfig): The config to aid which chunk to get.
        Returns:
            A dict containing the new start token "start", the new end token
            "end" and the data "chunk" as a list.
        """
        raise NotImplementedError()


class StreamData(object):

    """ An interface for obtaining streaming data from a table. """

    def __init__(self, hs):
        self.hs = hs
        self.store = hs.get_datastore()

    def get_rows(self, user_id, from_pkey, to_pkey, limit):
        """ Get event stream data between the specified pkeys.

        Args:
            user_id: The user's ID.
            from_pkey: The starting pkey.
            to_pkey: The end pkey. May be -1 to mean "latest".
            limit: The max number of results to return.
        Returns:
            A tuple containing the list of event stream data and the last pkey.
        """
        raise NotImplementedError()

    def max_token(self):
        """ Get the latest currently-valid token.

        Returns:
            The latest token."""
        raise NotImplementedError()
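`PaginationConfig.from_request` validates each query parameter against a predicate and otherwise falls back to the stream's magic START/END tokens. A standalone sketch of that mapping, using a plain dict-of-lists in place of Twisted's `request.args` (names are illustrative; note that, as in the original, a valid `limit` is kept as a string):

```python
TOK_START = "START"
TOK_END = "END"


class PaginationConfig(object):
    def __init__(self, from_tok=None, to_tok=None, limit=0):
        self.from_tok = from_tok
        self.to_tok = to_tok
        self.limit = limit


def config_from_args(args, raise_invalid_params=True):
    """Sketch of PaginationConfig.from_request; `args` mimics Twisted's
    request.args, a dict of name -> list of values (illustrative helper)."""
    params = {"from_tok": TOK_START, "to_tok": TOK_END, "limit": 0}
    # 3-tuples of query-param key, attribute name, validation rule.
    query_param_mappings = [
        ("from", "from_tok", lambda x: isinstance(x, str)),
        ("to", "to_tok", lambda x: isinstance(x, str)),
        ("limit", "limit", lambda x: x.isdigit()),
    ]
    for qp, attr, is_valid in query_param_mappings:
        if qp in args:
            if is_valid(args[qp][0]):
                params[attr] = args[qp][0]  # note: "limit" stays a string
            elif raise_invalid_params:
                # The real code raises SynapseError(400, ...).
                raise ValueError("%s parameter is invalid." % qp)
    return PaginationConfig(**params)


cfg = config_from_args({"from": ["1_2"], "limit": ["10"]})
print(cfg.from_tok, cfg.to_tok, cfg.limit)  # 1_2 END 10
```

Only the first value of a repeated query parameter is consulted (`args[qp][0]`), which matches the original's behaviour.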
247    synapse/api/streams/event.py    Normal file
@@ -0,0 +1,247 @@
# -*- coding: utf-8 -*-
# Copyright 2014 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module contains classes for streaming from the event stream: /events.
"""
from twisted.internet import defer

from synapse.api.errors import EventStreamError
from synapse.api.events.room import (
    RoomMemberEvent, MessageEvent, FeedbackEvent, RoomTopicEvent
)
from synapse.api.streams import PaginationStream, StreamData

import logging

logger = logging.getLogger(__name__)


class MessagesStreamData(StreamData):
    EVENT_TYPE = MessageEvent.TYPE

    def __init__(self, hs, room_id=None, feedback=False):
        super(MessagesStreamData, self).__init__(hs)
        self.room_id = room_id
        self.with_feedback = feedback

    @defer.inlineCallbacks
    def get_rows(self, user_id, from_key, to_key, limit):
        (data, latest_ver) = yield self.store.get_message_stream(
            user_id=user_id,
            from_key=from_key,
            to_key=to_key,
            limit=limit,
            room_id=self.room_id,
            with_feedback=self.with_feedback
        )
        defer.returnValue((data, latest_ver))

    @defer.inlineCallbacks
    def max_token(self):
        val = yield self.store.get_max_message_id()
        defer.returnValue(val)


class RoomMemberStreamData(StreamData):
    EVENT_TYPE = RoomMemberEvent.TYPE

    @defer.inlineCallbacks
    def get_rows(self, user_id, from_key, to_key, limit):
        (data, latest_ver) = yield self.store.get_room_member_stream(
            user_id=user_id,
            from_key=from_key,
            to_key=to_key
        )

        defer.returnValue((data, latest_ver))

    @defer.inlineCallbacks
    def max_token(self):
        val = yield self.store.get_max_room_member_id()
        defer.returnValue(val)


class FeedbackStreamData(StreamData):
    EVENT_TYPE = FeedbackEvent.TYPE

    def __init__(self, hs, room_id=None):
        super(FeedbackStreamData, self).__init__(hs)
        self.room_id = room_id

    @defer.inlineCallbacks
    def get_rows(self, user_id, from_key, to_key, limit):
        (data, latest_ver) = yield self.store.get_feedback_stream(
            user_id=user_id,
            from_key=from_key,
            to_key=to_key,
            limit=limit,
            room_id=self.room_id
        )
        defer.returnValue((data, latest_ver))

    @defer.inlineCallbacks
    def max_token(self):
        val = yield self.store.get_max_feedback_id()
        defer.returnValue(val)


class RoomDataStreamData(StreamData):
    EVENT_TYPE = RoomTopicEvent.TYPE  # TODO need multiple event types

    def __init__(self, hs, room_id=None):
        super(RoomDataStreamData, self).__init__(hs)
        self.room_id = room_id

    @defer.inlineCallbacks
    def get_rows(self, user_id, from_key, to_key, limit):
        (data, latest_ver) = yield self.store.get_room_data_stream(
            user_id=user_id,
            from_key=from_key,
            to_key=to_key,
            limit=limit,
            room_id=self.room_id
        )
        defer.returnValue((data, latest_ver))

    @defer.inlineCallbacks
    def max_token(self):
        val = yield self.store.get_max_room_data_id()
        defer.returnValue(val)


class EventStream(PaginationStream):

    SEPARATOR = '_'

    def __init__(self, user_id, stream_data_list):
        super(EventStream, self).__init__()
        self.user_id = user_id
        self.stream_data = stream_data_list

    @defer.inlineCallbacks
    def fix_tokens(self, pagination_config):
        pagination_config.from_tok = yield self.fix_token(
            pagination_config.from_tok)
        pagination_config.to_tok = yield self.fix_token(
            pagination_config.to_tok)
        defer.returnValue(pagination_config)

    @defer.inlineCallbacks
    def fix_token(self, token):
        """Fixes unknown values in a token to known values.

        Args:
            token (str): The token to fix up.
        Returns:
            The fixed-up token, which may == token.
        """
        # replace TOK_START and TOK_END with 0_0_0 or -1_-1_-1 depending.
        replacements = [
            (PaginationStream.TOK_START, "0"),
            (PaginationStream.TOK_END, "-1")
        ]
        for magic_token, key in replacements:
            if magic_token == token:
                token = EventStream.SEPARATOR.join(
                    [key] * len(self.stream_data)
                )

        # replace -1 values with an actual pkey
        token_segments = self._split_token(token)
        for i, tok in enumerate(token_segments):
            if tok == -1:
                # add 1 to the max token because results are EXCLUSIVE from the
                # latest version.
                token_segments[i] = 1 + (yield self.stream_data[i].max_token())
        defer.returnValue(EventStream.SEPARATOR.join(
            str(x) for x in token_segments
        ))

    @defer.inlineCallbacks
    def get_chunk(self, config=None):
        # no support for limit on >1 streams, makes no sense.
        if config.limit and len(self.stream_data) > 1:
            raise EventStreamError(
                400, "Limit not supported on multiplexed streams."
            )

        (chunk_data, next_tok) = yield self._get_chunk_data(config.from_tok,
                                                            config.to_tok,
                                                            config.limit)

        defer.returnValue({
            "chunk": chunk_data,
            "start": config.from_tok,
            "end": next_tok
        })

    @defer.inlineCallbacks
    def _get_chunk_data(self, from_tok, to_tok, limit):
        """ Get event data between the two tokens.

        Tokens are SEPARATOR separated values representing pkey values of
        certain tables, and the position determines the StreamData invoked
        according to the STREAM_DATA list.

        The magic value '-1' can be used to get the latest value.

        Args:
            from_tok: The token to start from.
            to_tok: The token to end at. Must have values > from_tok or be -1.
        Returns:
            A list of event data.
        Raises:
            EventStreamError if something went wrong.
        """
        # sanity check
        if (from_tok.count(EventStream.SEPARATOR) !=
                to_tok.count(EventStream.SEPARATOR) or
                (from_tok.count(EventStream.SEPARATOR) + 1) !=
                len(self.stream_data)):
            raise EventStreamError(400, "Token lengths don't match.")

        chunk = []
        next_ver = []
        for i, (from_pkey, to_pkey) in enumerate(zip(
            self._split_token(from_tok),
            self._split_token(to_tok)
        )):
            if from_pkey == to_pkey:
                # tokens are the same, we have nothing to do.
                next_ver.append(str(to_pkey))
                continue

            (event_chunk, max_pkey) = yield self.stream_data[i].get_rows(
                self.user_id, from_pkey, to_pkey, limit
            )

            chunk += event_chunk
            next_ver.append(str(max_pkey))

        defer.returnValue((chunk, EventStream.SEPARATOR.join(next_ver)))

    def _split_token(self, token):
        """Splits the given token into a list of pkeys.

        Args:
            token (str): The token with SEPARATOR values.
        Returns:
            A list of ints.
        """
        segments = token.split(EventStream.SEPARATOR)
        try:
            int_segments = [int(x) for x in segments]
        except ValueError:
            raise EventStreamError(400, "Bad token: %s" % token)
        return int_segments
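`fix_token` expands the magic START/END values into one pkey segment per multiplexed stream, then replaces any `-1` segment with `max_token + 1` (exclusive of the latest version). A self-contained sketch with the per-stream max tokens passed in as a plain list (the function signature is illustrative; the real method asynchronously queries each StreamData's `max_token()`):

```python
SEPARATOR = "_"
TOK_START = "START"
TOK_END = "END"


def fix_token(token, max_tokens):
    """Sketch of EventStream.fix_token; max_tokens holds each stream's
    current max pkey (illustrative stand-in for StreamData.max_token())."""
    n = len(max_tokens)
    # Expand the magic values into one segment per multiplexed stream.
    if token == TOK_START:
        token = SEPARATOR.join(["0"] * n)
    elif token == TOK_END:
        token = SEPARATOR.join(["-1"] * n)

    segments = [int(x) for x in token.split(SEPARATOR)]
    for i, tok in enumerate(segments):
        if tok == -1:
            # +1 because results are exclusive of the latest version.
            segments[i] = 1 + max_tokens[i]
    return SEPARATOR.join(str(x) for x in segments)


print(fix_token("END", [5, 9]))  # 6_10
print(fix_token("3_4", [5, 9]))  # 3_4
```

This is why `_get_chunk_data` can insist the two tokens have the same number of SEPARATOR-delimited segments as there are streams: by the time it runs, every magic value has already been normalised.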