Merge branch 'develop' of github.com:matrix-org/synapse into room_config

Commit: 8e2d4c6da5

@@ -225,8 +225,13 @@ class SynapseCmd(cmd.Cmd):
         json_res = yield self.http_client.do_request("GET", url)
         print json_res
         if "flows" not in json_res:
             print "Failed to find any login flows."
             defer.returnValue(False)

         flow = json_res["flows"][0]  # assume first is the one we want.
         if ("type" not in flow or "m.login.password" != flow["type"] or
                 "stages" in flow):
             fallback_url = self._url() + "/login/fallback"
             print ("Unable to login via the command line client. Please visit "
                    "%s to login." % fallback_url)
docs/specification.rst (new file, 916 lines)
@@ -0,0 +1,916 @@
Matrix Specification
====================

TODO(Introduction) : Matthew
 - Similar to intro paragraph from README.
 - Explaining the overall mission, what this spec describes...
 - "What is Matrix?"
 - Draw parallels with email?

Architecture
============

Clients transmit data to other clients through home servers (HSes). Clients do
not communicate with each other directly.

::

                         How data flows between clients
                         ==============================

       { Matrix client A }                             { Matrix client B }
           ^          |                                    ^          |
           |  events  |                                    |  events  |
           |          V                                    |          V
       +------------------+                            +------------------+
       |                  |---------( HTTP )---------->|                  |
       |   Home Server    |                            |   Home Server    |
       |                  |<--------( HTTP )-----------|                  |
       +------------------+        Federation          +------------------+

A "Client" is an end-user, typically a human using a web application or mobile
app. Clients use the "Client-to-Server" (C-S) API to communicate with their
home server. A single Client is usually responsible for a single user account.
A user account is represented by their "User ID". This ID is namespaced to the
home server which allocated the account and looks like::

  @localpart:domain

The ``localpart`` of a user ID may be a user name, or an opaque ID identifying
this user.

A "Home Server" is a server which provides C-S APIs and has the ability to
federate with other HSes. It is typically responsible for multiple clients.
"Federation" is the term used to describe the sharing of data between two or
more home servers.

Data in Matrix is encapsulated in an "Event". An event is an action within the
system. Typically each action (e.g. sending a message) correlates with exactly
one event. Each event has a ``type`` which is used to differentiate different
kinds of data. ``type`` values SHOULD be namespaced according to standard Java
package naming conventions, e.g. ``com.example.myapp.event``. Events are
usually sent in the context of a "Room".
Room structure
--------------

A room is a conceptual place where users can send and receive events. Rooms
can be created, joined and left. Events are sent to a room, and all
participants in that room will receive the event. Rooms are uniquely
identified via a "Room ID", which looks like::

  !opaque_id:domain

There is exactly one room ID for each room. Whilst the room ID does contain a
domain, it is simply for namespacing room IDs. The room does NOT reside on the
domain specified. Room IDs are not meant to be human readable.

The following diagram shows an ``m.room.message`` event being sent in the room
``!qporfwt:matrix.org``::

       { @alice:matrix.org }                           { @bob:domain.com }
               |                                               ^
               |                                               |
      Room ID: !qporfwt:matrix.org             Room ID: !qporfwt:matrix.org
      Event type: m.room.message               Event type: m.room.message
      Content: { JSON object }                 Content: { JSON object }
               |                                               |
               V                                               |
       +------------------+                          +------------------+
       |   Home Server    |                          |   Home Server    |
       |   matrix.org     |<-------Federation------->|   domain.com     |
       +------------------+                          +------------------+
               |      ..................................      |
               |______|    Partially Shared State     |_______|
                      | Room ID: !qporfwt:matrix.org  |
                      | Servers: matrix.org, domain.com |
                      | Members:                      |
                      |  - @alice:matrix.org          |
                      |  - @bob:domain.com            |
                      |................................|

Federation maintains shared state between multiple home servers, such that
when an event is sent to a room, the home server knows where to forward the
event on to, and how to process the event. Home servers do not need to have
completely shared state in order to participate in a room. State is scoped to
a single room, and federation ensures that all home servers have the
information they need, even if that means the home server has to request more
information from another home server before processing the event.
Room Aliases
------------

Each room can also have multiple "Room Aliases", which look like::

  #room_alias:domain

A room alias "points" to a room ID. The room ID the alias is pointing to can
be obtained by visiting the domain specified. Room aliases are designed to be
human-readable strings which can be used to publicise rooms. Note that the
mapping from a room alias to a room ID is not fixed, and may change over time
to point to a different room ID. For this reason, Clients SHOULD resolve the
room alias to a room ID once and then use that ID on subsequent requests.

::

          GET
   #matrix:domain.com      !aaabaa:matrix.org
          |                    ^
          |                    |
   _______V____________________|____
  |          domain.com             |
  | Mappings:                       |
  | #matrix >> !aaabaa:matrix.org   |
  | #golf   >> !wfeiofh:sport.com   |
  | #bike   >> !4rguxf:matrix.org   |
  |_________________________________|
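The three identifier formats above (user IDs, room IDs and room aliases) differ only in their leading sigil. As an illustrative sketch — not part of the spec, and the helper name is invented — a client could classify and split them like this:

```python
# Hypothetical helper, not defined by this spec: classify a Matrix
# identifier by its sigil and split it into localpart and domain.
SIGILS = {"@": "user", "!": "room", "#": "room_alias"}

def parse_identifier(identifier):
    """Return (kind, localpart, domain) for e.g. "@alice:matrix.org"."""
    sigil, rest = identifier[:1], identifier[1:]
    if sigil not in SIGILS or ":" not in rest:
        raise ValueError("not a Matrix identifier: %r" % (identifier,))
    localpart, domain = rest.split(":", 1)
    return SIGILS[sigil], localpart, domain
```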
Identity
--------
- Identity in relation to 3PIDs. Discovery of users based on 3PIDs.
- Identity servers; trusted clique of servers which replicate content.
- They govern the mapping of 3PIDs to user IDs and the creation of said mappings.
- Not strictly required in order to communicate.
API Standards
-------------
All communication in Matrix is performed over HTTP[S] using a Content-Type of
``application/json``. Any errors which occur on the Matrix API level MUST
return a "standard error response". This is a JSON object which looks like::

  {
    "errcode": "<error code>",
    "error": "<error message>"
  }

The ``error`` string will be a human-readable error message, usually a
sentence explaining what went wrong. The ``errcode`` string will be a unique
string which can be used to handle an error message, e.g. ``M_FORBIDDEN``.
These error codes should have their namespace first in ALL CAPS, followed by
a single ``_``. For example, if there was a custom namespace
``com.mydomain.here`` and a ``FORBIDDEN`` code, the error code should look
like ``COM.MYDOMAIN.HERE_FORBIDDEN``. There may be additional keys depending
on the error, but the keys ``error`` and ``errcode`` MUST always be present.

Some standard error codes are below:

:``M_FORBIDDEN``:
  Forbidden access, e.g. joining a room without permission, failed login.

:``M_UNKNOWN_TOKEN``:
  The access token specified was not recognised.

:``M_BAD_JSON``:
  Request contained valid JSON, but it was malformed in some way, e.g. missing
  required keys, invalid values for keys.

:``M_NOT_JSON``:
  Request did not contain valid JSON.

:``M_NOT_FOUND``:
  No resource was found for this request.

Some requests have unique error codes:

:``M_USER_IN_USE``:
  Encountered when trying to register a user ID which has already been taken.

:``M_ROOM_IN_USE``:
  Encountered when trying to create a room which has already been taken.

:``M_BAD_PAGINATION``:
  Encountered when specifying bad pagination query parameters.

:``M_LOGIN_EMAIL_URL_NOT_YET``:
  Encountered when polling for an email link which has not been clicked yet.
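A minimal sketch of how a client might recognise and surface a standard error response (the function names are illustrative, not from the spec):

```python
def is_standard_error(body):
    """Per the API standards above, a standard error response MUST carry
    both an "errcode" and an "error" key."""
    return isinstance(body, dict) and "errcode" in body and "error" in body

def describe_error(body):
    """Render an error as "<errcode>: <error message>" for display."""
    if not is_standard_error(body):
        raise ValueError("not a standard error response")
    return "%s: %s" % (body["errcode"], body["error"])
```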
The C-S API typically uses ``HTTP POST`` to submit requests. This means these
requests are not idempotent. The C-S API also allows ``HTTP PUT`` to make
requests idempotent. In order to use a ``PUT``, paths should be suffixed with
``/{txnId}``. ``{txnId}`` is a client-generated transaction ID which
identifies the request. Crucially, it **only** serves to distinguish new
requests from retransmits. After the request has finished, the ``{txnId}``
value should be changed (how is not specified; it could be a monotonically
increasing integer, etc). It is preferable to use ``HTTP PUT`` to make sure
requests to send messages do not get sent more than once should clients need
to retransmit requests.

Valid requests look like::

  POST /some/path/here
  {
    "key": "This is a post."
  }

  PUT /some/path/here/11
  {
    "key": "This is a put with a txnId of 11."
  }

In contrast, these are invalid requests::

  POST /some/path/here/11
  {
    "key": "This is a post, but it has a txnId."
  }

  PUT /some/path/here
  {
    "key": "This is a put but it is missing a txnId."
  }
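The transaction-ID rules above can be sketched as a client-side helper. This is illustrative, not from the spec, and uses the monotonically increasing integer the text suggests:

```python
import itertools

class TxnPathBuilder:
    """Builds PUT paths suffixed with /{txnId}. A fresh txnId is used for
    each new request; a retransmit reuses the previous txnId so the home
    server can tell a retry apart from a new message."""

    def __init__(self):
        self._counter = itertools.count(1)  # monotonically increasing
        self._last = None

    def new_request(self, path):
        self._last = next(self._counter)
        return "%s/%d" % (path, self._last)

    def retransmit(self, path):
        if self._last is None:
            raise RuntimeError("nothing to retransmit")
        return "%s/%d" % (path, self._last)
```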
Receiving live updates on a client
----------------------------------
- C-S longpoll event stream
- Concept of start/end tokens.
- Mention /initialSync to get token.

Rooms
=====
- How are they created? PDU anchor point: "root of the tree".
- Adding / removing aliases.
- Invite/join dance
- State and non-state data (+extensibility)

TODO : Room permissions / config / power levels.
Messages
========

This specification outlines several standard event types, all of which are
prefixed with ``m.``

State messages
--------------
- m.room.name
- m.room.topic
- m.room.member
- m.room.config
- m.room.invite_join

What are they, when are they used, what do they contain, how should they be
used.

Non-state messages
------------------
- m.room.message
- m.room.message.feedback (and compressed format)

What are they, when are they used, what do they contain, how should they be
used.
m.room.message msgtypes
-----------------------
Each ``m.room.message`` MUST have a ``msgtype`` key which identifies the type
of message being sent. Each type has its own required and optional keys, as
outlined below:

``m.text``
  Required keys:
    - ``body`` : "string" - The body of the message.
  Optional keys:
    None.
  Example:
    ``{ "msgtype": "m.text", "body": "I am a fish" }``

``m.emote``
  Required keys:
    - ``body`` : "string" - The emote action to perform.
  Optional keys:
    None.
  Example:
    ``{ "msgtype": "m.emote", "body": "tries to come up with a witty explanation" }``

``m.image``
  Required keys:
    - ``url`` : "string" - The URL to the image.
  Optional keys:
    - ``info`` : JSON object (ImageInfo) - The image info for the image
      referred to in ``url``.
    - ``thumbnail_url`` : "string" - The URL to the thumbnail.
    - ``thumbnail_info`` : JSON object (ImageInfo) - The image info for the
      image referred to in ``thumbnail_url``.
    - ``body`` : "string" - The alt text of the image, or some kind of content
      description for accessibility, e.g. "image attachment".

  ImageInfo:
    Information about an image::

      {
        "size" : integer (size of image in bytes),
        "w" : integer (width of image in pixels),
        "h" : integer (height of image in pixels),
        "mimetype" : "string (e.g. image/jpeg)"
      }

``m.audio``
  Required keys:
    - ``url`` : "string" - The URL to the audio.
  Optional keys:
    - ``info`` : JSON object (AudioInfo) - The audio info for the audio
      referred to in ``url``.
    - ``body`` : "string" - A description of the audio, e.g. "Bee Gees -
      Stayin' Alive", or some kind of content description for accessibility,
      e.g. "audio attachment".

  AudioInfo:
    Information about a piece of audio::

      {
        "mimetype" : "string (e.g. audio/aac)",
        "size" : integer (size of audio in bytes),
        "duration" : integer (duration of audio in milliseconds)
      }

``m.video``
  Required keys:
    - ``url`` : "string" - The URL to the video.
  Optional keys:
    - ``info`` : JSON object (VideoInfo) - The video info for the video
      referred to in ``url``.
    - ``body`` : "string" - A description of the video, e.g. "Gangnam style",
      or some kind of content description for accessibility, e.g. "video
      attachment".

  VideoInfo:
    Information about a video::

      {
        "mimetype" : "string (e.g. video/mp4)",
        "size" : integer (size of video in bytes),
        "duration" : integer (duration of video in milliseconds),
        "w" : integer (width of video in pixels),
        "h" : integer (height of video in pixels),
        "thumbnail_url" : "string (URL to image)",
        "thumbnail_info" : JSON object (ImageInfo)
      }

``m.location``
  Required keys:
    - ``geo_uri`` : "string" - The geo URI representing the location.
  Optional keys:
    - ``thumbnail_url`` : "string" - The URL to a thumbnail of the location
      being represented.
    - ``thumbnail_info`` : JSON object (ImageInfo) - The image info for the
      image referred to in ``thumbnail_url``.
    - ``body`` : "string" - A description of the location, e.g. "Big Ben,
      London, UK", or some kind of content description for accessibility,
      e.g. "location attachment".

The following keys can be attached to any ``m.room.message``:

  Optional keys:
    - ``sender_ts`` : integer - A timestamp (ms resolution) representing the
      wall-clock time when the message was sent from the client.
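The required-key rules above lend themselves to a small validation table. A sketch — the helper and table are illustrative, not part of the spec:

```python
# Required content keys per msgtype, as listed above.
REQUIRED_KEYS = {
    "m.text": ("body",),
    "m.emote": ("body",),
    "m.image": ("url",),
    "m.audio": ("url",),
    "m.video": ("url",),
    "m.location": ("geo_uri",),
}

def validate_message_content(content):
    """Check an m.room.message content object against the table above."""
    msgtype = content.get("msgtype")
    if msgtype not in REQUIRED_KEYS:
        raise ValueError("unknown msgtype: %r" % (msgtype,))
    missing = [key for key in REQUIRED_KEYS[msgtype] if key not in content]
    if missing:
        raise ValueError("%s missing required keys: %s" % (msgtype, missing))
    return content
```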
Presence
========

Each user has the concept of presence information. This encodes the
"availability" of that user, suitable for display on other users' clients.
This is transmitted as an ``m.presence`` event and is one of the few events
which are sent *outside the context of a room*. The basic piece of presence
information is represented by the ``state`` key, which is an enum of one of
the following:

- ``online`` : The default state when the user is connected to an event
  stream.
- ``unavailable`` : The user is not reachable at this time.
- ``offline`` : The user is not connected to an event stream.
- ``free_for_chat`` : The user is generally willing to receive messages,
  more so than default.
- ``hidden`` : TODO. Behaves as offline, but allows the user to see the
  client state anyway and generally interact with client features.

This basic ``state`` field applies to the user as a whole, regardless of how
many client devices they have connected. The home server should synchronise
this status choice among multiple devices to ensure the user gets a
consistent experience.

Idle Time
---------
As well as the basic ``state`` field, the presence information can also show
a sense of an "idle timer". This should be maintained individually by the
user's clients, and the home server can take the highest reported time as
that to report. When a user is offline, the home server can still report when
the user was last seen online.

Transmission
------------
- Transmitted as an EDU.
- Presence lists determine who to send to.

Presence List
-------------
Each user's home server stores a "presence list" for that user. This stores a
list of other user IDs the user has chosen to add to it. To be added to this
list, the user being added must receive permission from the list owner. Once
granted, both users' HS(es) store this information. Since such subscriptions
are likely to be bidirectional, HSes may wish to automatically accept
requests when a reverse subscription already exists.

Presence and Permissions
------------------------
For a viewing user to be allowed to see the presence information of a target
user, either:

- The target user has allowed the viewing user to add them to their presence
  list, or
- The two users share at least one room in common

In the latter case, this allows for clients to display some minimal sense of
presence information in a user list for a room.
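The two visibility rules above can be sketched as a pure function. This is illustrative only; the data structures are assumptions, not spec types:

```python
def may_see_presence(viewer, target, presence_lists, rooms):
    """presence_lists maps a user to the set of users on their presence
    list; rooms maps a room ID to its set of members. The viewer may see
    the target's presence if the target has admitted them to their
    presence list, or the two share at least one room."""
    if viewer in presence_lists.get(target, set()):
        return True
    return any(viewer in members and target in members
               for members in rooms.values())
```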
Typing notifications
====================

TODO : Leo

Voice over IP
=============

TODO : Dave

Profiles
========

Internally within Matrix users are referred to by their user ID, which is not
a human-friendly string. Profiles grant users the ability to see
human-readable names for other users that are in some way meaningful to them.
Additionally, profiles can publish additional information, such as the user's
age or location.

A Profile consists of a display name, an avatar picture, and a set of other
metadata fields that the user may wish to publish (email address, phone
numbers, website URLs, etc...). This specification puts no requirements on
the display name other than it being a valid unicode string.

- Metadata extensibility
- Bundled with which events? e.g. m.room.member
- Generate own events? What type?
Registration and login
======================

Clients must register with a home server in order to use Matrix. After
registering, the client will be given an access token which must be used in
ALL requests to that home server as a query parameter 'access_token'.

- TODO Kegan : Make registration like login (just omit the "user" key on the
  initial request?)

If the client has already registered, they need to be able to login to their
account. The home server may provide many different ways of logging in, such
as user/password auth, login via a social network (OAuth2), login by
confirming a token sent to their email address, etc. This specification does
not define how home servers should authorise their users who want to login to
their existing accounts, but instead defines the standard interface which
implementations should follow so that ANY client can login to ANY home
server.

The login process breaks down into the following:
  1. Determine the requirements for logging in.
  2. Submit the login stage credentials.
  3. Get credentials or be told the next stage in the login process and
     repeat step 2.

As each home server may have different ways of logging in, the client needs
to know how they should login. All distinct login stages MUST have a
corresponding ``type``. A ``type`` is a namespaced string which details the
mechanism for logging in.

A client may be able to login via multiple valid login flows, and should
choose a single flow when logging in. A flow is a series of login stages. The
home server MUST respond with all the valid login flows when requested. In
the example below, the client can login via 3 paths: 1a and 1b, 2a and 2b, or
3. The client should select one of these paths::

  {
    "flows": [
      {
        "type": "<login type1a>",
        "stages": [ "<login type 1a>", "<login type 1b>" ]
      },
      {
        "type": "<login type2a>",
        "stages": [ "<login type 2a>", "<login type 2b>" ]
      },
      {
        "type": "<login type3>"
      }
    ]
  }

After the login is completed, the client's fully-qualified user ID and a new
access token MUST be returned::

  {
    "user_id": "@user:matrix.org",
    "access_token": "abcdef0123456789"
  }

The ``user_id`` key is particularly useful if the home server wishes to
support localpart entry of usernames (e.g. "user" rather than
"@user:matrix.org"), as the client may not be able to determine its
``user_id`` in this case.

If a login has multiple requests, the home server may wish to create a
session. If a home server responds with a 'session' key to a request, clients
MUST submit it in subsequent requests until the login is completed::

  {
    "session": "<session id>"
  }

This specification defines the following login types:
- ``m.login.password``
- ``m.login.oauth2``
- ``m.login.email.code``
- ``m.login.email.url``
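A client choosing among the advertised flows might filter them by the stage types it supports. A sketch under the response shape above (the helper name is invented):

```python
def choose_flow(flows, supported_types):
    """Return the stage list of the first flow whose every stage the
    client can handle, or None if no advertised flow is usable. A flow
    without a "stages" key is a single-stage flow of its "type"."""
    for flow in flows:
        stages = flow.get("stages", [flow["type"]])
        if all(stage in supported_types for stage in stages):
            return stages
    return None  # caller should fall back to the login fallback page
```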
Password-based
--------------
:Type:
  m.login.password
:Description:
  Login is supported via a username and password.

To respond to this type, reply with::

  {
    "type": "m.login.password",
    "user": "<user_id or user localpart>",
    "password": "<password>"
  }

The home server MUST respond with either new credentials, the next stage of
the login process, or a standard error response.
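Since every stage ends in one of three outcomes (new credentials, a further stage, or a standard error response), a client can classify the response body. A sketch; the function name is illustrative:

```python
def interpret_login_response(body):
    """Classify a login-stage response into one of the three outcomes the
    spec allows: completed credentials, a further stage, or an error."""
    if "access_token" in body and "user_id" in body:
        return ("complete", body["user_id"])
    if "next" in body:
        return ("next_stage", body["next"])
    if "errcode" in body:
        return ("error", body["errcode"])
    raise ValueError("unrecognised login response")
```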
OAuth2-based
------------
:Type:
  m.login.oauth2
:Description:
  Login is supported via OAuth2 URLs. This login consists of multiple
  requests.

To respond to this type, reply with::

  {
    "type": "m.login.oauth2",
    "user": "<user_id or user localpart>"
  }

The server MUST respond with::

  {
    "uri": <Authorization Request URI OR service selection URI>
  }

The home server acts as a 'confidential' client for the purposes of OAuth2.
If the uri is a ``service selection URI``, it MUST point to a webpage which
prompts the user to choose which service to authorize with. On selection of a
service, this MUST link through to an ``Authorization Request URI``. If there
is only 1 service which the home server accepts when logging in, this
indirection can be skipped and the "uri" key can be the ``Authorization
Request URI``.

The client then visits the ``Authorization Request URI``, which then shows
the OAuth2 Allow/Deny prompt. Hitting 'Allow' returns the ``redirect URI``
with the auth code. Home servers can choose any path for the ``redirect
URI``. The client should visit the ``redirect URI``, which will then finish
the OAuth2 login process, granting the home server an access token for the
chosen service. When the home server gets this access token, it verifies
that the client has authorised with the 3rd party, and can now complete the
login. The OAuth2 ``redirect URI`` (with auth code) MUST respond with either
new credentials, the next stage of the login process, or a standard error
response.

For example, if a home server accepts OAuth2 from Google, it would return the
Authorization Request URI for Google::

  {
    "uri": "https://accounts.google.com/o/oauth2/auth?response_type=code&
    client_id=CLIENT_ID&redirect_uri=REDIRECT_URI&scope=photos"
  }

The client then visits this URI and authorizes the home server. The client
then visits the REDIRECT_URI with the auth code= query parameter which
returns::

  {
    "user_id": "@user:matrix.org",
    "access_token": "0123456789abcdef"
  }
Email-based (code)
------------------
:Type:
  m.login.email.code
:Description:
  Login is supported by typing in a code which is sent in an email. This
  login consists of multiple requests.

To respond to this type, reply with::

  {
    "type": "m.login.email.code",
    "user": "<user_id or user localpart>",
    "email": "<email address>"
  }

After validating the email address, the home server MUST send an email
containing an authentication code and return::

  {
    "type": "m.login.email.code",
    "session": "<session id>"
  }

The second request in this login stage involves sending this authentication
code::

  {
    "type": "m.login.email.code",
    "session": "<session id>",
    "code": "<code in email sent>"
  }

The home server MUST respond to this with either new credentials, the next
stage of the login process, or a standard error response.

Email-based (url)
-----------------
:Type:
  m.login.email.url
:Description:
  Login is supported by clicking on a URL in an email. This login consists
  of multiple requests.

To respond to this type, reply with::

  {
    "type": "m.login.email.url",
    "user": "<user_id or user localpart>",
    "email": "<email address>"
  }

After validating the email address, the home server MUST send an email
containing an authentication URL and return::

  {
    "type": "m.login.email.url",
    "session": "<session id>"
  }

The email contains a URL which must be clicked. After it has been clicked,
the client should perform another request::

  {
    "type": "m.login.email.url",
    "session": "<session id>"
  }

The home server MUST respond to this with either new credentials, the next
stage of the login process, or a standard error response.

A common client implementation will be to periodically poll until the link is
clicked. If the link has not been visited yet, a standard error response with
an errcode of ``M_LOGIN_EMAIL_URL_NOT_YET`` should be returned.
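The polling pattern described above might look like this on the client, where ``submit`` stands in for the HTTP request. This is a sketch, not spec-mandated behaviour:

```python
def poll_until_clicked(submit, session_id, max_attempts=10):
    """Re-submit the m.login.email.url request until the link has been
    clicked. M_LOGIN_EMAIL_URL_NOT_YET means "keep polling"; anything
    else (credentials, next stage, other error) is returned as-is."""
    request = {"type": "m.login.email.url", "session": session_id}
    for _ in range(max_attempts):
        response = submit(request)
        if response.get("errcode") != "M_LOGIN_EMAIL_URL_NOT_YET":
            return response
    return None  # gave up; the caller may retry later
```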
N-Factor Authentication
-----------------------
Multiple login stages can be combined to create N-factor authentication
during login.

This can be achieved by responding with the ``next`` login type on completion
of a previous login stage::

  {
    "next": "<next login type>"
  }

If a home server implements N-factor authentication, it MUST respond with all
``stages`` when initially queried for their login requirements::

  {
    "type": "<1st login type>",
    "stages": [ <1st login type>, <2nd login type>, ... , <Nth login type> ]
  }

This can be represented conceptually as::

   _______________________
  |    Login Stage 1      |
  | type: "<login type1>" |
  |  ___________________  |
  | |_Request_1_________| | <-- Returns "session" key which is used throughout.
  |  ___________________  |
  | |_Request_2_________| | <-- Returns a "next" value of "login type2"
  |_______________________|
            |
            |
   _________V_____________
  |    Login Stage 2      |
  | type: "<login type2>" |
  |  ___________________  |
  | |_Request_1_________| |
  |  ___________________  |
  | |_Request_2_________| |
  |  ___________________  |
  | |_Request_3_________| | <-- Returns a "next" value of "login type3"
  |_______________________|
            |
            |
   _________V_____________
  |    Login Stage 3      |
  | type: "<login type3>" |
  |  ___________________  |
  | |_Request_1_________| | <-- Returns user credentials
  |_______________________|
Fallback
--------

Clients cannot be expected to know how to process every single login type. If
a client determines it does not know how to handle a given login type, it
should request a login fallback page::

  GET matrix/client/api/v1/login/fallback

This MUST return an HTML page which can perform the entire login process.
Identity
========

TODO : Dave
- 3PIDs and identity server, functions
Federation
==========

Federation is the term used to describe how to communicate between Matrix home
servers. Federation is a mechanism by which two home servers can exchange
Matrix event messages, both as a real-time push of current events, and as a
historic fetching mechanism to synchronise past history for clients to view.
It uses HTTP connections between each pair of servers involved as the
underlying transport. Messages are exchanged between servers in real-time by
active pushing from each server's HTTP client into the server of the other.
Queries to fetch historic data for the purpose of back-filling scrollback
buffers and the like can also be performed.

There are three main kinds of communication that occur between home servers:
:Queries:
  These are single request/response interactions between a given pair of
  servers, initiated by one side sending an HTTP GET request to obtain some
  information, and responded to by the other. They are not persisted and
  contain no long-term significant history. They simply request a snapshot
  state at the instant the query is made.

:Ephemeral Data Units (EDUs):
  These are notifications of events that are pushed from one home server to
  another. They are not persisted and contain no long-term significant
  history, nor does the receiving home server have to reply to them.

:Persisted Data Units (PDUs):
  These are notifications of events that are broadcast from one home server to
  any others that are interested in the same "context" (namely, a Room ID).
  They are persisted to long-term storage and form the record of history for
  that context.
EDUs and PDUs are further wrapped in an envelope called a Transaction, which
is transferred from the origin to the destination home server using an HTTP
PUT request.
Transactions
------------

The transfer of EDUs and PDUs between home servers is performed by an exchange
of Transaction messages, which are encoded as JSON objects, passed over an
HTTP PUT request. A Transaction is meaningful only to the pair of home servers
that exchanged it; they are not globally-meaningful.

Each transaction has:

- An opaque transaction ID.
- A timestamp (UNIX epoch time in milliseconds) generated by its origin
  server.
- An origin and destination server name.
- A list of "previous IDs".
- A list of PDUs and EDUs - the actual message payload that the Transaction
  carries.

::

  {
    "transaction_id":"916d630ea616342b42e98a3be0b74113",
    "ts":1404835423000,
    "origin":"red",
    "destination":"blue",
    "prev_ids":["e1da392e61898be4d2009b9fecce5325"],
    "pdus":[...],
    "edus":[...]
  }
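The envelope fields above can be checked mechanically. The following validator is an illustrative sketch, not part of any server implementation; the required-key set is taken from the field list above.

```python
# Minimal validation of the Transaction envelope described above.
# "edus" is deliberately not required: it may be absent entirely.

REQUIRED_KEYS = {
    "transaction_id", "ts", "origin", "destination", "prev_ids", "pdus",
}

def validate_transaction(txn):
    """Raise ValueError if a Transaction dict is missing required fields."""
    missing = REQUIRED_KEYS - set(txn)
    if missing:
        raise ValueError("missing keys: %s" % sorted(missing))
    if not isinstance(txn["ts"], int):
        raise ValueError("ts must be UNIX epoch time in milliseconds")
    return True

txn = {
    "transaction_id": "916d630ea616342b42e98a3be0b74113",
    "ts": 1404835423000,
    "origin": "red",
    "destination": "blue",
    "prev_ids": ["e1da392e61898be4d2009b9fecce5325"],
    "pdus": [],  # an "empty" transaction is legal
}
ok = validate_transaction(txn)
```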
The ``prev_ids`` field contains a list of previous transaction IDs that the
``origin`` server has sent to this ``destination``. Its purpose is to act as a
sequence checking mechanism - the destination server can check whether it has
successfully received that Transaction, or ask for a retransmission if not.

The ``pdus`` field of a transaction is a list, containing zero or more
PDUs.[*] Each PDU is itself a JSON object containing a number of keys, the
exact details of which will vary depending on the type of PDU. Similarly, the
``edus`` field is another list containing the EDUs. This key may be entirely
absent if there are no EDUs to transfer.

(* Normally the PDU list will be non-empty, but the server should cope with
receiving an "empty" transaction, as this is useful for informing peers of
other transaction IDs they should be aware of. This effectively acts as a push
mechanism to encourage peers to continue to replicate content.)
PDUs and EDUs
-------------

All PDUs have:

- An ID
- A context
- A declaration of their type
- A list of other PDU IDs that have been seen recently on that context
  (regardless of which origin sent them)

[[TODO(paul): Update this structure so that 'pdu_id' is a two-element
[origin,ref] pair like the prev_pdus are]]

::

  {
    "pdu_id":"a4ecee13e2accdadf56c1025af232176",
    "context":"#example.green",
    "origin":"green",
    "ts":1404838188000,
    "pdu_type":"m.text",
    "prev_pdus":[["blue","99d16afbc857975916f1d73e49e52b65"]],
    "content":...
    "is_state":false
  }
In contrast to Transactions, it is important to note that the ``prev_pdus``
field of a PDU refers to PDUs that any origin server has sent, rather than
previous IDs that this ``origin`` has sent. This list may refer to other PDUs
sent by the same origin as the current one, or by other origins.

Because of the distributed nature of participants in a Matrix conversation, it
is impossible to establish a globally-consistent total ordering on the events.
However, by annotating each outbound PDU at its origin with IDs of other PDUs
it has received, a partial ordering can be constructed, allowing causality
relationships to be preserved. A client can then display these messages to the
end-user in some order consistent with their content and ensure that no
message that is semantically in reply to an earlier one is ever displayed
before it.
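One way to recover such a display order is a plain topological sort over the ``prev_pdus`` references. The sketch below is illustrative (it ignores origins and missing references) and is not the algorithm any particular server uses:

```python
# Emit each PDU only after the prev_pdus it references have been emitted,
# yielding an order consistent with the causal partial order.

def causal_order(pdus):
    """pdus: dict mapping pdu_id -> list of referenced prev pdu_ids.

    Returns pdu_ids in an order where every reference precedes its referrer.
    """
    emitted, order = set(), []

    def visit(pdu_id):
        # Marking before recursing also guards against reference cycles.
        if pdu_id in emitted or pdu_id not in pdus:
            return
        emitted.add(pdu_id)
        for prev in pdus[pdu_id]:
            visit(prev)
        order.append(pdu_id)

    for pdu_id in pdus:
        visit(pdu_id)
    return order

# "c" replies to "b", which replies to "a".
pdus = {"c": ["b"], "b": ["a"], "a": []}
order = causal_order(pdus)
```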
PDUs fall into two main categories: those that deliver Events, and those that
synchronise State. For PDUs that relate to State synchronisation, additional
keys exist to support this::

  {...,
   "is_state":true,
   "state_key":TODO
   "power_level":TODO
   "prev_state_id":TODO
   "prev_state_origin":TODO}
[[TODO(paul): At this point we should probably have a long description of how
State management works, with descriptions of clobbering rules, power levels,
etc etc... But some of that detail is rather up-in-the-air, on the whiteboard,
and so on. This part needs refining. And writing in its own document as the
details relate to the server/system as a whole, not specifically to
server-server federation.]]
EDUs, by comparison to PDUs, do not have an ID, a context, or a list of
"previous" IDs. The only mandatory fields for these are the type, origin and
destination home server names, and the actual nested content.

::

  {"edu_type":"m.presence",
   "origin":"blue",
   "destination":"orange",
   "content":...}
Backfilling
-----------

- What it is, when is it used, how is it done

SRV Records
-----------

- Why it is needed
Security
========

- rate limiting
- crypto (s-s auth)
- E2E
- Lawful intercept + Key Escrow

TODO Mark

Policy Servers
==============

TODO

Content repository
==================

- thumbnail paths

Address book repository
=======================

- format
Glossary
========

- domain specific words/acronyms with definitions

User ID:
  An opaque ID which identifies an end-user, which consists of some opaque
  localpart combined with the domain name of their home server.
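A user ID can be split back into those two parts with a one-line helper. This sketch assumes the ``@localpart:domain`` shape used in examples such as ``@bob:example.com``; the helper name is illustrative:

```python
# Split a user ID of the assumed form "@localpart:domain" into its parts.

def parse_user_id(user_id):
    # Drop the leading "@" sigil, then split on the first ":" only,
    # since the domain may itself contain further colons (e.g. a port).
    localpart, domain = user_id[1:].split(":", 1)
    return localpart, domain

localpart, domain = parse_user_id("@alice:example.com")
```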
@@ -541,7 +541,10 @@ class _TransactionQueue(object):
         )

         def eb(failure):
-            deferred.errback(failure)
+            if not deferred.called:
+                deferred.errback(failure)
+            else:
+                logger.exception("Failed to send edu", failure)
         self._attempt_new_transaction(destination).addErrback(eb)

         return deferred
@@ -35,7 +35,7 @@ class BaseRoomHandler(BaseHandler):
                            extra_users=[]):
         snapshot.fill_out_prev_events(event)

-        store_id = yield self.store.persist_event(event)
+        yield self.store.persist_event(event)

         destinations = set(extra_destinations)
         # Send a PDU to all hosts who have joined the room.
@@ -16,6 +16,7 @@
 from twisted.internet import defer

 from synapse.api.events import SynapseEvent
+from synapse.util.logutils import log_function

 from ._base import BaseHandler
@@ -44,6 +45,7 @@ class EventStreamHandler(BaseHandler):
         self.notifier = hs.get_notifier()

     @defer.inlineCallbacks
+    @log_function
     def get_stream(self, auth_user_id, pagin_config, timeout=0):
         auth_user = self.hs.parse_userid(auth_user_id)
@@ -90,13 +92,15 @@ class EventStreamHandler(BaseHandler):
             # 10 seconds of grace to allow the client to reconnect again
             # before we think they're gone
             def _later():
+                logger.debug("_later stopped_user_eventstream %s", auth_user)
                 self.distributor.fire(
                     "stopped_user_eventstream", auth_user
                 )
                 del self._stop_timer_per_user[auth_user]

+            logger.debug("Scheduling _later: for %s", auth_user)
             self._stop_timer_per_user[auth_user] = (
-                self.clock.call_later(5, _later)
+                self.clock.call_later(30, _later)
             )
@@ -22,8 +22,6 @@ from synapse.api.constants import Membership
 from synapse.util.logutils import log_function
 from synapse.federation.pdu_codec import PduCodec

-from synapse.api.errors import AuthError
-
 from twisted.internet import defer

 import logging
@@ -86,12 +84,6 @@ class FederationHandler(BaseHandler):

         yield self.replication_layer.send_pdu(pdu)

-    @log_function
-    def get_state_for_room(self, destination, room_id):
-        return self.replication_layer.get_state_for_context(
-            destination, room_id
-        )
-
     @log_function
     @defer.inlineCallbacks
     def on_receive_pdu(self, pdu, backfilled):
@@ -141,19 +133,19 @@ class FederationHandler(BaseHandler):

             yield self.hs.get_handlers().room_member_handler.change_membership(
                 new_event,
-                True
+                do_auth=True
             )

         else:
             with (yield self.room_lock.lock(event.room_id)):
-                store_id = yield self.store.persist_event(event, backfilled)
+                yield self.store.persist_event(event, backfilled)

             room = yield self.store.get_room(event.room_id)

             if not room:
                 # Huh, let's try and get the current state
                 try:
-                    yield self.get_state_for_room(
+                    yield self.replication_layer.get_state_for_context(
                         event.origin, event.room_id
                     )
@@ -163,9 +155,9 @@ class FederationHandler(BaseHandler):
                     if self.hs.hostname in hosts:
                         try:
                             yield self.store.store_room(
-                                event.room_id,
-                                "",
-                                is_public=False
+                                room_id=event.room_id,
+                                room_creator_user_id="",
+                                is_public=False,
                             )
                         except:
                             pass
@@ -188,27 +180,14 @@ class FederationHandler(BaseHandler):
     @log_function
     @defer.inlineCallbacks
     def backfill(self, dest, room_id, limit):
-        events = yield self._backfill(dest, room_id, limit)
-
-        for event in events:
-            try:
-                yield self.store.persist_event(event, backfilled=True)
-            except:
-                logger.exception("Failed to persist event: %s", event)
-
-        defer.returnValue(events)
-
-    @defer.inlineCallbacks
-    def _backfill(self, dest, room_id, limit):
         pdus = yield self.replication_layer.backfill(dest, room_id, limit)

-        if not pdus:
-            defer.returnValue([])
+        events = []

-        events = [
-            self.pdu_codec.event_from_pdu(pdu)
-            for pdu in pdus
-        ]
+        for pdu in pdus:
+            event = self.pdu_codec.event_from_pdu(pdu)
+            events.append(event)
+            yield self.store.persist_event(event, backfilled=True)

         defer.returnValue(events)
@@ -224,7 +203,9 @@ class FederationHandler(BaseHandler):

         # First get current state to see if we are already joined.
         try:
-            yield self.get_state_for_room(target_host, room_id)
+            yield self.replication_layer.get_state_for_context(
+                target_host, room_id
+            )

             hosts = yield self.store.get_joined_hosts_for_room(room_id)
             if self.hs.hostname in hosts:
@@ -254,8 +235,8 @@ class FederationHandler(BaseHandler):

         try:
             yield self.store.store_room(
-                room_id,
-                "",
+                room_id=room_id,
+                room_creator_user_id="",
                 is_public=False
             )
         except:
@@ -277,10 +277,13 @@ class MessageHandler(BaseRoomHandler):
             end_token=now_token.events_key,
         )

+        start_token = now_token.copy_and_replace("events_key", token[0])
+        end_token = now_token.copy_and_replace("events_key", token[1])
+
         d["messages"] = {
             "chunk": [m.get_dict() for m in messages],
-            "start": token[0],
-            "end": token[1],
+            "start": start_token.to_string(),
+            "end": end_token.to_string(),
         }

         current_state = yield self.store.get_current_state(
@@ -18,6 +18,8 @@ from twisted.internet import defer
 from synapse.api.errors import SynapseError, AuthError
 from synapse.api.constants import PresenceState

+from synapse.util.logutils import log_function
+
 from ._base import BaseHandler

 import logging
@@ -142,7 +144,7 @@ class PresenceHandler(BaseHandler):
     @defer.inlineCallbacks
     def is_presence_visible(self, observer_user, observed_user):
         defer.returnValue(True)
-        return
+        # return
         # FIXME (erikj): This code path absolutely kills the database.

         assert(observed_user.is_mine)
@@ -188,8 +190,9 @@ class PresenceHandler(BaseHandler):
         defer.returnValue(state)

     @defer.inlineCallbacks
+    @log_function
     def set_state(self, target_user, auth_user, state):
-        return
+        # return
         # TODO (erikj): Turn this back on. Why did we end up sending EDUs
         # everywhere?
@@ -245,33 +248,42 @@ class PresenceHandler(BaseHandler):

         self.push_presence(user, statuscache=statuscache)

+    @log_function
     def started_user_eventstream(self, user):
         # TODO(paul): Use "last online" state
         self.set_state(user, user, {"state": PresenceState.ONLINE})

+    @log_function
     def stopped_user_eventstream(self, user):
         # TODO(paul): Save current state as "last online" state
         self.set_state(user, user, {"state": PresenceState.OFFLINE})

     @defer.inlineCallbacks
     def user_joined_room(self, user, room_id):
-        localusers = set()
-        remotedomains = set()
-
-        rm_handler = self.homeserver.get_handlers().room_member_handler
-        yield rm_handler.fetch_room_distributions_into(room_id,
-            localusers=localusers, remotedomains=remotedomains,
-            ignore_user=user)
-
         if user.is_mine:
-            yield self._send_presence_to_distribution(srcuser=user,
-                localusers=localusers, remotedomains=remotedomains,
+            self.push_update_to_local_and_remote(
+                observed_user=user,
+                room_ids=[room_id],
                 statuscache=self._get_or_offline_usercache(user),
             )
-
-        for srcuser in localusers:
-            yield self._send_presence(srcuser=srcuser, destuser=user,
-                statuscache=self._get_or_offline_usercache(srcuser),
+        else:
+            self.push_update_to_clients(
+                observed_user=user,
+                room_ids=[room_id],
+                statuscache=self._get_or_offline_usercache(user),
+            )
+
+        # We also want to tell them about current presence of people.
+        rm_handler = self.homeserver.get_handlers().room_member_handler
+        curr_users = yield rm_handler.get_room_members(room_id)
+
+        for local_user in [c for c in curr_users if c.is_mine]:
+            self.push_update_to_local_and_remote(
+                observed_user=local_user,
+                users_to_push=[user],
+                statuscache=self._get_or_offline_usercache(local_user),
             )

     @defer.inlineCallbacks
@@ -382,11 +394,13 @@ class PresenceHandler(BaseHandler):
         defer.returnValue(presence)

     @defer.inlineCallbacks
+    @log_function
     def start_polling_presence(self, user, target_user=None, state=None):
         logger.debug("Start polling for presence from %s", user)

         if target_user:
             target_users = set([target_user])
+            room_ids = []
         else:
             presence = yield self.store.get_presence_list(
                 user.localpart, accepted=True
@@ -400,23 +414,37 @@ class PresenceHandler(BaseHandler):
             rm_handler = self.homeserver.get_handlers().room_member_handler
             room_ids = yield rm_handler.get_rooms_for_user(user)

-            for room_id in room_ids:
-                for member in (yield rm_handler.get_room_members(room_id)):
-                    target_users.add(member)
-
         if state is None:
             state = yield self.store.get_presence_state(user.localpart)
+        else:
+            # statuscache = self._get_or_make_usercache(user)
+            # self._user_cachemap_latest_serial += 1
+            # statuscache.update(state, self._user_cachemap_latest_serial)
+            pass

-        localusers, remoteusers = partitionbool(
-            target_users,
-            lambda u: u.is_mine
+        yield self.push_update_to_local_and_remote(
+            observed_user=user,
+            users_to_push=target_users,
+            room_ids=room_ids,
+            statuscache=self._get_or_make_usercache(user),
         )

-        for target_user in localusers:
-            self._start_polling_local(user, target_user)
+        for target_user in target_users:
+            if target_user.is_mine:
+                self._start_polling_local(user, target_user)
+
+                # We want to tell the person that just came online
+                # presence state of people they are interested in?
+                self.push_update_to_clients(
+                    observed_user=target_user,
+                    users_to_push=[user],
+                    statuscache=self._get_or_offline_usercache(target_user),
+                )

         deferreds = []
-        remoteusers_by_domain = partition(remoteusers, lambda u: u.domain)
+        remote_users = [u for u in target_users if not u.is_mine]
+        remoteusers_by_domain = partition(remote_users, lambda u: u.domain)
+        # Only poll for people in our get_presence_list
         for domain in remoteusers_by_domain:
             remoteusers = remoteusers_by_domain[domain]
@@ -438,25 +466,26 @@ class PresenceHandler(BaseHandler):

         self._local_pushmap[target_localpart].add(user)

-        self.push_update_to_clients(
-            observer_user=user,
-            observed_user=target_user,
-            statuscache=self._get_or_offline_usercache(target_user),
-        )
-
     def _start_polling_remote(self, user, domain, remoteusers):
+        to_poll = set()
+
         for u in remoteusers:
             if u not in self._remote_recvmap:
                 self._remote_recvmap[u] = set()
+                to_poll.add(u)

             self._remote_recvmap[u].add(user)

+        if not to_poll:
+            return defer.succeed(None)
+
         return self.federation.send_edu(
             destination=domain,
             edu_type="m.presence",
-            content={"poll": [u.to_string() for u in remoteusers]}
+            content={"poll": [u.to_string() for u in to_poll]}
         )

+    @log_function
     def stop_polling_presence(self, user, target_user=None):
         logger.debug("Stop polling for presence from %s", user)
@@ -496,20 +525,28 @@ class PresenceHandler(BaseHandler):
         if not self._local_pushmap[localpart]:
             del self._local_pushmap[localpart]

+    @log_function
     def _stop_polling_remote(self, user, domain, remoteusers):
+        to_unpoll = set()
+
         for u in remoteusers:
             self._remote_recvmap[u].remove(user)

             if not self._remote_recvmap[u]:
                 del self._remote_recvmap[u]
+                to_unpoll.add(u)
+
+        if not to_unpoll:
+            return defer.succeed(None)

         return self.federation.send_edu(
             destination=domain,
             edu_type="m.presence",
-            content={"unpoll": [u.to_string() for u in remoteusers]}
+            content={"unpoll": [u.to_string() for u in to_unpoll]}
         )

     @defer.inlineCallbacks
+    @log_function
     def push_presence(self, user, statuscache):
         assert(user.is_mine)
@@ -525,53 +562,17 @@ class PresenceHandler(BaseHandler):
         rm_handler = self.homeserver.get_handlers().room_member_handler
         room_ids = yield rm_handler.get_rooms_for_user(user)

-        for room_id in room_ids:
-            yield rm_handler.fetch_room_distributions_into(
-                room_id, localusers=localusers, remotedomains=remotedomains,
-                ignore_user=user,
-            )
-
-        if not localusers and not remotedomains:
+        if not localusers and not room_ids:
             defer.returnValue(None)

-        yield self._send_presence_to_distribution(user,
-            localusers=localusers, remotedomains=remotedomains,
-            statuscache=statuscache
+        yield self.push_update_to_local_and_remote(
+            observed_user=user,
+            users_to_push=localusers,
+            remote_domains=remotedomains,
+            room_ids=room_ids,
+            statuscache=statuscache,
         )

-    def _send_presence(self, srcuser, destuser, statuscache):
-        if destuser.is_mine:
-            self.push_update_to_clients(
-                observer_user=destuser,
-                observed_user=srcuser,
-                statuscache=statuscache)
-            return defer.succeed(None)
-        else:
-            return self._push_presence_remote(srcuser, destuser.domain,
-                state=statuscache.get_state()
-            )
-
-    @defer.inlineCallbacks
-    def _send_presence_to_distribution(self, srcuser, localusers=set(),
-            remotedomains=set(), statuscache=None):
-
-        for u in localusers:
-            logger.debug(" | push to local user %s", u)
-            self.push_update_to_clients(
-                observer_user=u,
-                observed_user=srcuser,
-                statuscache=statuscache,
-            )
-
-        deferreds = []
-        for domain in remotedomains:
-            logger.debug(" | push to remote domain %s", domain)
-            deferreds.append(self._push_presence_remote(srcuser, domain,
-                state=statuscache.get_state())
-            )
-
-        yield defer.DeferredList(deferreds)
-
     @defer.inlineCallbacks
     def _push_presence_remote(self, user, destination, state=None):
         if state is None:
@@ -587,12 +588,17 @@ class PresenceHandler(BaseHandler):
                 self.clock.time_msec() - state.pop("mtime")
             )

+        user_state = {
+            "user_id": user.to_string(),
+        }
+        user_state.update(**state)
+
         yield self.federation.send_edu(
             destination=destination,
             edu_type="m.presence",
             content={
                 "push": [
-                    dict(user_id=user.to_string(), **state),
+                    user_state,
                 ],
             }
         )
@@ -611,12 +617,7 @@ class PresenceHandler(BaseHandler):
             rm_handler = self.homeserver.get_handlers().room_member_handler
             room_ids = yield rm_handler.get_rooms_for_user(user)

-            for room_id in room_ids:
-                yield rm_handler.fetch_room_distributions_into(
-                    room_id, localusers=observers, ignore_user=user
-                )
-
-            if not observers:
+            if not observers and not room_ids:
                 break

             state = dict(push)
@@ -632,12 +633,12 @@ class PresenceHandler(BaseHandler):
             self._user_cachemap_latest_serial += 1
             statuscache.update(state, serial=self._user_cachemap_latest_serial)

-            for observer_user in observers:
-                self.push_update_to_clients(
-                    observer_user=observer_user,
-                    observed_user=user,
-                    statuscache=statuscache,
-                )
+            self.push_update_to_clients(
+                observed_user=user,
+                users_to_push=observers,
+                room_ids=room_ids,
+                statuscache=statuscache,
+            )

             if state["state"] == PresenceState.OFFLINE:
                 del self._user_cachemap[user]
@ -671,12 +672,53 @@ class PresenceHandler(BaseHandler):
|
|||||||
|
|
||||||
yield defer.DeferredList(deferreds)
|
yield defer.DeferredList(deferreds)
|
||||||
|
|
||||||
def push_update_to_clients(self, observer_user, observed_user,
|
@defer.inlineCallbacks
|
||||||
statuscache):
|
def push_update_to_local_and_remote(self, observed_user,
|
||||||
statuscache.make_event(user=observed_user, clock=self.clock)
|
users_to_push=[], room_ids=[],
|
||||||
|
remote_domains=[],
|
||||||
|
statuscache=None):
|
||||||
|
|
||||||
|
localusers, remoteusers = partitionbool(
|
||||||
|
users_to_push,
|
||||||
|
lambda u: u.is_mine
|
||||||
|
)
|
||||||
|
|
||||||
|
localusers = set(localusers)
|
||||||
|
|
||||||
|
self.push_update_to_clients(
|
||||||
|
observed_user=observed_user,
|
||||||
|
users_to_push=localusers,
|
||||||
|
room_ids=room_ids,
|
||||||
|
statuscache=statuscache,
|
||||||
|
)
|
||||||
|
|
||||||
|
remote_domains = set(remote_domains)
|
||||||
|
remote_domains |= set([r.domain for r in remoteusers])
|
||||||
|
for room_id in room_ids:
|
||||||
|
remote_domains.update(
|
||||||
|
(yield self.store.get_joined_hosts_for_room(room_id))
|
||||||
|
)
|
||||||
|
|
||||||
|
remote_domains.discard(self.hs.hostname)
|
||||||
|
|
||||||
|
deferreds = []
|
||||||
|
for domain in remote_domains:
|
||||||
|
logger.debug(" | push to remote domain %s", domain)
|
||||||
|
deferreds.append(
|
||||||
|
self._push_presence_remote(
|
||||||
|
observed_user, domain, state=statuscache.get_state()
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
yield defer.DeferredList(deferreds)
|
||||||
|
|
||||||
|
defer.returnValue((localusers, remote_domains))
|
||||||
|
|
||||||
|
def push_update_to_clients(self, observed_user, users_to_push=[],
|
||||||
|
room_ids=[], statuscache=None):
|
||||||
self.notifier.on_new_user_event(
|
self.notifier.on_new_user_event(
|
||||||
[observer_user],
|
users_to_push,
|
||||||
|
room_ids,
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
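The new `push_update_to_local_and_remote` helper splits the recipients into local users (pushed through the notifier) and remote domains (sent an `m.presence` EDU). A minimal sketch of that partition step, assuming nothing beyond the diff: `partitionbool` is reimplemented here as an illustration of what synapse's helper appears to do, and the `User` class is invented for the example.

```python
def partitionbool(items, predicate):
    """Split items into (matching, non-matching) lists by predicate."""
    trues, falses = [], []
    for item in items:
        (trues if predicate(item) else falses).append(item)
    return trues, falses

class User:
    """Toy stand-in for synapse's UserID, invented for this sketch."""
    def __init__(self, localpart, domain, my_domain="test"):
        self.domain = domain
        self.is_mine = (domain == my_domain)
        self.user_id = "@%s:%s" % (localpart, domain)

users = [User("apple", "test"), User("potato", "remote")]
# Same split as in push_update_to_local_and_remote: locals get notifier
# pushes, remote users contribute their domains to the EDU fan-out set.
local, remote = partitionbool(users, lambda u: u.is_mine)
domains = set(r.domain for r in remote)
```
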
@@ -360,7 +360,8 @@ class RoomMemberHandler(BaseRoomHandler):
         )
 
         snapshot = yield self.store.snapshot_room(
-            room_id, joinee, RoomMemberEvent.TYPE, joinee
+            room_id, joinee.to_string(), RoomMemberEvent.TYPE,
+            joinee.to_string()
         )
 
         yield self._do_join(new_event, snapshot, room_host=host, do_auth=True)

@@ -17,11 +17,12 @@ from twisted.internet import defer
 
 from ._base import BaseHandler
 
+from synapse.api.errors import SynapseError, AuthError
 
 import logging
 
 from collections import namedtuple
 
 
 logger = logging.getLogger(__name__)
 

@@ -119,6 +119,7 @@ class Notifier(object):
         )
 
     @defer.inlineCallbacks
+    @log_function
    def on_new_user_event(self, users=[], rooms=[]):
        """ Used to inform listeners that something has happend
        presence/user event wise.

@@ -27,7 +27,7 @@ class LoginRestServlet(RestServlet):
    PASS_TYPE = "m.login.password"
 
    def on_GET(self, request):
-        return (200, {"type": LoginRestServlet.PASS_TYPE})
+        return (200, {"flows": [{"type": LoginRestServlet.PASS_TYPE}]})
 
    def on_OPTIONS(self, request):
        return (200, {})
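This change moves the login endpoint from advertising a single `{"type": ...}` to a list of flows, `{"flows": [{"type": ...}, ...]}`, which is why the command-line client in the same commit now reads `json_res["flows"][0]`. A sketch of how a client might consume the new shape; the helper name and sample responses are illustrative, not part of the commit:

```python
def supports_password_login(json_res):
    """Return True if the first advertised flow is single-stage password login."""
    flows = json_res.get("flows", [])
    if not flows:
        return False
    flow = flows[0]  # assume the first flow is the one we want
    return flow.get("type") == "m.login.password" and "stages" not in flow

# New-style response: password login supported.
supports_password_login({"flows": [{"type": "m.login.password"}]})
# Old-style response has no "flows" key, so a client written against the
# new shape falls back (here: returns False).
supports_password_login({"type": "m.login.password"})
```
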
@@ -126,11 +126,6 @@ class BaseHomeServer(object):
         object."""
         return UserID.from_string(s, hs=self)
 
-    def parse_roomid(self, s):
-        """Parse the string given by 's' as a Room ID and return a RoomID
-        object."""
-        return RoomID.from_string(s, hs=self)
-
     def parse_roomalias(self, s):
         """Parse the string given by 's' as a Room Alias and return a RoomAlias
         object."""

@@ -205,8 +205,11 @@ class StreamStore(SQLBaseStore):
                                 with_feedback=False):
        # TODO (erikj): Handle compressed feedback
 
-        from_comp = '<' if direction =='b' else '>'
-        to_comp = '>' if direction =='b' else '<'
+        # Tokens really represent positions between elements, but we use
+        # the convention of pointing to the event before the gap. Hence
+        # we have a bit of asymmetry when it comes to equalities.
+        from_comp = '<=' if direction =='b' else '>'
+        to_comp = '>' if direction =='b' else '<='
         order = "DESC" if direction == 'b' else "ASC"
 
         args = [room_id]
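The comment added above explains the comparison asymmetry: a stream token points at the event *before* the gap, so paginating backwards from token n must include event n itself (`<=`), while the other bound stays strict (`>`). A toy illustration of that convention, with an in-memory event list standing in for the SQL query:

```python
events = [1, 2, 3, 4, 5]  # stream orderings, as if from the events table

def paginate(from_key, to_key, direction):
    """Mimic the from_comp/to_comp logic: 'b' = backwards, else forwards."""
    if direction == 'b':
        # from_comp '<=', to_comp '>': include the event the token points at.
        rows = [e for e in events if e <= from_key and e > to_key]
        return sorted(rows, reverse=True)  # order DESC
    else:
        # from_comp '>', to_comp '<=': skip it when walking forwards.
        rows = [e for e in events if e > from_key and e <= to_key]
        return sorted(rows)  # order ASC

# Backwards from token s3 yields event 3 first; forwards from s3 starts at 4,
# so the two directions never return the same event twice for one token.
backwards = paginate(3, 1, 'b')  # [3, 2]
forwards = paginate(3, 5, 'f')   # [4, 5]
```

This convention is also why `get_room_events_max_id` now returns `"s0"` rather than `"s1"` for an empty stream: the token sits before the first event.
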
@@ -294,7 +297,7 @@ class StreamStore(SQLBaseStore):
         logger.debug("get_room_events_max_id: %s", res)
 
         if not res or not res[0] or not res[0]["m"]:
-            return "s1"
+            return "s0"
 
         key = res[0]["m"]
         return "s%d" % (key,)

@@ -81,4 +81,4 @@ class PaginationConfig(object):
         return (
             "<PaginationConfig from_tok=%s, to_tok=%s, "
             "direction=%s, limit=%s>"
-        ) % (self.from_tok, self.to_tok, self.direction, self.limit)
+        ) % (self.from_token, self.to_token, self.direction, self.limit)

@@ -15,8 +15,11 @@
 
 
 from inspect import getcallargs
+from functools import wraps
 
 import logging
+import inspect
+import traceback
 
 
 def log_function(f):

@@ -26,6 +29,7 @@ def log_function(f):
     lineno = f.func_code.co_firstlineno
     pathname = f.func_code.co_filename
 
+    @wraps(f)
     def wrapped(*args, **kwargs):
         name = f.__module__
         logger = logging.getLogger(name)

@@ -63,4 +67,55 @@ def log_function(f):
 
         return f(*args, **kwargs)
 
+    wrapped.__name__ = func_name
+    return wrapped
+
+
+def trace_function(f):
+    func_name = f.__name__
+    linenum = f.func_code.co_firstlineno
+    pathname = f.func_code.co_filename
+
+    def wrapped(*args, **kwargs):
+        name = f.__module__
+        logger = logging.getLogger(name)
+        level = logging.DEBUG
+
+        s = inspect.currentframe().f_back
+
+        to_print = [
+            "\t%s:%s %s. Args: args=%s, kwargs=%s" % (
+                pathname, linenum, func_name, args, kwargs
+            )
+        ]
+        while s:
+            if True or s.f_globals["__name__"].startswith("synapse"):
+                filename, lineno, function, _, _ = inspect.getframeinfo(s)
+                args_string = inspect.formatargvalues(*inspect.getargvalues(s))
+
+                to_print.append(
+                    "\t%s:%d %s. Args: %s" % (
+                        filename, lineno, function, args_string
+                    )
+                )
+
+            s = s.f_back
+
+        msg = "\nTraceback for %s:\n" % (func_name,) + "\n".join(to_print)
+
+        record = logging.LogRecord(
+            name=name,
+            level=level,
+            pathname=pathname,
+            lineno=lineno,
+            msg=msg,
+            args=None,
+            exc_info=None
+        )
+
+        logger.handle(record)
+
+        return f(*args, **kwargs)
+
+    wrapped.__name__ = func_name
     return wrapped

@@ -28,6 +28,8 @@ from mock import NonCallableMock, ANY
 
 import logging
 
+from ..utils import get_mock_call_args
+
 logging.getLogger().addHandler(logging.NullHandler())
 
 

@@ -99,9 +101,13 @@ class FederationTestCase(unittest.TestCase):
 
         mem_handler = self.handlers.room_member_handler
         self.assertEquals(1, mem_handler.change_membership.call_count)
-        self.assertEquals(True, mem_handler.change_membership.call_args[0][1])
+        call_args = get_mock_call_args(
+            lambda event, do_auth: None,
+            mem_handler.change_membership
+        )
+        self.assertEquals(True, call_args["do_auth"])
 
-        new_event = mem_handler.change_membership.call_args[0][0]
+        new_event = call_args["event"]
         self.assertEquals(RoomMemberEvent.TYPE, new_event.type)
         self.assertEquals(room_id, new_event.room_id)
         self.assertEquals(user_id, new_event.state_key)
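The test above swaps brittle positional indexing (`call_args[0][1]`) for a `get_mock_call_args` helper that binds the mock's recorded call to a reference signature, so assertions work whether the caller passed `do_auth` positionally or by keyword. The helper lives in the test utils in this commit; this reimplementation is an assumption about its behaviour, built on `inspect.getcallargs` (already imported in the logging utilities above):

```python
from inspect import getcallargs
from unittest.mock import Mock  # the diff's Python 2 era used the 'mock' package

def get_mock_call_args(pattern_func, mock_func):
    """Return the last mock call's arguments, keyed by parameter name."""
    args, kwargs = mock_func.call_args
    return getcallargs(pattern_func, *args, **kwargs)

change_membership = Mock()
change_membership("some-event", do_auth=True)

# The lambda only supplies the parameter names; it is never called.
call_args = get_mock_call_args(lambda event, do_auth: None, change_membership)
```
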
@@ -15,7 +15,7 @@
 
 
 from twisted.trial import unittest
-from twisted.internet import defer
+from twisted.internet import defer, reactor
 
 from mock import Mock, call, ANY
 import logging

@@ -192,7 +192,8 @@ class PresenceStateTestCase(unittest.TestCase):
             ),
             SynapseError
         )
-    test_get_disallowed_state.skip = "Presence polling is disabled"
+    test_get_disallowed_state.skip = "Presence permissions are disabled"
 
     @defer.inlineCallbacks
     def test_set_my_state(self):

@@ -217,7 +218,6 @@ class PresenceStateTestCase(unittest.TestCase):
                 state={"state": OFFLINE})
 
         self.mock_stop.assert_called_with(self.u_apple)
-    test_set_my_state.skip = "Presence polling is disabled"
 
 
 class PresenceInvitesTestCase(unittest.TestCase):

@@ -499,6 +499,7 @@ class PresencePushTestCase(unittest.TestCase):
             db_pool=None,
             datastore=Mock(spec=[
                 "set_presence_state",
+                "get_joined_hosts_for_room",
 
                 # Bits that Federation needs
                 "prep_send_transaction",

@@ -513,8 +514,12 @@ class PresencePushTestCase(unittest.TestCase):
         )
         hs.handlers = JustPresenceHandlers(hs)
 
+        def update(*args,**kwargs):
+            # print "mock_update_client: Args=%s, kwargs=%s" %(args, kwargs,)
+            return defer.succeed(None)
+
         self.mock_update_client = Mock()
-        self.mock_update_client.return_value = defer.succeed(None)
+        self.mock_update_client.side_effect = update
 
         self.datastore = hs.get_datastore()
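Several test fixtures in this commit switch mocks from `return_value = defer.succeed(...)` to a `side_effect` function. The distinction matters with deferreds: a single `return_value` object is shared by every call, and an already-fired deferred cannot deliver its result to a second consumer, whereas `side_effect` hands each call a fresh one. A sketch with a stand-in class (invented here) instead of real Twisted deferreds:

```python
from unittest.mock import Mock  # stdlib port of the 'mock' package used in the diff

class FakeDeferred:
    """Toy stand-in for twisted.internet.defer.Deferred."""
    def __init__(self, value):
        self.value = value

# return_value: the same FakeDeferred instance is handed out every call.
shared = Mock(return_value=FakeDeferred(None))
same_object = shared() is shared()  # True

# side_effect: the factory runs per call, so each caller gets its own.
fresh = Mock(side_effect=lambda *a, **kw: FakeDeferred(None))
new_object_each_time = fresh() is not fresh()  # True
```
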
@ -548,6 +553,14 @@ class PresencePushTestCase(unittest.TestCase):
|
|||||||
return defer.succeed([])
|
return defer.succeed([])
|
||||||
self.room_member_handler.get_room_members = get_room_members
|
self.room_member_handler.get_room_members = get_room_members
|
||||||
|
|
||||||
|
def get_room_hosts(room_id):
|
||||||
|
if room_id == "a-room":
|
||||||
|
hosts = set([u.domain for u in self.room_members])
|
||||||
|
return defer.succeed(hosts)
|
||||||
|
else:
|
||||||
|
return defer.succeed([])
|
||||||
|
self.datastore.get_joined_hosts_for_room = get_room_hosts
|
||||||
|
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
def fetch_room_distributions_into(room_id, localusers=None,
|
def fetch_room_distributions_into(room_id, localusers=None,
|
||||||
remotedomains=None, ignore_user=None):
|
remotedomains=None, ignore_user=None):
|
||||||
@ -613,18 +626,10 @@ class PresencePushTestCase(unittest.TestCase):
|
|||||||
{"state": ONLINE})
|
{"state": ONLINE})
|
||||||
|
|
||||||
self.mock_update_client.assert_has_calls([
|
self.mock_update_client.assert_has_calls([
|
||||||
call(observer_user=self.u_apple,
|
call(users_to_push=set([self.u_apple, self.u_banana, self.u_clementine]),
|
||||||
|
room_ids=["a-room"],
|
||||||
observed_user=self.u_apple,
|
observed_user=self.u_apple,
|
||||||
statuscache=ANY), # self-reflection
|
statuscache=ANY), # self-reflection
|
||||||
call(observer_user=self.u_banana,
|
|
||||||
observed_user=self.u_apple,
|
|
||||||
statuscache=ANY),
|
|
||||||
call(observer_user=self.u_clementine,
|
|
||||||
observed_user=self.u_apple,
|
|
||||||
statuscache=ANY),
|
|
||||||
call(observer_user=self.u_elderberry,
|
|
||||||
observed_user=self.u_apple,
|
|
||||||
statuscache=ANY),
|
|
||||||
], any_order=True)
|
], any_order=True)
|
||||||
self.mock_update_client.reset_mock()
|
self.mock_update_client.reset_mock()
|
||||||
|
|
||||||
@ -653,30 +658,30 @@ class PresencePushTestCase(unittest.TestCase):
|
|||||||
], presence)
|
], presence)
|
||||||
|
|
||||||
self.mock_update_client.assert_has_calls([
|
self.mock_update_client.assert_has_calls([
|
||||||
call(observer_user=self.u_banana,
|
call(users_to_push=set([self.u_banana]),
|
||||||
|
room_ids=[],
|
||||||
observed_user=self.u_banana,
|
observed_user=self.u_banana,
|
||||||
statuscache=ANY), # self-reflection
|
statuscache=ANY), # self-reflection
|
||||||
]) # and no others...
|
]) # and no others...
|
||||||
test_push_local.skip = "Presence polling is disabled"
|
|
||||||
|
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
def test_push_remote(self):
|
def test_push_remote(self):
|
||||||
put_json = self.mock_http_client.put_json
|
put_json = self.mock_http_client.put_json
|
||||||
put_json.expect_call_and_return(
|
# put_json.expect_call_and_return(
|
||||||
call("remote",
|
# call("remote",
|
||||||
path=ANY, # Can't guarantee which txn ID will be which
|
# path=ANY, # Can't guarantee which txn ID will be which
|
||||||
data=_expect_edu("remote", "m.presence",
|
# data=_expect_edu("remote", "m.presence",
|
||||||
content={
|
# content={
|
||||||
"push": [
|
# "push": [
|
||||||
{"user_id": "@apple:test",
|
# {"user_id": "@apple:test",
|
||||||
"state": "online",
|
# "state": "online",
|
||||||
"mtime_age": 0},
|
# "mtime_age": 0},
|
||||||
],
|
# ],
|
||||||
}
|
# }
|
||||||
)
|
# )
|
||||||
),
|
# ),
|
||||||
defer.succeed((200, "OK"))
|
# defer.succeed((200, "OK"))
|
||||||
)
|
# )
|
||||||
put_json.expect_call_and_return(
|
put_json.expect_call_and_return(
|
||||||
call("farm",
|
call("farm",
|
||||||
path=ANY, # Can't guarantee which txn ID will be which
|
path=ANY, # Can't guarantee which txn ID will be which
|
||||||
@ -684,7 +689,7 @@ class PresencePushTestCase(unittest.TestCase):
|
|||||||
content={
|
content={
|
||||||
"push": [
|
"push": [
|
||||||
{"user_id": "@apple:test",
|
{"user_id": "@apple:test",
|
||||||
"state": "online",
|
"state": u"online",
|
||||||
"mtime_age": 0},
|
"mtime_age": 0},
|
||||||
],
|
],
|
||||||
}
|
}
|
||||||
@ -709,7 +714,6 @@ class PresencePushTestCase(unittest.TestCase):
|
|||||||
)
|
)
|
||||||
|
|
||||||
yield put_json.await_calls()
|
yield put_json.await_calls()
|
||||||
test_push_remote.skip = "Presence polling is disabled"
|
|
||||||
|
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
def test_recv_remote(self):
|
def test_recv_remote(self):
|
||||||
@ -734,10 +738,8 @@ class PresencePushTestCase(unittest.TestCase):
|
|||||||
)
|
)
|
||||||
|
|
||||||
self.mock_update_client.assert_has_calls([
|
self.mock_update_client.assert_has_calls([
|
||||||
call(observer_user=self.u_apple,
|
call(users_to_push=set([self.u_apple]),
|
||||||
observed_user=self.u_potato,
|
room_ids=["a-room"],
|
||||||
statuscache=ANY),
|
|
||||||
call(observer_user=self.u_banana,
|
|
||||||
observed_user=self.u_potato,
|
observed_user=self.u_potato,
|
||||||
statuscache=ANY),
|
statuscache=ANY),
|
||||||
], any_order=True)
|
], any_order=True)
|
||||||
@ -757,19 +759,17 @@ class PresencePushTestCase(unittest.TestCase):
|
|||||||
)
|
)
|
||||||
|
|
||||||
self.mock_update_client.assert_has_calls([
|
self.mock_update_client.assert_has_calls([
|
||||||
# Apple and Elderberry see each other
|
call(room_ids=["a-room"],
|
||||||
call(observer_user=self.u_apple,
|
|
||||||
observed_user=self.u_elderberry,
|
observed_user=self.u_elderberry,
|
||||||
|
users_to_push=set(),
|
||||||
statuscache=ANY),
|
statuscache=ANY),
|
||||||
call(observer_user=self.u_elderberry,
|
call(users_to_push=set([self.u_elderberry]),
|
||||||
observed_user=self.u_apple,
|
observed_user=self.u_apple,
|
||||||
|
room_ids=[],
|
||||||
statuscache=ANY),
|
statuscache=ANY),
|
||||||
# Banana and Elderberry see each other
|
call(users_to_push=set([self.u_elderberry]),
|
||||||
call(observer_user=self.u_banana,
|
|
||||||
observed_user=self.u_elderberry,
|
|
||||||
statuscache=ANY),
|
|
||||||
call(observer_user=self.u_elderberry,
|
|
||||||
observed_user=self.u_banana,
|
observed_user=self.u_banana,
|
||||||
|
room_ids=[],
|
||||||
statuscache=ANY),
|
statuscache=ANY),
|
||||||
], any_order=True)
|
], any_order=True)
|
||||||
|
|
||||||
@ -857,6 +857,7 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
'apple': [ "@banana:test", "@clementine:test" ],
|
'apple': [ "@banana:test", "@clementine:test" ],
|
||||||
'banana': [ "@apple:test" ],
|
'banana': [ "@apple:test" ],
|
||||||
'clementine': [ "@apple:test", "@potato:remote" ],
|
'clementine': [ "@apple:test", "@potato:remote" ],
|
||||||
|
'fig': [ "@potato:remote" ],
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
@ -890,7 +891,12 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
self.datastore.get_received_txn_response = get_received_txn_response
|
self.datastore.get_received_txn_response = get_received_txn_response
|
||||||
|
|
||||||
self.mock_update_client = Mock()
|
self.mock_update_client = Mock()
|
||||||
self.mock_update_client.return_value = defer.succeed(None)
|
|
||||||
|
def update(*args,**kwargs):
|
||||||
|
# print "mock_update_client: Args=%s, kwargs=%s" %(args, kwargs,)
|
||||||
|
return defer.succeed(None)
|
||||||
|
|
||||||
|
self.mock_update_client.side_effect = update
|
||||||
|
|
||||||
self.handler = hs.get_handlers().presence_handler
|
self.handler = hs.get_handlers().presence_handler
|
||||||
self.handler.push_update_to_clients = self.mock_update_client
|
self.handler.push_update_to_clients = self.mock_update_client
|
||||||
@ -906,9 +912,10 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
# Mocked database state
|
# Mocked database state
|
||||||
# Local users always start offline
|
# Local users always start offline
|
||||||
self.current_user_state = {
|
self.current_user_state = {
|
||||||
"apple": OFFLINE,
|
"apple": OFFLINE,
|
||||||
"banana": OFFLINE,
|
"banana": OFFLINE,
|
||||||
"clementine": OFFLINE,
|
"clementine": OFFLINE,
|
||||||
|
"fig": OFFLINE,
|
||||||
}
|
}
|
||||||
|
|
||||||
def get_presence_state(user_localpart):
|
def get_presence_state(user_localpart):
|
||||||
@ -938,6 +945,7 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
self.u_apple = hs.parse_userid("@apple:test")
|
self.u_apple = hs.parse_userid("@apple:test")
|
||||||
self.u_banana = hs.parse_userid("@banana:test")
|
self.u_banana = hs.parse_userid("@banana:test")
|
||||||
self.u_clementine = hs.parse_userid("@clementine:test")
|
self.u_clementine = hs.parse_userid("@clementine:test")
|
||||||
|
self.u_fig = hs.parse_userid("@fig:test")
|
||||||
|
|
||||||
# Remote users
|
# Remote users
|
||||||
self.u_potato = hs.parse_userid("@potato:remote")
|
self.u_potato = hs.parse_userid("@potato:remote")
|
||||||
@ -952,10 +960,10 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
|
|
||||||
# apple should see both banana and clementine currently offline
|
# apple should see both banana and clementine currently offline
|
||||||
self.mock_update_client.assert_has_calls([
|
self.mock_update_client.assert_has_calls([
|
||||||
call(observer_user=self.u_apple,
|
call(users_to_push=[self.u_apple],
|
||||||
observed_user=self.u_banana,
|
observed_user=self.u_banana,
|
||||||
statuscache=ANY),
|
statuscache=ANY),
|
||||||
call(observer_user=self.u_apple,
|
call(users_to_push=[self.u_apple],
|
||||||
observed_user=self.u_clementine,
|
observed_user=self.u_clementine,
|
||||||
statuscache=ANY),
|
statuscache=ANY),
|
||||||
], any_order=True)
|
], any_order=True)
|
||||||
@ -975,10 +983,11 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
|
|
||||||
# apple and banana should now both see each other online
|
# apple and banana should now both see each other online
|
||||||
self.mock_update_client.assert_has_calls([
|
self.mock_update_client.assert_has_calls([
|
||||||
call(observer_user=self.u_apple,
|
call(users_to_push=set([self.u_apple]),
|
||||||
observed_user=self.u_banana,
|
observed_user=self.u_banana,
|
||||||
|
room_ids=[],
|
||||||
statuscache=ANY),
|
statuscache=ANY),
|
||||||
call(observer_user=self.u_banana,
|
call(users_to_push=[self.u_banana],
|
||||||
observed_user=self.u_apple,
|
observed_user=self.u_apple,
|
||||||
statuscache=ANY),
|
statuscache=ANY),
|
||||||
], any_order=True)
|
], any_order=True)
|
||||||
@ -995,14 +1004,14 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
|
|
||||||
# banana should now be told apple is offline
|
# banana should now be told apple is offline
|
||||||
self.mock_update_client.assert_has_calls([
|
self.mock_update_client.assert_has_calls([
|
||||||
call(observer_user=self.u_banana,
|
call(users_to_push=set([self.u_banana, self.u_apple]),
|
||||||
observed_user=self.u_apple,
|
observed_user=self.u_apple,
|
||||||
|
room_ids=[],
|
||||||
statuscache=ANY),
|
statuscache=ANY),
|
||||||
], any_order=True)
|
], any_order=True)
|
||||||
|
|
||||||
self.assertFalse("banana" in self.handler._local_pushmap)
|
self.assertFalse("banana" in self.handler._local_pushmap)
|
||||||
self.assertFalse("clementine" in self.handler._local_pushmap)
|
self.assertFalse("clementine" in self.handler._local_pushmap)
|
||||||
test_push_local.skip = "Presence polling is disabled"
|
|
||||||
|
|
||||||
|
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
@ -1010,7 +1019,7 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
put_json = self.mock_http_client.put_json
|
put_json = self.mock_http_client.put_json
|
||||||
put_json.expect_call_and_return(
|
put_json.expect_call_and_return(
|
||||||
call("remote",
|
call("remote",
|
||||||
path="/matrix/federation/v1/send/1000000/",
|
path=ANY,
|
||||||
data=_expect_edu("remote", "m.presence",
|
data=_expect_edu("remote", "m.presence",
|
||||||
content={
|
content={
|
||||||
"poll": [ "@potato:remote" ],
|
"poll": [ "@potato:remote" ],
|
||||||
@ -1020,6 +1029,18 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
defer.succeed((200, "OK"))
|
defer.succeed((200, "OK"))
|
||||||
)
|
)
|
||||||
|
|
||||||
|
put_json.expect_call_and_return(
|
||||||
|
call("remote",
|
||||||
|
path=ANY,
|
||||||
|
data=_expect_edu("remote", "m.presence",
|
||||||
|
content={
|
||||||
|
"push": [ {"user_id": "@clementine:test" }],
|
||||||
|
},
|
||||||
|
),
|
||||||
|
),
|
||||||
|
defer.succeed((200, "OK"))
|
||||||
|
)
|
||||||
|
|
||||||
# clementine goes online
|
# clementine goes online
|
||||||
yield self.handler.set_state(
|
yield self.handler.set_state(
|
||||||
target_user=self.u_clementine, auth_user=self.u_clementine,
|
target_user=self.u_clementine, auth_user=self.u_clementine,
|
||||||
@ -1028,13 +1049,48 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
yield put_json.await_calls()
|
yield put_json.await_calls()
|
||||||
|
|
||||||
# Gut-wrenching tests
|
# Gut-wrenching tests
|
||||||
self.assertTrue(self.u_potato in self.handler._remote_recvmap)
|
self.assertTrue(self.u_potato in self.handler._remote_recvmap,
|
||||||
|
msg="expected potato to be in _remote_recvmap"
|
||||||
|
)
|
||||||
self.assertTrue(self.u_clementine in
|
self.assertTrue(self.u_clementine in
|
||||||
self.handler._remote_recvmap[self.u_potato])
|
self.handler._remote_recvmap[self.u_potato])
|
||||||
|
|
||||||
|
|
||||||
put_json.expect_call_and_return(
|
put_json.expect_call_and_return(
|
||||||
call("remote",
|
call("remote",
|
||||||
path="/matrix/federation/v1/send/1000001/",
|
path=ANY,
|
||||||
|
data=_expect_edu("remote", "m.presence",
|
||||||
|
content={
|
||||||
|
"push": [ {"user_id": "@fig:test" }],
|
||||||
|
},
|
||||||
|
),
|
||||||
|
),
|
||||||
|
defer.succeed((200, "OK"))
|
||||||
|
)
|
||||||
|
|
||||||
|
# fig goes online; shouldn't send a second poll
|
||||||
|
yield self.handler.set_state(
|
||||||
|
target_user=self.u_fig, auth_user=self.u_fig,
|
||||||
|
state={"state": ONLINE}
|
||||||
|
)
|
||||||
|
|
||||||
|
# reactor.iterate(delay=0)
|
||||||
|
|
||||||
|
yield put_json.await_calls()
|
||||||
|
|
||||||
|
# fig goes offline
|
||||||
|
yield self.handler.set_state(
|
||||||
|
target_user=self.u_fig, auth_user=self.u_fig,
|
||||||
|
state={"state": OFFLINE}
|
||||||
|
)
|
||||||
|
|
||||||
|
reactor.iterate(delay=0)
|
||||||
|
|
||||||
|
put_json.assert_had_no_calls()
|
||||||
|
|
||||||
|
put_json.expect_call_and_return(
|
||||||
|
call("remote",
|
||||||
|
path=ANY,
|
||||||
data=_expect_edu("remote", "m.presence",
|
data=_expect_edu("remote", "m.presence",
|
||||||
content={
|
content={
|
||||||
"unpoll": [ "@potato:remote" ],
|
"unpoll": [ "@potato:remote" ],
|
||||||
@ -1049,10 +1105,11 @@ class PresencePollingTestCase(unittest.TestCase):
|
|||||||
target_user=self.u_clementine, auth_user=self.u_clementine,
|
target_user=self.u_clementine, auth_user=self.u_clementine,
|
||||||
state={"state": OFFLINE})
|
state={"state": OFFLINE})
|
||||||
|
|
||||||
put_json.await_calls()
|
yield put_json.await_calls()
|
||||||
|
|
||||||
self.assertFalse(self.u_potato in self.handler._remote_recvmap)
|
self.assertFalse(self.u_potato in self.handler._remote_recvmap,
|
||||||
test_remote_poll_send.skip = "Presence polling is disabled"
|
msg="expected potato not to be in _remote_recvmap"
|
||||||
|
)
|
||||||
|
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
def test_remote_poll_receive(self):
|
def test_remote_poll_receive(self):
|
||||||
|
@ -81,7 +81,11 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
|
|||||||
|
|
||||||
self.replication = hs.get_replication_layer()
|
self.replication = hs.get_replication_layer()
|
||||||
self.replication.send_edu = Mock()
|
self.replication.send_edu = Mock()
|
||||||
self.replication.send_edu.return_value = defer.succeed((200, "OK"))
|
|
||||||
|
def send_edu(*args, **kwargs):
|
||||||
|
# print "send_edu: %s, %s" % (args, kwargs)
|
||||||
|
return defer.succeed((200, "OK"))
|
||||||
|
self.replication.send_edu.side_effect = send_edu
|
||||||
|
|
||||||
def get_profile_displayname(user_localpart):
|
def get_profile_displayname(user_localpart):
|
||||||
return defer.succeed("Frank")
|
return defer.succeed("Frank")
|
||||||
@ -95,11 +99,12 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
|
|||||||
return defer.succeed("http://foo")
|
return defer.succeed("http://foo")
|
||||||
self.datastore.get_profile_avatar_url = get_profile_avatar_url
|
self.datastore.get_profile_avatar_url = get_profile_avatar_url
|
||||||
|
|
||||||
|
self.presence_list = [
|
||||||
|
{"observed_user_id": "@banana:test"},
|
||||||
|
{"observed_user_id": "@clementine:test"},
|
||||||
|
]
|
||||||
def get_presence_list(user_localpart, accepted=None):
|
def get_presence_list(user_localpart, accepted=None):
|
||||||
return defer.succeed([
|
return defer.succeed(self.presence_list)
|
||||||
{"observed_user_id": "@banana:test"},
|
|
||||||
{"observed_user_id": "@clementine:test"},
|
|
||||||
])
|
|
||||||
self.datastore.get_presence_list = get_presence_list
|
self.datastore.get_presence_list = get_presence_list
|
||||||
|
|
||||||
def do_users_share_a_room(userlist):
|
def do_users_share_a_room(userlist):
|
||||||
@ -109,7 +114,10 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
|
|||||||
self.handlers = hs.get_handlers()
|
self.handlers = hs.get_handlers()
|
||||||
|
|
||||||
self.mock_update_client = Mock()
|
self.mock_update_client = Mock()
|
||||||
self.mock_update_client.return_value = defer.succeed(None)
|
def update(*args, **kwargs):
|
||||||
|
# print "mock_update_client: %s, %s" %(args, kwargs)
|
||||||
|
return defer.succeed(None)
|
||||||
|
self.mock_update_client.side_effect = update
|
||||||
|
|
||||||
self.handlers.presence_handler.push_update_to_clients = (
|
self.handlers.presence_handler.push_update_to_clients = (
|
||||||
self.mock_update_client)
|
self.mock_update_client)
|
||||||
@ -130,6 +138,11 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
|
|||||||
|
|
||||||
@defer.inlineCallbacks
|
@defer.inlineCallbacks
|
||||||
def test_set_my_state(self):
|
def test_set_my_state(self):
|
||||||
|
self.presence_list = [
|
||||||
|
{"observed_user_id": "@banana:test"},
|
||||||
|
{"observed_user_id": "@clementine:test"},
|
||||||
|
]
|
||||||
|
|
||||||
mocked_set = self.datastore.set_presence_state
|
mocked_set = self.datastore.set_presence_state
|
||||||
mocked_set.return_value = defer.succeed({"state": OFFLINE})
|
mocked_set.return_value = defer.succeed({"state": OFFLINE})
|
||||||
|
|
||||||
@@ -139,10 +152,14 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
 
         mocked_set.assert_called_with("apple",
                 {"state": UNAVAILABLE, "status_msg": "Away"})
-    test_set_my_state.skip = "Presence polling is disabled"
 
     @defer.inlineCallbacks
     def test_push_local(self):
+        self.presence_list = [
+            {"observed_user_id": "@banana:test"},
+            {"observed_user_id": "@clementine:test"},
+        ]
+
         self.datastore.set_presence_state.return_value = defer.succeed(
             {"state": ONLINE})
 
@@ -174,12 +191,10 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
                 presence)
 
         self.mock_update_client.assert_has_calls([
-            call(observer_user=self.u_apple,
+            call(users_to_push=set([self.u_apple, self.u_banana, self.u_clementine]),
+                room_ids=[],
                 observed_user=self.u_apple,
                 statuscache=ANY), # self-reflection
-            call(observer_user=self.u_banana,
-                observed_user=self.u_apple,
-                statuscache=ANY),
         ], any_order=True)
 
         statuscache = self.mock_update_client.call_args[1]["statuscache"]
@@ -199,12 +214,10 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
             self.u_apple, "I am an Apple")
 
         self.mock_update_client.assert_has_calls([
-            call(observer_user=self.u_apple,
+            call(users_to_push=set([self.u_apple, self.u_banana, self.u_clementine]),
+                room_ids=[],
                 observed_user=self.u_apple,
                 statuscache=ANY), # self-reflection
-            call(observer_user=self.u_banana,
-                observed_user=self.u_apple,
-                statuscache=ANY),
         ], any_order=True)
 
         statuscache = self.mock_update_client.call_args[1]["statuscache"]
@@ -214,11 +227,14 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
             "displayname": "I am an Apple",
             "avatar_url": "http://foo",
         }, statuscache.state)
-    test_push_local.skip = "Presence polling is disabled"
 
 
     @defer.inlineCallbacks
     def test_push_remote(self):
+        self.presence_list = [
+            {"observed_user_id": "@potato:remote"},
+        ]
+
         self.datastore.set_presence_state.return_value = defer.succeed(
             {"state": ONLINE})
 
@@ -246,10 +262,14 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
             ],
         },
     )
-    test_push_remote.skip = "Presence polling is disabled"
 
     @defer.inlineCallbacks
     def test_recv_remote(self):
+        self.presence_list = [
+            {"observed_user_id": "@banana:test"},
+            {"observed_user_id": "@clementine:test"},
+        ]
+
         # TODO(paul): Gut-wrenching
         potato_set = self.handlers.presence_handler._remote_recvmap.setdefault(
             self.u_potato, set())
@@ -267,7 +287,8 @@ class PresenceProfilelikeDataTestCase(unittest.TestCase):
         )
 
         self.mock_update_client.assert_called_with(
-            observer_user=self.u_apple,
+            users_to_push=set([self.u_apple]),
+            room_ids=[],
             observed_user=self.u_potato,
             statuscache=ANY)
 
|
@@ -114,7 +114,6 @@ class PresenceStateTestCase(unittest.TestCase):
         self.assertEquals(200, code)
         mocked_set.assert_called_with("apple",
                 {"state": UNAVAILABLE, "status_msg": "Away"})
-    test_set_my_status.skip = "Presence polling is disabled"
 
 
 class PresenceListTestCase(unittest.TestCase):
@@ -318,4 +317,3 @@ class PresenceEventStreamTestCase(unittest.TestCase):
                 "mtime_age": 0,
             }},
         ]}, response)
-    test_shortpoll.skip = "Presence polling is disabled"
|
@@ -21,13 +21,23 @@ from synapse.api.events.room import (
     RoomMemberEvent, MessageEvent
 )
 
-from twisted.internet import defer
+from twisted.internet import defer, reactor
 
 from collections import namedtuple
 from mock import patch, Mock
 import json
 import urlparse
 
+from inspect import getcallargs
+
+
+def get_mock_call_args(pattern_func, mock_func):
+    """ Return the arguments the mock function was called with interpreted
+    by the pattern functions argument list.
+    """
+    invoked_args, invoked_kargs = mock_func.call_args
+    return getcallargs(pattern_func, *invoked_args, **invoked_kargs)
+
+
 # This is a mock /resource/ not an entire server
 class MockHttpResource(HttpServer):
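The new `get_mock_call_args` helper above uses `inspect.getcallargs` to normalise however a mock was invoked, positionally or by keyword, against a reference signature. A standalone demonstration (the `pattern` function and its argument names are hypothetical, not from the source):

```python
from inspect import getcallargs
from unittest import mock

def pattern(room_id, event_type, content=None):
    """Reference signature the mock call is interpreted against."""

stub = mock.Mock()
# Invoke with a mix of positional and keyword arguments.
stub("!abc:test", content={"body": "hi"}, event_type="m.room.message")

invoked_args, invoked_kwargs = stub.call_args
# getcallargs maps every argument onto pattern's parameter names.
call_args = getcallargs(pattern, *invoked_args, **invoked_kwargs)

print(call_args["room_id"])     # !abc:test
print(call_args["event_type"])  # m.room.message
```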
@@ -238,8 +248,11 @@ class DeferredMockCallable(object):
 
     def __init__(self):
         self.expectations = []
+        self.calls = []
 
     def __call__(self, *args, **kwargs):
+        self.calls.append((args, kwargs))
+
         if not self.expectations:
             raise ValueError("%r has no pending calls to handle call(%s)" % (
                 self, _format_call(args, kwargs))
@@ -250,15 +263,52 @@ class DeferredMockCallable(object):
             d.callback(None)
             return result
 
-        raise AssertionError("Was not expecting call(%s)" %
+        failure = AssertionError("Was not expecting call(%s)" %
             _format_call(args, kwargs)
         )
 
+        for _, _, d in self.expectations:
+            try:
+                d.errback(failure)
+            except:
+                pass
+
+        raise failure
+
     def expect_call_and_return(self, call, result):
         self.expectations.append((call, result, defer.Deferred()))
 
     @defer.inlineCallbacks
-    def await_calls(self):
-        while self.expectations:
-            (_, _, d) = self.expectations.pop(0)
-            yield d
+    def await_calls(self, timeout=1000):
+        deferred = defer.DeferredList(
+            [d for _, _, d in self.expectations],
+            fireOnOneErrback=True
+        )
+
+        timer = reactor.callLater(
+            timeout/1000,
+            deferred.errback,
+            AssertionError(
+                "%d pending calls left: %s"% (
+                    len([e for e in self.expectations if not e[2].called]),
+                    [e for e in self.expectations if not e[2].called]
+                )
+            )
+        )
+
+        yield deferred
+
+        timer.cancel()
+
+        self.calls = []
+
+    def assert_had_no_calls(self):
+        if self.calls:
+            calls = self.calls
+            self.calls = []
+
+            raise AssertionError("Expected not to received any calls, got:\n" +
+                "\n".join([
+                    "call(%s)" % _format_call(c[0], c[1]) for c in calls
+                ])
+            )
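The rewritten `await_calls` gathers every outstanding expectation deferred into one `DeferredList`, arms a `reactor.callLater` timer that errbacks with a diagnostic if they have not all fired within `timeout` ms, and cancels the timer on success. The expectation bookkeeping itself, minus the Twisted machinery, can be sketched synchronously (class and method names here are my own, not the test harness's):

```python
class ExpectationQueue:
    """Toy synchronous stand-in for DeferredMockCallable's tracking."""

    def __init__(self):
        self.expectations = []   # [call_args, result, fired?] triples
        self.calls = []          # every invocation, for assert_had_no_calls

    def expect(self, call_args, result):
        self.expectations.append([call_args, result, False])

    def __call__(self, *args):
        self.calls.append(args)
        for exp in self.expectations:
            if exp[0] == args and not exp[2]:
                exp[2] = True        # mark this expectation as satisfied
                return exp[1]
        raise AssertionError("Was not expecting call(%r)" % (args,))

    def await_calls(self):
        # Mirrors the timeout errback's diagnostic: list what never fired.
        pending = [e for e in self.expectations if not e[2]]
        if pending:
            raise AssertionError("%d pending calls left: %s"
                                 % (len(pending), pending))
        self.calls = []

q = ExpectationQueue()
q.expect(("get", "/events"), "ok")
print(q("get", "/events"))   # ok
q.await_calls()              # all expectations satisfied, calls reset
```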
|
@@ -24,6 +24,8 @@ var matrixWebClient = angular.module('matrixWebClient', [
     'SettingsController',
     'UserController',
     'matrixService',
+    'matrixPhoneService',
+    'MatrixCall',
     'eventStreamService',
     'eventHandlerService',
     'infinite-scroll'
|
@@ -27,13 +27,16 @@ Typically, this service will store events or broadcast them to any listeners
 if typically all the $on method would do is update its own $scope.
 */
 angular.module('eventHandlerService', [])
-.factory('eventHandlerService', ['matrixService', '$rootScope', function(matrixService, $rootScope) {
+.factory('eventHandlerService', ['matrixService', '$rootScope', '$q', function(matrixService, $rootScope, $q) {
     var MSG_EVENT = "MSG_EVENT";
     var MEMBER_EVENT = "MEMBER_EVENT";
     var PRESENCE_EVENT = "PRESENCE_EVENT";
+    var CALL_EVENT = "CALL_EVENT";
+
+    var InitialSyncDeferred = $q.defer();
 
     $rootScope.events = {
-        rooms: {}, // will contain roomId: { messages:[], members:{userid1: event} }
+        rooms: {} // will contain roomId: { messages:[], members:{userid1: event} }
     };
 
     $rootScope.presence = {};
@@ -47,11 +50,11 @@ angular.module('eventHandlerService', [])
         }
     }
 
-    var reInitRoom = function(room_id) {
-        $rootScope.events.rooms[room_id] = {};
+    var resetRoomMessages = function(room_id) {
+        if ($rootScope.events.rooms[room_id]) {
             $rootScope.events.rooms[room_id].messages = [];
-        $rootScope.events.rooms[room_id].members = {};
-    }
+        }
+    };
 
     var handleMessage = function(event, isLiveEvent) {
         initRoom(event.room_id);
@@ -92,12 +95,16 @@ angular.module('eventHandlerService', [])
         $rootScope.presence[event.content.user_id] = event;
         $rootScope.$broadcast(PRESENCE_EVENT, event, isLiveEvent);
     };
 
+    var handleCallEvent = function(event, isLiveEvent) {
+        $rootScope.$broadcast(CALL_EVENT, event, isLiveEvent);
+    };
+
     return {
         MSG_EVENT: MSG_EVENT,
         MEMBER_EVENT: MEMBER_EVENT,
         PRESENCE_EVENT: PRESENCE_EVENT,
+        CALL_EVENT: CALL_EVENT,
 
         handleEvent: function(event, isLiveEvent) {
@@ -115,6 +122,9 @@ angular.module('eventHandlerService', [])
                     console.log("Unable to handle event type " + event.type);
                     break;
             }
+            if (event.type.indexOf('m.call.') == 0) {
+                handleCallEvent(event, isLiveEvent);
+            }
         },
 
         // isLiveEvents determines whether notifications should be shown, whether
@@ -125,8 +135,18 @@ angular.module('eventHandlerService', [])
             }
         },
 
-        reInitRoom: function(room_id) {
-            reInitRoom(room_id);
+        handleInitialSyncDone: function() {
+            console.log("# handleInitialSyncDone");
+            InitialSyncDeferred.resolve($rootScope.events, $rootScope.presence);
+        },
+
+        // Returns a promise that resolves when the initialSync request has been processed
+        waitForInitialSyncCompletion: function() {
+            return InitialSyncDeferred.promise;
+        },
+
+        resetRoomMessages: function(room_id) {
+            resetRoomMessages(room_id);
         }
     };
 }]);
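The new `waitForInitialSyncCompletion` hands callers a promise that `handleInitialSyncDone` resolves exactly once, so work can be deferred until the first `/initialSync` response has been processed. A once-only gate in plain Python, using callbacks instead of Angular's `$q` (the class is a sketch of the pattern, not the service's API):

```python
class InitialSyncGate:
    """Resolve-once gate: late waiters run immediately."""

    def __init__(self):
        self._done = False
        self._waiters = []

    def wait(self, callback):
        """Run callback now if sync already finished, else queue it."""
        if self._done:
            callback()
        else:
            self._waiters.append(callback)

    def resolve(self):
        """Idempotent: only the first resolution notifies waiters."""
        if self._done:
            return
        self._done = True
        for cb in self._waiters:
            cb()
        self._waiters = []

gate = InitialSyncGate()
ran = []
gate.wait(lambda: ran.append("early"))   # queued before sync completes
gate.resolve()                           # initial sync done
gate.wait(lambda: ran.append("late"))    # runs immediately
print(ran)   # ['early', 'late']
```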
|
@@ -25,7 +25,8 @@ the eventHandlerService.
 angular.module('eventStreamService', [])
 .factory('eventStreamService', ['$q', '$timeout', 'matrixService', 'eventHandlerService', function($q, $timeout, matrixService, eventHandlerService) {
     var END = "END";
-    var TIMEOUT_MS = 30000;
+    var SERVER_TIMEOUT_MS = 30000;
+    var CLIENT_TIMEOUT_MS = 40000;
     var ERR_TIMEOUT_MS = 5000;
 
     var settings = {
@@ -55,7 +56,7 @@ angular.module('eventStreamService', [])
         deferred = deferred || $q.defer();
 
         // run the stream from the latest token
-        matrixService.getEventStream(settings.from, TIMEOUT_MS).then(
+        matrixService.getEventStream(settings.from, SERVER_TIMEOUT_MS, CLIENT_TIMEOUT_MS).then(
             function(response) {
                 if (!settings.isActive) {
                     console.log("[EventStream] Got response but now inactive. Dropping data.");
@@ -80,7 +81,7 @@ angular.module('eventStreamService', [])
                 }
             },
             function(error) {
-                if (error.status == 403) {
+                if (error.status === 403) {
                     settings.shouldPoll = false;
                 }
 
@@ -96,7 +97,7 @@ angular.module('eventStreamService', [])
         );
 
         return deferred.promise;
-    }
+    };
 
     var startEventStream = function() {
         settings.shouldPoll = true;
@@ -110,18 +111,17 @@ angular.module('eventStreamService', [])
             for (var i = 0; i < rooms.length; ++i) {
                 var room = rooms[i];
                 if ("state" in room) {
-                    for (var j = 0; j < room.state.length; ++j) {
-                        eventHandlerService.handleEvents(room.state[j], false);
-                    }
+                    eventHandlerService.handleEvents(room.state, false);
                 }
             }
 
             var presence = response.data.presence;
-            for (var i = 0; i < presence.length; ++i) {
-                eventHandlerService.handleEvent(presence[i], false);
-            }
+            eventHandlerService.handleEvents(presence, false);
 
-            settings.from = response.data.end
+            // Initial sync is done
+            eventHandlerService.handleInitialSyncDone();
+
+            settings.from = response.data.end;
             doEventStream(deferred);
         },
         function(error) {
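Splitting `TIMEOUT_MS` into `SERVER_TIMEOUT_MS` (30 s, sent as the `timeout` query parameter of the `/events` long-poll) and `CLIENT_TIMEOUT_MS` (40 s, applied to the HTTP request itself) keeps the client from aborting a request the server is still legitimately holding open. The relationship can be sketched as (request-dict shape is illustrative):

```python
SERVER_TIMEOUT_MS = 30000   # how long /events may hold the connection open
CLIENT_TIMEOUT_MS = 40000   # when the HTTP client itself gives up

def event_stream_request(from_token):
    """Build the long-poll request; client timeout must exceed server's."""
    assert CLIENT_TIMEOUT_MS > SERVER_TIMEOUT_MS, \
        "client would abort before the server responds"
    return {
        "path": "/events",
        "params": {"from": from_token, "timeout": SERVER_TIMEOUT_MS},
        "http_timeout_ms": CLIENT_TIMEOUT_MS,
    }

req = event_stream_request("END")
print(req["params"]["timeout"])   # 30000
```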
|
webclient/components/matrix/matrix-call.js (new file, 268 lines)
@@ -0,0 +1,268 @@
+/*
+Copyright 2014 matrix.org
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+'use strict';
+
+var forAllVideoTracksOnStream = function(s, f) {
+    var tracks = s.getVideoTracks();
+    for (var i = 0; i < tracks.length; i++) {
+        f(tracks[i]);
+    }
+}
+
+var forAllAudioTracksOnStream = function(s, f) {
+    var tracks = s.getAudioTracks();
+    for (var i = 0; i < tracks.length; i++) {
+        f(tracks[i]);
+    }
+}
+
+var forAllTracksOnStream = function(s, f) {
+    forAllVideoTracksOnStream(s, f);
+    forAllAudioTracksOnStream(s, f);
+}
+
+angular.module('MatrixCall', [])
+.factory('MatrixCall', ['matrixService', 'matrixPhoneService', function MatrixCallFactory(matrixService, matrixPhoneService) {
+    var MatrixCall = function(room_id) {
+        this.room_id = room_id;
+        this.call_id = "c" + new Date().getTime();
+        this.state = 'fledgling';
+    }
+
+    navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
+
+    window.RTCPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
+
+    MatrixCall.prototype.placeCall = function() {
+        self = this;
+        matrixPhoneService.callPlaced(this);
+        navigator.getUserMedia({audio: true, video: false}, function(s) { self.gotUserMediaForInvite(s); }, function(e) { self.getUserMediaFailed(e); });
+        self.state = 'wait_local_media';
+    };
+
+    MatrixCall.prototype.initWithInvite = function(msg) {
+        this.msg = msg;
+        this.peerConn = new window.RTCPeerConnection({"iceServers":[{"urls":"stun:stun.l.google.com:19302"}]})
+        self= this;
+        this.peerConn.oniceconnectionstatechange = function() { self.onIceConnectionStateChanged(); };
+        this.peerConn.onicecandidate = function(c) { self.gotLocalIceCandidate(c); };
+        this.peerConn.onsignalingstatechange = function() { self.onSignallingStateChanged(); };
+        this.peerConn.onaddstream = function(s) { self.onAddStream(s); };
+        this.peerConn.setRemoteDescription(new RTCSessionDescription(this.msg.offer), self.onSetRemoteDescriptionSuccess, self.onSetRemoteDescriptionError);
+        this.state = 'ringing';
+    };
+
+    MatrixCall.prototype.answer = function() {
+        console.trace("Answering call "+this.call_id);
+        self = this;
+        navigator.getUserMedia({audio: true, video: false}, function(s) { self.gotUserMediaForAnswer(s); }, function(e) { self.getUserMediaFailed(e); });
+        this.state = 'wait_local_media';
+    };
+
+    MatrixCall.prototype.hangup = function() {
+        console.trace("Ending call "+this.call_id);
+
+        if (this.localAVStream) {
+            forAllTracksOnStream(this.localAVStream, function(t) {
+                t.stop();
+            });
+        }
+        if (this.remoteAVStream) {
+            forAllTracksOnStream(this.remoteAVStream, function(t) {
+                t.stop();
+            });
+        }
+
+        var content = {
+            version: 0,
+            call_id: this.call_id,
+        };
+        matrixService.sendEvent(this.room_id, 'm.call.hangup', undefined, content).then(this.messageSent, this.messageSendFailed);
+        this.state = 'ended';
+    };
+
+    MatrixCall.prototype.gotUserMediaForInvite = function(stream) {
+        this.localAVStream = stream;
+        var audioTracks = stream.getAudioTracks();
+        for (var i = 0; i < audioTracks.length; i++) {
+            audioTracks[i].enabled = true;
+        }
+        this.peerConn = new window.RTCPeerConnection({"iceServers":[{"urls":"stun:stun.l.google.com:19302"}]})
+        self = this;
+        this.peerConn.oniceconnectionstatechange = function() { self.onIceConnectionStateChanged(); };
+        this.peerConn.onsignalingstatechange = function() { self.onSignallingStateChanged(); };
+        this.peerConn.onicecandidate = function(c) { self.gotLocalIceCandidate(c); };
+        this.peerConn.onaddstream = function(s) { self.onAddStream(s); };
+        this.peerConn.addStream(stream);
+        this.peerConn.createOffer(function(d) {
+            self.gotLocalOffer(d);
+        }, function(e) {
+            self.getLocalOfferFailed(e);
+        });
+        this.state = 'create_offer';
+    };
+
+    MatrixCall.prototype.gotUserMediaForAnswer = function(stream) {
+        this.localAVStream = stream;
+        var audioTracks = stream.getAudioTracks();
+        for (var i = 0; i < audioTracks.length; i++) {
+            audioTracks[i].enabled = true;
+        }
+        this.peerConn.addStream(stream);
+        self = this;
+        var constraints = {
+            'mandatory': {
+                'OfferToReceiveAudio': true,
+                'OfferToReceiveVideo': false
+            },
+        };
+        this.peerConn.createAnswer(function(d) { self.createdAnswer(d); }, function(e) {}, constraints);
+        this.state = 'create_answer';
+    };
+
+    MatrixCall.prototype.gotLocalIceCandidate = function(event) {
+        console.trace(event);
+        if (event.candidate) {
+            var content = {
+                version: 0,
+                call_id: this.call_id,
+                candidate: event.candidate
+            };
+            matrixService.sendEvent(this.room_id, 'm.call.candidate', undefined, content).then(this.messageSent, this.messageSendFailed);
+        }
+    }
+
+    MatrixCall.prototype.gotRemoteIceCandidate = function(cand) {
+        console.trace("Got ICE candidate from remote: "+cand);
+        var candidateObject = new RTCIceCandidate({
+            sdpMLineIndex: cand.label,
+            candidate: cand.candidate
+        });
+        this.peerConn.addIceCandidate(candidateObject, function() {}, function(e) {});
+    };
+
+    MatrixCall.prototype.receivedAnswer = function(msg) {
+        this.peerConn.setRemoteDescription(new RTCSessionDescription(msg.answer), self.onSetRemoteDescriptionSuccess, self.onSetRemoteDescriptionError);
+        this.state = 'connecting';
+    };
+
+    MatrixCall.prototype.gotLocalOffer = function(description) {
+        console.trace("Created offer: "+description);
+        this.peerConn.setLocalDescription(description);
+
+        var content = {
+            version: 0,
+            call_id: this.call_id,
+            offer: description
+        };
+        matrixService.sendEvent(this.room_id, 'm.call.invite', undefined, content).then(this.messageSent, this.messageSendFailed);
+        this.state = 'invite_sent';
+    };
+
+    MatrixCall.prototype.createdAnswer = function(description) {
+        console.trace("Created answer: "+description);
+        this.peerConn.setLocalDescription(description);
+        var content = {
+            version: 0,
+            call_id: this.call_id,
+            answer: description
+        };
+        matrixService.sendEvent(this.room_id, 'm.call.answer', undefined, content).then(this.messageSent, this.messageSendFailed);
+        this.state = 'connecting';
+    };
+
+    MatrixCall.prototype.messageSent = function() {
+    };
+
+    MatrixCall.prototype.messageSendFailed = function(error) {
+    };
+
+    MatrixCall.prototype.getLocalOfferFailed = function(error) {
+        this.onError("Failed to start audio for call!");
+    };
+
+    MatrixCall.prototype.getUserMediaFailed = function() {
+        this.onError("Couldn't start capturing audio! Is your microphone set up?");
+    };
+
+    MatrixCall.prototype.onIceConnectionStateChanged = function() {
+        console.trace("Ice connection state changed to: "+this.peerConn.iceConnectionState);
+        // ideally we'd consider the call to be connected when we get media but chrome doesn't implement nay of the 'onstarted' events yet
+        if (this.peerConn.iceConnectionState == 'completed' || this.peerConn.iceConnectionState == 'connected') {
+            this.state = 'connected';
+        }
+    };
+
+    MatrixCall.prototype.onSignallingStateChanged = function() {
+        console.trace("Signalling state changed to: "+this.peerConn.signalingState);
+    };
+
+    MatrixCall.prototype.onSetRemoteDescriptionSuccess = function() {
+        console.trace("Set remote description");
+    };
+
+    MatrixCall.prototype.onSetRemoteDescriptionError = function(e) {
+        console.trace("Failed to set remote description"+e);
+    };
+
+    MatrixCall.prototype.onAddStream = function(event) {
+        console.trace("Stream added"+event);
+
+        var s = event.stream;
+
+        this.remoteAVStream = s;
+
+        var self = this;
+        forAllTracksOnStream(s, function(t) {
+            // not currently implemented in chrome
+            t.onstarted = self.onRemoteStreamTrackStarted;
+        });
+
+        // not currently implemented in chrome
+        event.stream.onstarted = this.onRemoteStreamStarted;
+        var player = new Audio();
+        player.src = URL.createObjectURL(s);
+        player.play();
+    };
+
+    MatrixCall.prototype.onRemoteStreamStarted = function(event) {
+        this.state = 'connected';
+    };
+
+    MatrixCall.prototype.onRemoteStreamTrackStarted = function(event) {
+        this.state = 'connected';
+    };
+
+    MatrixCall.prototype.onHangupReceived = function() {
+        this.state = 'ended';
+
+        if (this.localAVStream) {
+            forAllTracksOnStream(this.localAVStream, function(t) {
+                t.stop();
+            });
+        }
+        if (this.remoteAVStream) {
+            forAllTracksOnStream(this.remoteAVStream, function(t) {
+                t.stop();
+            });
+        }
+
+        this.onHangup();
+    };
+
+    return MatrixCall;
+}]);
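matrix-call.js threads each call through a state string: 'fledgling', then 'wait_local_media', then 'create_offer' or 'create_answer', then 'invite_sent'/'connecting', then 'connected', with 'ended' reachable from anywhere via hangup. The states are taken from the file above; the action labels in this tabulation are my own names for the methods that perform each transition:

```python
# State transitions assigned in matrix-call.js, keyed by (state, action).
CALL_TRANSITIONS = {
    ("fledgling", "place_call"):                  "wait_local_media",
    ("fledgling", "incoming_invite"):             "ringing",
    ("ringing", "answer"):                        "wait_local_media",
    ("wait_local_media", "got_media_for_invite"): "create_offer",
    ("wait_local_media", "got_media_for_answer"): "create_answer",
    ("create_offer", "local_offer"):              "invite_sent",
    ("invite_sent", "received_answer"):           "connecting",
    ("create_answer", "created_answer"):          "connecting",
    ("connecting", "ice_connected"):              "connected",
}

def advance(state, action):
    # Any state may be torn down by a hangup.
    if action == "hangup":
        return "ended"
    return CALL_TRANSITIONS[(state, action)]

# Walk the outbound-call path from creation to connected media.
s = "fledgling"
for action in ("place_call", "got_media_for_invite", "local_offer",
               "received_answer", "ice_connected"):
    s = advance(s, action)
print(s)   # connected
```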
webclient/components/matrix/matrix-phone-service.js (new file, 68 lines)
@@ -0,0 +1,68 @@
+/*
+Copyright 2014 matrix.org
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+'use strict';
+
+angular.module('matrixPhoneService', [])
+.factory('matrixPhoneService', ['$rootScope', '$injector', 'matrixService', 'eventHandlerService', function MatrixPhoneService($rootScope, $injector, matrixService, eventHandlerService) {
+    var matrixPhoneService = function() {
+    };
+
+    matrixPhoneService.INCOMING_CALL_EVENT = "INCOMING_CALL_EVENT";
+    matrixPhoneService.allCalls = {};
+
+    matrixPhoneService.callPlaced = function(call) {
+        matrixPhoneService.allCalls[call.call_id] = call;
+    };
+
+    $rootScope.$on(eventHandlerService.CALL_EVENT, function(ngEvent, event, isLive) {
+        if (!isLive) return; // until matrix supports expiring messages
+        if (event.user_id == matrixService.config().user_id) return;
+        var msg = event.content;
+        if (event.type == 'm.call.invite') {
+            var MatrixCall = $injector.get('MatrixCall');
+            var call = new MatrixCall(event.room_id);
+            call.call_id = msg.call_id;
+            call.initWithInvite(msg);
+            matrixPhoneService.allCalls[call.call_id] = call;
+            $rootScope.$broadcast(matrixPhoneService.INCOMING_CALL_EVENT, call);
+        } else if (event.type == 'm.call.answer') {
+            var call = matrixPhoneService.allCalls[msg.call_id];
+            if (!call) {
+                console.trace("Got answer for unknown call ID "+msg.call_id);
+                return;
+            }
+            call.receivedAnswer(msg);
+        } else if (event.type == 'm.call.candidate') {
+            var call = matrixPhoneService.allCalls[msg.call_id];
+            if (!call) {
+                console.trace("Got candidate for unknown call ID "+msg.call_id);
+                return;
+            }
+            call.gotRemoteIceCandidate(msg.candidate);
+        } else if (event.type == 'm.call.hangup') {
+            var call = matrixPhoneService.allCalls[msg.call_id];
+            if (!call) {
+                console.trace("Got hangup for unknown call ID "+msg.call_id);
+                return;
+            }
+            call.onHangupReceived();
+            matrixPhoneService.allCalls[msg.call_id] = undefined;
+        }
+    });
+
+    return matrixPhoneService;
+}]);
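matrix-phone-service.js routes every `m.call.*` event through its `allCalls` map keyed by `call_id`: an `m.call.invite` creates the call entry, later events look it up (ignoring unknown IDs), and a hangup removes it. The routing skeleton in minimal Python (class name and state strings are stand-ins for illustration):

```python
class PhoneService:
    """Sketch of matrixPhoneService's per-call_id event routing."""

    def __init__(self):
        self.all_calls = {}   # call_id -> call state dict

    def on_event(self, event_type, content):
        call_id = content["call_id"]
        if event_type == "m.call.invite":
            # invites create the call entry
            self.all_calls[call_id] = {"state": "ringing"}
        elif call_id not in self.all_calls:
            # answer/candidate/hangup for a call we never saw: ignore
            print("Got %s for unknown call ID %s" % (event_type, call_id))
        elif event_type == "m.call.answer":
            self.all_calls[call_id]["state"] = "connecting"
        elif event_type == "m.call.hangup":
            self.all_calls[call_id]["state"] = "ended"
            del self.all_calls[call_id]   # forget ended calls

svc = PhoneService()
svc.on_event("m.call.invite", {"call_id": "c1"})
svc.on_event("m.call.answer", {"call_id": "c1"})
print(svc.all_calls["c1"]["state"])   # connecting
svc.on_event("m.call.hangup", {"call_id": "c1"})
print(svc.all_calls)                  # {}
```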
@ -41,7 +41,7 @@ angular.module('matrixService', [])
|
|||||||
var prefixPath = "/matrix/client/api/v1";
|
var prefixPath = "/matrix/client/api/v1";
|
||||||
var MAPPING_PREFIX = "alias_for_";
|
var MAPPING_PREFIX = "alias_for_";
|
||||||
|
|
||||||
var doRequest = function(method, path, params, data) {
|
var doRequest = function(method, path, params, data, $httpParams) {
|
||||||
if (!config) {
|
if (!config) {
|
||||||
console.warn("No config exists. Cannot perform request to "+path);
|
console.warn("No config exists. Cannot perform request to "+path);
|
||||||
return;
|
return;
|
||||||
@ -58,7 +58,7 @@ angular.module('matrixService', [])
|
|||||||
path = prefixPath + path;
|
path = prefixPath + path;
|
||||||
}
|
}
|
||||||
|
|
||||||
return doBaseRequest(config.homeserver, method, path, params, data, undefined);
|
return doBaseRequest(config.homeserver, method, path, params, data, undefined, $httpParams);
|
||||||
};
|
};
|
||||||
|
|
||||||
var doBaseRequest = function(baseUrl, method, path, params, data, headers, $httpParams) {
|
var doBaseRequest = function(baseUrl, method, path, params, data, headers, $httpParams) {
|
||||||
@@ -172,9 +172,9 @@ angular.module('matrixService', [])
            return doRequest("GET", path, undefined, {});
        },
 
-        sendMessage: function(room_id, txn_id, content) {
+        sendEvent: function(room_id, eventType, txn_id, content) {
            // The REST path spec
-            var path = "/rooms/$room_id/send/m.room.message/$txn_id";
+            var path = "/rooms/$room_id/send/"+eventType+"/$txn_id";
 
            if (!txn_id) {
                txn_id = "m" + new Date().getTime();
@@ -190,6 +190,10 @@ angular.module('matrixService', [])
            return doRequest("PUT", path, undefined, content);
        },
 
+        sendMessage: function(room_id, txn_id, content) {
+            return this.sendEvent(room_id, 'm.room.message', txn_id, content);
+        },
+
        // Send a text message
        sendTextMessage: function(room_id, body, msg_id) {
            var content = {
@@ -343,15 +347,31 @@ angular.module('matrixService', [])
 
            return doBaseRequest(config.homeserver, "POST", path, params, file, headers, $httpParams);
        },
 
-        // start listening on /events
-        getEventStream: function(from, timeout) {
+        /**
+         * Start listening on /events
+         * @param {String} from the token from which to listen events to
+         * @param {Integer} serverTimeout the time in ms the server will hold open the connection
+         * @param {Integer} clientTimeout the timeout in ms used at the client HTTP request level
+         * @returns a promise
+         */
+        getEventStream: function(from, serverTimeout, clientTimeout) {
            var path = "/events";
            var params = {
                from: from,
-                timeout: timeout
+                timeout: serverTimeout
            };
-            return doRequest("GET", path, params);
+
+            var $httpParams;
+            if (clientTimeout) {
+                // If the Internet connection is lost, this timeout is used to be able to
+                // cancel the current request and notify the client so that it can retry with a new request.
+                $httpParams = {
+                    timeout: clientTimeout
+                };
+            }
+
+            return doRequest("GET", path, params, undefined, $httpParams);
        },
 
        // Indicates if user authentications details are stored in cache
@@ -420,34 +440,38 @@ angular.module('matrixService', [])
        /****** Room aliases management ******/
 
        /**
-         * Enhance data returned by rooms() and publicRooms() by adding room_alias
-         * & room_display_name which are computed from data already retrieved from the server.
-         * @param {Array} data the response of rooms() and publicRooms()
-         * @returns {Array} the same array with enriched objects
+         * Get the room_alias & room_display_name which are computed from data
+         * already retrieved from the server.
+         * @param {Room object} room one element of the array returned by the response
+         * of rooms() and publicRooms()
+         * @returns {Object} {room_alias: "...", room_display_name: "..."}
         */
-        assignRoomAliases: function(data) {
-            for (var i=0; i<data.length; i++) {
-                var alias = this.getRoomIdToAliasMapping(data[i].room_id);
-                if (alias) {
-                    // use the existing alias from storage
-                    data[i].room_alias = alias;
-                    data[i].room_display_name = alias;
-                }
-                else if (data[i].aliases && data[i].aliases[0]) {
-                    // save the mapping
-                    // TODO: select the smarter alias from the array
-                    this.createRoomIdToAliasMapping(data[i].room_id, data[i].aliases[0]);
-                    data[i].room_display_name = data[i].aliases[0];
-                }
-                else if (data[i].membership == "invite" && "inviter" in data[i]) {
-                    data[i].room_display_name = data[i].inviter + "'s room"
-                }
-                else {
-                    // last resort use the room id
-                    data[i].room_display_name = data[i].room_id;
-                }
-            }
-            return data;
+        getRoomAliasAndDisplayName: function(room) {
+            var result = {
+                room_alias: undefined,
+                room_display_name: undefined
+            };
+
+            var alias = this.getRoomIdToAliasMapping(room.room_id);
+            if (alias) {
+                // use the existing alias from storage
+                result.room_alias = alias;
+                result.room_display_name = alias;
+            }
+            else if (room.aliases && room.aliases[0]) {
+                // save the mapping
+                // TODO: select the smarter alias from the array
+                this.createRoomIdToAliasMapping(room.room_id, room.aliases[0]);
+                result.room_display_name = room.aliases[0];
+            }
+            else if (room.membership === "invite" && "inviter" in room) {
+                result.room_display_name = room.inviter + "'s room";
+            }
+            else {
+                // last resort use the room id
+                result.room_display_name = room.room_id;
+            }
+            return result;
        },
 
        createRoomIdToAliasMapping: function(roomId, alias) {
@@ -17,8 +17,8 @@ limitations under the License.
 'use strict';
 
 angular.module('HomeController', ['matrixService', 'eventHandlerService', 'RecentsController'])
-.controller('HomeController', ['$scope', '$location', 'matrixService', 'eventHandlerService', 'eventStreamService',
-                               function($scope, $location, matrixService, eventHandlerService, eventStreamService) {
+.controller('HomeController', ['$scope', '$location', 'matrixService',
+                               function($scope, $location, matrixService) {
 
     $scope.config = matrixService.config();
     $scope.public_rooms = [];
@@ -42,7 +42,13 @@ angular.module('HomeController', ['matrixService', 'eventHandlerService', 'Recen
 
        matrixService.publicRooms().then(
            function(response) {
-                $scope.public_rooms = matrixService.assignRoomAliases(response.data.chunk);
+                $scope.public_rooms = response.data.chunk;
+                for (var i = 0; i < $scope.public_rooms.length; i++) {
+                    var room = $scope.public_rooms[i];
+
+                    // Add room_alias & room_display_name members
+                    angular.extend(room, matrixService.getRoomAliasAndDisplayName(room));
+                }
            }
        );
    };
@@ -26,6 +26,8 @@
        <script src="settings/settings-controller.js"></script>
        <script src="user/user-controller.js"></script>
        <script src="components/matrix/matrix-service.js"></script>
+        <script src="components/matrix/matrix-call.js"></script>
+        <script src="components/matrix/matrix-phone-service.js"></script>
        <script src="components/matrix/event-stream-service.js"></script>
        <script src="components/matrix/event-handler-service.js"></script>
        <script src="components/matrix/presence-service.js"></script>
@@ -17,21 +17,34 @@
 'use strict';
 
 angular.module('RecentsController', ['matrixService', 'eventHandlerService'])
-.controller('RecentsController', ['$scope', 'matrixService', 'eventHandlerService', 'eventStreamService',
-                               function($scope, matrixService, eventHandlerService, eventStreamService) {
+.controller('RecentsController', ['$scope', 'matrixService', 'eventHandlerService',
+                               function($scope, matrixService, eventHandlerService) {
    $scope.rooms = {};
 
    // $scope of the parent where the recents component is included can override this value
    // in order to highlight a specific room in the list
    $scope.recentsSelectedRoomID;
 
-    // Refresh the list on matrix invitation and message event
-    $scope.$on(eventHandlerService.MEMBER_EVENT, function(ngEvent, event, isLive) {
-        refresh();
-    });
-    $scope.$on(eventHandlerService.MSG_EVENT, function(ngEvent, event, isLive) {
-        refresh();
-    });
+    var listenToEventStream = function() {
+        // Refresh the list on matrix invitation and message event
+        $scope.$on(eventHandlerService.MEMBER_EVENT, function(ngEvent, event, isLive) {
+            var config = matrixService.config();
+            if (isLive && event.state_key === config.user_id && event.content.membership === "invite") {
+                console.log("Invited to room " + event.room_id);
+                // FIXME push membership to top level key to match /im/sync
+                event.membership = event.content.membership;
+                // FIXME bodge a nicer name than the room ID for this invite.
+                event.room_display_name = event.user_id + "'s room";
+                $scope.rooms[event.room_id] = event;
+            }
+        });
+        $scope.$on(eventHandlerService.MSG_EVENT, function(ngEvent, event, isLive) {
+            if (isLive) {
+                $scope.rooms[event.room_id].lastMsg = event;
+            }
+        });
+    };
+
 
    var refresh = function() {
        // List all rooms joined or been invited to
@@ -42,13 +55,16 @@ angular.module('RecentsController', ['matrixService', 'eventHandlerService'])
                // Reset data
                $scope.rooms = {};
 
-                var data = matrixService.assignRoomAliases(response.data.rooms);
-                for (var i=0; i<data.length; i++) {
-                    $scope.rooms[data[i].room_id] = data[i];
+                var rooms = response.data.rooms;
+                for (var i=0; i<rooms.length; i++) {
+                    var room = rooms[i];
+
+                    // Add room_alias & room_display_name members
+                    $scope.rooms[room.room_id] = angular.extend(room, matrixService.getRoomAliasAndDisplayName(room));
 
                    // Create a shortcut for the last message of this room
-                    if (data[i].messages && data[i].messages.chunk && data[i].messages.chunk[0]) {
-                        $scope.rooms[data[i].room_id].lastMsg = data[i].messages.chunk[0];
+                    if (room.messages && room.messages.chunk && room.messages.chunk[0]) {
+                        $scope.rooms[room.room_id].lastMsg = room.messages.chunk[0];
                    }
                }
 
@@ -56,6 +72,9 @@ angular.module('RecentsController', ['matrixService', 'eventHandlerService'])
                for (var i = 0; i < presence.length; ++i) {
                    eventHandlerService.handleEvent(presence[i], false);
                }
+
+                // From now, update recents from the stream
+                listenToEventStream();
            },
            function(error) {
                $scope.feedback = "Failure: " + error.data;
@@ -39,6 +39,11 @@
                {{ room.lastMsg.user_id }} sent an image
            </div>
 
+            <div ng-switch-when="m.emote">
+                <span ng-bind-html="'* ' + (room.lastMsg.user_id) + ' ' + room.lastMsg.content.body | linky:'_blank'">
+                </span>
+            </div>
+
            <div ng-switch-default>
                {{ room.lastMsg.content }}
            </div>
@@ -14,9 +14,9 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */
 
-angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
-.controller('RoomController', ['$scope', '$http', '$timeout', '$routeParams', '$location', 'matrixService', 'eventStreamService', 'eventHandlerService', 'mFileUpload', 'mUtilities', '$rootScope',
-                               function($scope, $http, $timeout, $routeParams, $location, matrixService, eventStreamService, eventHandlerService, mFileUpload, mUtilities, $rootScope) {
+angular.module('RoomController', ['ngSanitize', 'mFileInput'])
+.controller('RoomController', ['$scope', '$timeout', '$routeParams', '$location', '$rootScope', 'matrixService', 'eventHandlerService', 'mFileUpload', 'mPresence', 'matrixPhoneService', 'MatrixCall',
+                               function($scope, $timeout, $routeParams, $location, $rootScope, matrixService, eventHandlerService, mFileUpload, mPresence, matrixPhoneService, MatrixCall) {
    'use strict';
    var MESSAGES_PER_PAGINATION = 30;
    var THUMBNAIL_SIZE = 320;
@@ -51,21 +51,20 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
            objDiv.scrollTop = objDiv.scrollHeight;
        }, 0);
    };
 
    $scope.$on(eventHandlerService.MSG_EVENT, function(ngEvent, event, isLive) {
        if (isLive && event.room_id === $scope.room_id) {
            scrollToBottom();
 
            if (window.Notification) {
-                // FIXME: we should also notify based on a timer or other heuristics
-                // rather than the window being minimised
-                if (document.hidden) {
+                // Show notification when the user is idle
+                if (matrixService.presence.offline === mPresence.getState()) {
                    var notification = new window.Notification(
                        ($scope.members[event.user_id].displayname || event.user_id) +
                        " (" + ($scope.room_alias || $scope.room_id) + ")", // FIXME: don't leak room_ids here
                        {
                            "body": event.content.body,
-                            "icon": $scope.members[event.user_id].avatar_url,
+                            "icon": $scope.members[event.user_id].avatar_url
                        });
                    $timeout(function() {
                        notification.close();
@@ -82,6 +81,17 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
    $scope.$on(eventHandlerService.PRESENCE_EVENT, function(ngEvent, event, isLive) {
        updatePresence(event);
    });
+
+    $rootScope.$on(matrixPhoneService.INCOMING_CALL_EVENT, function(ngEvent, call) {
+        console.trace("incoming call");
+        call.onError = $scope.onCallError;
+        call.onHangup = $scope.onCallHangup;
+        $scope.currentCall = call;
+    });
+
+    $scope.memberCount = function() {
+        return Object.keys($scope.members).length;
+    };
 
    $scope.paginateMore = function() {
        if ($scope.state.can_paginate) {
@@ -89,6 +99,15 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
            paginate(MESSAGES_PER_PAGINATION);
        }
    };
+
+    $scope.answerCall = function() {
+        $scope.currentCall.answer();
+    };
+
+    $scope.hangupCall = function() {
+        $scope.currentCall.hangup();
+        $scope.currentCall = undefined;
+    };
 
    var paginate = function(numItems) {
        // console.log("paginate " + numItems);
@@ -214,7 +233,7 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
            var member = $scope.members[target_user_id];
            member.content.membership = chunk.content.membership;
        }
-    }
+    };
 
    var updatePresence = function(chunk) {
        if (!(chunk.content.user_id in $scope.members)) {
@@ -241,10 +260,10 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
        if ("avatar_url" in chunk.content) {
            member.avatar_url = chunk.content.avatar_url;
        }
-    }
+    };
 
    $scope.send = function() {
-        if ($scope.textInput == "") {
+        if ($scope.textInput === "") {
            return;
        }
 
@@ -253,7 +272,7 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
        // Send the text message
        var promise;
        // FIXME: handle other commands too
-        if ($scope.textInput.indexOf("/me") == 0) {
+        if ($scope.textInput.indexOf("/me") === 0) {
            promise = matrixService.sendEmoteMessage($scope.room_id, $scope.textInput.substr(4));
        }
        else {
@@ -282,7 +301,7 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
        }
 
        if (room_id_or_alias && '!' === room_id_or_alias[0]) {
-            // Yes. We can start right now
+            // Yes. We can go on right now
            $scope.room_id = room_id_or_alias;
            $scope.room_alias = matrixService.getRoomIdToAliasMapping($scope.room_id);
            onInit2();
@@ -313,7 +332,7 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
                    $scope.room_id = response.data.room_id;
                    console.log(" -> Room ID: " + $scope.room_id);
 
-                    // Now, we can start
+                    // Now, we can go on
                    onInit2();
                },
                function () {
@@ -323,36 +342,61 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
                });
        }
    };
 
    var onInit2 = function() {
-        eventHandlerService.reInitRoom($scope.room_id);
+        console.log("onInit2");
+
+        // Make sure the initialSync has been before going further
+        eventHandlerService.waitForInitialSyncCompletion().then(
+            function() {
+                var needsToJoin = true;
+
+                // The room members is available in the data fetched by initialSync
+                if ($rootScope.events.rooms[$scope.room_id]) {
+                    var members = $rootScope.events.rooms[$scope.room_id].members;
+
+                    // Update the member list
+                    for (var i in members) {
+                        var member = members[i];
+                        updateMemberList(member);
+                    }
+
+                    // Check if the user has already join the room
+                    if ($scope.state.user_id in members) {
+                        if ("join" === members[$scope.state.user_id].membership) {
+                            needsToJoin = false;
+                        }
+                    }
+                }
+
+                // Do we to join the room before starting?
+                if (needsToJoin) {
+                    matrixService.join($scope.room_id).then(
+                        function() {
+                            console.log("Joined room "+$scope.room_id);
+                            onInit3();
+                        },
+                        function(reason) {
+                            $scope.feedback = "Can't join room: " + reason;
+                        });
+                }
+                else {
+                    onInit3();
+                }
+            }
+        );
+    };
+
+    var onInit3 = function() {
+        console.log("onInit3");
+
+        // TODO: We should be able to keep them
+        eventHandlerService.resetRoomMessages($scope.room_id);
 
        // Make recents highlight the current room
        $scope.recentsSelectedRoomID = $scope.room_id;
 
-        // Join the room
-        matrixService.join($scope.room_id).then(
-            function() {
-                console.log("Joined room "+$scope.room_id);
-
-                // Get the current member list
-                matrixService.getMemberList($scope.room_id).then(
-                    function(response) {
-                        for (var i = 0; i < response.data.chunk.length; i++) {
-                            var chunk = response.data.chunk[i];
-                            updateMemberList(chunk);
-                        }
-                    },
-                    function(error) {
-                        $scope.feedback = "Failed get member list: " + error.data.error;
-                    }
-                );
-
-                paginate(MESSAGES_PER_PAGINATION);
-            },
-            function(reason) {
-                $scope.feedback = "Can't join room: " + reason;
-            });
+        paginate(MESSAGES_PER_PAGINATION);
    };
 
    $scope.inviteUser = function(user_id) {
@@ -429,4 +473,21 @@ angular.module('RoomController', ['ngSanitize', 'mFileInput', 'mUtilities'])
    $scope.loadMoreHistory = function() {
        paginate(MESSAGES_PER_PAGINATION);
    };
+
+    $scope.startVoiceCall = function() {
+        var call = new MatrixCall($scope.room_id);
+        call.onError = $scope.onCallError;
+        call.onHangup = $scope.onCallHangup;
+        call.placeCall();
+        $scope.currentCall = call;
+    }
+
+    $scope.onCallError = function(errStr) {
+        $scope.feedback = errStr;
+    }
+
+    $scope.onCallHangup = function() {
+        $scope.feedback = "Call ended";
+        $scope.currentCall = undefined;
+    }
 }]);
@@ -45,13 +45,13 @@
                </td>
                <td ng-class="!msg.content.membership ? (msg.content.msgtype === 'm.emote' ? 'emote text' : 'text') : 'membership text'">
                    <div class="bubble">
-                        <span ng-hide='msg.type !== "m.room.member"'>
+                        <span ng-show='msg.type === "m.room.member"'>
                            {{ members[msg.user_id].displayname || msg.user_id }}
                            {{ {"join": "joined", "leave": "left", "invite": "invited"}[msg.content.membership] }}
                            {{ msg.content.membership === "invite" ? (msg.state_key || '') : '' }}
                        </span>
-                        <span ng-hide='msg.content.msgtype !== "m.emote"' ng-bind-html="'* ' + (members[msg.user_id].displayname || msg.user_id) + ' ' + msg.content.body | linky:'_blank'"/>
-                        <span ng-hide='msg.content.msgtype !== "m.text"' ng-bind-html="((msg.content.msgtype === 'm.text') ? msg.content.body : '') | linky:'_blank'"/>
+                        <span ng-show='msg.content.msgtype === "m.emote"' ng-bind-html="'* ' + (members[msg.user_id].displayname || msg.user_id) + ' ' + msg.content.body | linky:'_blank'"/>
+                        <span ng-show='msg.content.msgtype === "m.text"' ng-bind-html="((msg.content.msgtype === 'm.text') ? msg.content.body : '') | linky:'_blank'"/>
                        <div ng-show='msg.content.msgtype === "m.image"'>
                            <div ng-hide='msg.content.thumbnail_url' ng-style="msg.content.body.h && { 'height' : (msg.content.body.h < 320) ? msg.content.body.h : 320}">
                                <img class="image" ng-src="{{ msg.content.url }}"/>
@@ -98,10 +98,18 @@
            <button ng-click="inviteUser(userIDToInvite)">Invite</button>
        </span>
        <button ng-click="leaveRoom()">Leave</button>
+        <button ng-click="startVoiceCall()" ng-show="currentCall == undefined && memberCount() == 2">Voice Call</button>
+        <div ng-show="currentCall.state == 'ringing'">
+            Incoming call from {{ currentCall.user_id }}
+            <button ng-click="answerCall()">Answer</button>
+            <button ng-click="hangupCall()">Reject</button>
+        </div>
+        <button ng-click="hangupCall()" ng-show="currentCall && currentCall.state != 'ringing'">Hang up</button>
+        <span style="display: none; ">{{ currentCall.state }}</span>
    </div>
 
    {{ feedback }}
-    <div ng-hide="!state.stream_failure">
+    <div ng-show="state.stream_failure">
        {{ state.stream_failure.data.error || "Connection failure" }}
    </div>
 </div>