Mirror of https://mau.dev/maunium/synapse.git (synced 2024-10-01 01:36:05 -04:00)

Compare commits: 58d466f3c9 ... ed27557276 (33 commits)
Commits (SHA1):
ed27557276
966a50bb63
d6125c583d
da58e55a0b
a5a454fc35
1caff75526
7b75922020
26c1330764
48303fcbcc
53a3783750
b913aaa788
dab88a7b1f
8678516e79
573c6d7e69
689641b903
e75a23a63d
e563e4bdf3
f4032d3e71
8da16e55fe
d9cc0faf4b
cca77af68f
48742da536
940b932405
a2b2f6d09b
defd4aca67
b4d95409fb
f1a1c7fc53
cb9fa062b7
74b75cfd54
87d13fd143
ad2cd9aefd
ad0ee53993
92b38c1afd
.github/workflows/fix_lint.yaml (vendored, 12 lines changed)

@@ -29,17 +29,9 @@ jobs:
         with:
           install-project: "false"
 
-      - name: Import order (isort)
+      - name: Run ruff
         continue-on-error: true
-        run: poetry run isort .
-
-      - name: Code style (black)
-        continue-on-error: true
-        run: poetry run black .
-
-      - name: Semantic checks (ruff)
-        continue-on-error: true
-        run: poetry run ruff --fix .
+        run: poetry run ruff check --fix .
 
       - run: cargo clippy --all-features --fix -- -D warnings
         continue-on-error: true
.github/workflows/tests.yml (vendored, 11 lines changed)

@@ -131,15 +131,8 @@ jobs:
         with:
           install-project: "false"
 
-      - name: Import order (isort)
-        run: poetry run isort --check --diff .
-
-      - name: Code style (black)
-        run: poetry run black --check --diff .
-
-      - name: Semantic checks (ruff)
-        # --quiet suppresses the update check.
-        run: poetry run ruff check --quiet .
+      - name: Check style
+        run: poetry run ruff check --output-format=github .
 
   lint-mypy:
     runs-on: ubuntu-latest
CHANGES.md (51 lines changed)

@@ -1,3 +1,54 @@
+# Synapse 1.114.0rc3 (2024-08-30)
+
+### Bugfixes
+
+- Fix regression in v1.114.0rc2 that caused workers to fail to start. ([\#17626](https://github.com/element-hq/synapse/issues/17626))
+
+
+
+
+# Synapse 1.114.0rc2 (2024-08-30)
+
+### Features
+
+- Improve cross-signing upload when using [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) to use a custom UIA flow stage, with web fallback support. ([\#17509](https://github.com/element-hq/synapse/issues/17509))
+- Make `hash_password` script accept password input from stdin. ([\#17608](https://github.com/element-hq/synapse/issues/17608))
+
+### Bugfixes
+
+- Fix hierarchy returning 403 when room is accessible through federation. Contributed by Krishan (@kfiven). ([\#17194](https://github.com/element-hq/synapse/issues/17194))
+- Fix content-length on federation `/thumbnail` responses. ([\#17532](https://github.com/element-hq/synapse/issues/17532))
+- Fix authenticated media responses using a wrong limit when following redirects over federation. ([\#17543](https://github.com/element-hq/synapse/issues/17543))
+
+### Internal Changes
+
+- MSC3861: load the issuer and account management URLs from OIDC discovery. ([\#17407](https://github.com/element-hq/synapse/issues/17407))
+- Refactor sliding sync class into multiple files. ([\#17595](https://github.com/element-hq/synapse/issues/17595))
+- Store sliding sync per-connection state in the database. ([\#17599](https://github.com/element-hq/synapse/issues/17599))
+- Make the sliding sync `PerConnectionState` class immutable. ([\#17600](https://github.com/element-hq/synapse/issues/17600))
+- Add support to `@tag_args` for standalone functions. ([\#17604](https://github.com/element-hq/synapse/issues/17604))
+- Speed up incremental syncs in sliding sync by adding some more caching. ([\#17606](https://github.com/element-hq/synapse/issues/17606))
+- Always return the user's own read receipts in sliding sync. ([\#17617](https://github.com/element-hq/synapse/issues/17617))
+- Replace `isort` and `black` with `ruff`. ([\#17620](https://github.com/element-hq/synapse/issues/17620))
+- Refactor sliding sync code to move room list logic out into a separate class. ([\#17622](https://github.com/element-hq/synapse/issues/17622))
+
+
+
+### Updates to locked dependencies
+
+* Bump attrs from 23.2.0 to 24.2.0. ([\#17609](https://github.com/element-hq/synapse/issues/17609))
+* Bump cryptography from 42.0.8 to 43.0.0. ([\#17584](https://github.com/element-hq/synapse/issues/17584))
+* Bump phonenumbers from 8.13.43 to 8.13.44. ([\#17610](https://github.com/element-hq/synapse/issues/17610))
+* Bump pygithub from 2.3.0 to 2.4.0. ([\#17612](https://github.com/element-hq/synapse/issues/17612))
+* Bump pyyaml from 6.0.1 to 6.0.2. ([\#17611](https://github.com/element-hq/synapse/issues/17611))
+* Bump sentry-sdk from 2.12.0 to 2.13.0. ([\#17585](https://github.com/element-hq/synapse/issues/17585))
+* Bump serde from 1.0.206 to 1.0.208. ([\#17581](https://github.com/element-hq/synapse/issues/17581))
+* Bump serde from 1.0.208 to 1.0.209. ([\#17613](https://github.com/element-hq/synapse/issues/17613))
+* Bump serde_json from 1.0.124 to 1.0.125. ([\#17582](https://github.com/element-hq/synapse/issues/17582))
+* Bump serde_json from 1.0.125 to 1.0.127. ([\#17614](https://github.com/element-hq/synapse/issues/17614))
+* Bump types-jsonschema from 4.23.0.20240712 to 4.23.0.20240813. ([\#17583](https://github.com/element-hq/synapse/issues/17583))
+* Bump types-setuptools from 71.1.0.20240726 to 71.1.0.20240818. ([\#17586](https://github.com/element-hq/synapse/issues/17586))
+
 # Synapse 1.114.0rc1 (2024-08-20)
 
 ### Features
Cargo.lock (generated, 12 lines changed)

@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 
 [[package]]
 name = "serde"
-version = "1.0.206"
+version = "1.0.209"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5b3e4cd94123dd520a128bcd11e34d9e9e423e7e3e50425cb1b4b1e3549d0284"
+checksum = "99fce0ffe7310761ca6bf9faf5115afbc19688edd00171d81b1bb1b116c63e09"
 dependencies = [
  "serde_derive",
 ]
 
 [[package]]
 name = "serde_derive"
-version = "1.0.206"
+version = "1.0.209"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fabfb6138d2383ea8208cf98ccf69cdfb1aff4088460681d84189aa259762f97"
+checksum = "a5831b979fd7b5439637af1752d535ff49f4860c0f341d1baeb6faf0f4242170"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -505,9 +505,9 @@ dependencies = [
 
 [[package]]
 name = "serde_json"
-version = "1.0.124"
+version = "1.0.127"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "66ad62847a56b3dba58cc891acd13884b9c61138d330c0d7b6181713d4fce38d"
+checksum = "8043c06d9f82bd7271361ed64f415fe5e12a77fdb52e573e7f06a516dea329ad"
 dependencies = [
  "itoa",
  "memchr",
debian/changelog (vendored, 12 lines changed)

@@ -1,3 +1,15 @@
+matrix-synapse-py3 (1.114.0~rc3) stable; urgency=medium
+
+  * New Synapse release 1.114.0rc3.
+
+ -- Synapse Packaging team <packages@matrix.org>  Fri, 30 Aug 2024 16:38:05 +0100
+
+matrix-synapse-py3 (1.114.0~rc2) stable; urgency=medium
+
+  * New Synapse release 1.114.0rc2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Fri, 30 Aug 2024 15:35:13 +0100
+
 matrix-synapse-py3 (1.114.0~rc1) stable; urgency=medium
 
   * New synapse release 1.114.0rc1.
debian/hash_password.1 (vendored, 27 lines changed)

@@ -1,10 +1,13 @@
-.\" generated with Ronn-NG/v0.8.0
-.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
-.TH "HASH_PASSWORD" "1" "July 2021" "" ""
+.\" generated with Ronn-NG/v0.10.1
+.\" http://github.com/apjanke/ronn-ng/tree/0.10.1
+.TH "HASH_PASSWORD" "1" "August 2024" ""
 .SH "NAME"
 \fBhash_password\fR \- Calculate the hash of a new password, so that passwords can be reset
 .SH "SYNOPSIS"
-\fBhash_password\fR [\fB\-p\fR|\fB\-\-password\fR [password]] [\fB\-c\fR|\fB\-\-config\fR \fIfile\fR]
+.TS
+allbox;
+\fBhash_password\fR [\fB\-p\fR \fB\-\-password\fR [password]] [\fB\-c\fR \fB\-\-config\fR \fIfile\fR]
+.TE
 .SH "DESCRIPTION"
 \fBhash_password\fR calculates the hash of a supplied password using bcrypt\.
 .P
@@ -20,7 +23,7 @@ bcrypt_rounds: 17 password_config: pepper: "random hashing pepper"
 .SH "OPTIONS"
 .TP
 \fB\-p\fR, \fB\-\-password\fR
-Read the password form the command line if [password] is supplied\. If not, prompt the user and read the password form the \fBSTDIN\fR\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
+Read the password form the command line if [password] is supplied, or from \fBSTDIN\fR\. If not, prompt the user and read the password from the tty prompt\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
 .TP
 \fB\-c\fR, \fB\-\-config\fR
 Read the supplied YAML \fIfile\fR containing the options \fBbcrypt_rounds\fR and the \fBpassword_config\fR section containing the \fBpepper\fR value\.
@@ -33,7 +36,17 @@ $2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8\.X8fWFpum7SxZ9MFe
 .fi
 .IP "" 0
 .P
-Hash from the STDIN:
+Hash from the stdin:
+.IP "" 4
+.nf
+$ cat password_file | hash_password
+Password:
+Confirm password:
+$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX\.rcuAbM8ErLoUhybG
+.fi
+.IP "" 0
+.P
+Hash from the prompt:
 .IP "" 4
 .nf
 $ hash_password
@@ -53,6 +66,6 @@ $2b$12$CwI\.wBNr\.w3kmiUlV3T5s\.GT2wH7uebDCovDrCOh18dFedlANK99O
 .fi
 .IP "" 0
 .SH "COPYRIGHT"
-This man page was written by Rahul De <\fI\%mailto:rahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
+This man page was written by Rahul De «rahulde@swecha\.net» for Debian GNU/Linux distribution\.
 .SH "SEE ALSO"
 synctl(1), synapse_port_db(1), register_new_matrix_user(1), synapse_review_recent_signups(1)
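As a companion to the manual text above, here is a minimal Python sketch of the hashing scheme it describes: bcrypt over the supplied password with the configured pepper appended, at the configured bcrypt_rounds cost (default 12). This follows the documented behaviour rather than the script's actual source; the function name and the exact pepper handling are illustrative assumptions.

    import bcrypt  # third-party "bcrypt" package

    def hash_password(password: str, pepper: str = "", rounds: int = 12) -> str:
        """Sketch: bcrypt(password + pepper) at the given work factor.

        `pepper` would come from the YAML config's password_config section,
        `rounds` from its bcrypt_rounds option (the man page's default is 12).
        """
        salted = (password + pepper).encode("utf-8")
        return bcrypt.hashpw(salted, bcrypt.gensalt(rounds)).decode("ascii")

    # Rough equivalent of `hash_password -p "p@ssw0rd"` with no pepper set;
    # the output differs run to run because bcrypt picks a fresh salt.
    print(hash_password("p@ssw0rd"))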
debian/hash_password.1.html (vendored, new file, 182 lines)

@@ -0,0 +1,182 @@
+<!DOCTYPE html>
+<html>
+<head>
+<meta http-equiv='content-type' content='text/html;charset=utf-8'>
+<meta name='generator' content='Ronn-NG/v0.10.1 (http://github.com/apjanke/ronn-ng/tree/0.10.1)'>
+<title>hash_password(1) - Calculate the hash of a new password, so that passwords can be reset</title>
+<style type='text/css' media='all'>
+/* style: man */
+body#manpage {margin:0}
+.mp {max-width:100ex;padding:0 9ex 1ex 4ex}
+.mp p,.mp pre,.mp ul,.mp ol,.mp dl {margin:0 0 20px 0}
+.mp h2 {margin:10px 0 0 0}
+.mp > p,.mp > pre,.mp > ul,.mp > ol,.mp > dl {margin-left:8ex}
+.mp h3 {margin:0 0 0 4ex}
+.mp dt {margin:0;clear:left}
+.mp dt.flush {float:left;width:8ex}
+.mp dd {margin:0 0 0 9ex}
+.mp h1,.mp h2,.mp h3,.mp h4 {clear:left}
+.mp pre {margin-bottom:20px}
+.mp pre+h2,.mp pre+h3 {margin-top:22px}
+.mp h2+pre,.mp h3+pre {margin-top:5px}
+.mp img {display:block;margin:auto}
+.mp h1.man-title {display:none}
+.mp,.mp code,.mp pre,.mp tt,.mp kbd,.mp samp,.mp h3,.mp h4 {font-family:monospace;font-size:14px;line-height:1.42857142857143}
+.mp h2 {font-size:16px;line-height:1.25}
+.mp h1 {font-size:20px;line-height:2}
+.mp {text-align:justify;background:#fff}
+.mp,.mp code,.mp pre,.mp pre code,.mp tt,.mp kbd,.mp samp {color:#131211}
+.mp h1,.mp h2,.mp h3,.mp h4 {color:#030201}
+.mp u {text-decoration:underline}
+.mp code,.mp strong,.mp b {font-weight:bold;color:#131211}
+.mp em,.mp var {font-style:italic;color:#232221;text-decoration:none}
+.mp a,.mp a:link,.mp a:hover,.mp a code,.mp a pre,.mp a tt,.mp a kbd,.mp a samp {color:#0000ff}
+.mp b.man-ref {font-weight:normal;color:#434241}
+.mp pre {padding:0 4ex}
+.mp pre code {font-weight:normal;color:#434241}
+.mp h2+pre,h3+pre {padding-left:0}
+ol.man-decor,ol.man-decor li {margin:3px 0 10px 0;padding:0;float:left;width:33%;list-style-type:none;text-transform:uppercase;color:#999;letter-spacing:1px}
+ol.man-decor {width:100%}
+ol.man-decor li.tl {text-align:left}
+ol.man-decor li.tc {text-align:center;letter-spacing:4px}
+ol.man-decor li.tr {text-align:right;float:right}
+</style>
+</head>
+<!--
+The following styles are deprecated and will be removed at some point:
+div#man, div#man ol.man, div#man ol.head, div#man ol.man.
+
+The .man-page, .man-decor, .man-head, .man-foot, .man-title, and
+.man-navigation should be used instead.
+-->
+<body id='manpage'>
+<div class='mp' id='man'>
+
+<div class='man-navigation' style='display:none'>
+<a href="#NAME">NAME</a>
+<a href="#SYNOPSIS">SYNOPSIS</a>
+<a href="#DESCRIPTION">DESCRIPTION</a>
+<a href="#FILES">FILES</a>
+<a href="#OPTIONS">OPTIONS</a>
+<a href="#EXAMPLES">EXAMPLES</a>
+<a href="#COPYRIGHT">COPYRIGHT</a>
+<a href="#SEE-ALSO">SEE ALSO</a>
+</div>
+
+<ol class='man-decor man-head man head'>
+<li class='tl'>hash_password(1)</li>
+<li class='tc'></li>
+<li class='tr'>hash_password(1)</li>
+</ol>
+
+
+
+<h2 id="NAME">NAME</h2>
+<p class="man-name">
+<code>hash_password</code> - <span class="man-whatis">Calculate the hash of a new password, so that passwords can be reset</span>
+</p>
+<h2 id="SYNOPSIS">SYNOPSIS</h2>
+
+<table>
+<tbody>
+<tr>
+<td>
+<code>hash_password</code> [<code>-p</code>
+</td>
+<td>
+<code>--password</code> [password]] [<code>-c</code>
+</td>
+<td>
+<code>--config</code> <var>file</var>]</td>
+</tr>
+</tbody>
+</table>
+
+<h2 id="DESCRIPTION">DESCRIPTION</h2>
+
+<p><strong>hash_password</strong> calculates the hash of a supplied password using bcrypt.</p>
+
+<p><code>hash_password</code> takes a password as an parameter either on the command line
+or the <code>STDIN</code> if not supplied.</p>
+
+<p>It accepts an YAML file which can be used to specify parameters like the
+number of rounds for bcrypt and password_config section having the pepper
+value used for the hashing. By default <code>bcrypt_rounds</code> is set to <strong>12</strong>.</p>
+
+<p>The hashed password is written on the <code>STDOUT</code>.</p>
+
+<h2 id="FILES">FILES</h2>
+
+<p>A sample YAML file accepted by <code>hash_password</code> is described below:</p>
+
+<p>bcrypt_rounds: 17
+password_config:
+pepper: "random hashing pepper"</p>
+
+<h2 id="OPTIONS">OPTIONS</h2>
+
+<dl>
+<dt>
+<code>-p</code>, <code>--password</code>
+</dt>
+<dd>Read the password form the command line if [password] is supplied, or from <code>STDIN</code>.
+If not, prompt the user and read the password from the tty prompt.
+It is not recommended to type the password on the command line
+directly. Use the STDIN instead.</dd>
+<dt>
+<code>-c</code>, <code>--config</code>
+</dt>
+<dd>Read the supplied YAML <var>file</var> containing the options <code>bcrypt_rounds</code>
+and the <code>password_config</code> section containing the <code>pepper</code> value.</dd>
+</dl>
+
+<h2 id="EXAMPLES">EXAMPLES</h2>
+
+<p>Hash from the command line:</p>
+
+<pre><code>$ hash_password -p "p@ssw0rd"
+$2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe
+</code></pre>
+
+<p>Hash from the stdin:</p>
+
+<pre><code>$ cat password_file | hash_password
+Password:
+Confirm password:
+$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
+</code></pre>
+
+<p>Hash from the prompt:</p>
+
+<pre><code>$ hash_password
+Password:
+Confirm password:
+$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
+</code></pre>
+
+<p>Using a config file:</p>
+
+<pre><code>$ hash_password -c config.yml
+Password:
+Confirm password:
+$2b$12$CwI.wBNr.w3kmiUlV3T5s.GT2wH7uebDCovDrCOh18dFedlANK99O
+</code></pre>
+
+<h2 id="COPYRIGHT">COPYRIGHT</h2>
+
+<p>This man page was written by Rahul De «rahulde@swecha.net»
+for Debian GNU/Linux distribution.</p>
+
+<h2 id="SEE-ALSO">SEE ALSO</h2>
+
+<p><span class="man-ref">synctl<span class="s">(1)</span></span>, <span class="man-ref">synapse_port_db<span class="s">(1)</span></span>, <span class="man-ref">register_new_matrix_user<span class="s">(1)</span></span>, <span class="man-ref">synapse_review_recent_signups<span class="s">(1)</span></span></p>
+
+<ol class='man-decor man-foot man foot'>
+<li class='tl'></li>
+<li class='tc'>August 2024</li>
+<li class='tr'>hash_password(1)</li>
+</ol>
+
+</div>
+</body>
+</html>
debian/hash_password.ronn (vendored, 13 lines changed)

@@ -29,8 +29,8 @@ A sample YAML file accepted by `hash_password` is described below:
 ## OPTIONS
 
   * `-p`, `--password`:
-    Read the password form the command line if [password] is supplied.
-    If not, prompt the user and read the password form the `STDIN`.
+    Read the password form the command line if [password] is supplied, or from `STDIN`.
+    If not, prompt the user and read the password from the tty prompt.
     It is not recommended to type the password on the command line
     directly. Use the STDIN instead.
 
@@ -45,7 +45,14 @@ Hash from the command line:
     $ hash_password -p "p@ssw0rd"
     $2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe
 
-Hash from the STDIN:
+Hash from the stdin:
 
+    $ cat password_file | hash_password
+    Password:
+    Confirm password:
+    $2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
+
+Hash from the prompt:
+
     $ hash_password
     Password:
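The -p/--password text above encodes a three-way precedence: an explicit argument, then piped STDIN, then an interactive prompt with confirmation. Here is a small Python sketch of that documented flow; the helper name is hypothetical, and this is one plausible reading of the transcripts above, where the prompts are printed even when input is piped.

    import sys
    from getpass import getpass
    from typing import Optional

    def read_password(arg: Optional[str]) -> str:
        # Documented precedence: -p/--password argument first, then STDIN,
        # then the tty prompt.
        if arg is not None:
            return arg
        if not sys.stdin.isatty():
            # Piped input: prompts still appear (as in the
            # `cat password_file | hash_password` transcript above),
            # but both lines come from the pipe.
            print("Password: ", end="", flush=True)
            password = sys.stdin.readline().rstrip("\n")
            print("Confirm password: ", end="", flush=True)
            confirm = sys.stdin.readline().rstrip("\n")
        else:
            password = getpass("Password: ")
            confirm = getpass("Confirm password: ")
        if password != confirm:
            sys.exit("Passwords do not match")
        return password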
@@ -8,9 +8,7 @@ errors in code.
 
 The necessary tools are:
 
-- [black](https://black.readthedocs.io/en/stable/), a source code formatter;
-- [isort](https://pycqa.github.io/isort/), which organises each file's imports;
-- [ruff](https://github.com/charliermarsh/ruff), which can spot common errors; and
+- [ruff](https://github.com/charliermarsh/ruff), which can spot common errors and enforce a consistent style; and
 - [mypy](https://mypy.readthedocs.io/en/stable/), a type checker.
 
 See [the contributing guide](development/contributing_guide.md#run-the-linters) for instructions
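To make the consolidation concrete, here is a tiny illustrative Python fragment of the kind of issue the single ruff step now handles in one pass. The module is hypothetical, and it assumes the project enables ruff's import-sorting (I) rules so that `ruff check --fix` also covers what isort used to do.

    # Before `poetry run ruff check --fix .`:
    import sys
    import os

    def cwd() -> str:
        return os.getcwd()

    # ruff flags `import sys` as unused (F401) and the import order (I001);
    # with --fix it removes the dead import and re-sorts the rest, which
    # previously required a separate isort pass.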
poetry.lock (generated, 337 lines changed)

@@ -16,22 +16,22 @@ typing-extensions = {version = ">=4.0.0", markers = "python_version < \"3.9\""}
 
 [[package]]
 name = "attrs"
-version = "23.2.0"
+version = "24.2.0"
 description = "Classes Without Boilerplate"
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "attrs-23.2.0-py3-none-any.whl", hash = "sha256:99b87a485a5820b23b879f04c2305b44b951b502fd64be915879d77a7e8fc6f1"},
-    {file = "attrs-23.2.0.tar.gz", hash = "sha256:935dc3b529c262f6cf76e50877d35a4bd3c1de194fd41f47a2b7ae8f19971f30"},
+    {file = "attrs-24.2.0-py3-none-any.whl", hash = "sha256:81921eb96de3191c8258c199618104dd27ac608d9366f5e35d011eae1867ede2"},
+    {file = "attrs-24.2.0.tar.gz", hash = "sha256:5cfb1b9148b5b086569baec03f20d7b6bf3bcacc9a42bebf87ffaaca362f6346"},
 ]
 
 [package.extras]
-cov = ["attrs[tests]", "coverage[toml] (>=5.3)"]
-dev = ["attrs[tests]", "pre-commit"]
-docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope-interface"]
-tests = ["attrs[tests-no-zope]", "zope-interface"]
-tests-mypy = ["mypy (>=1.6)", "pytest-mypy-plugins"]
-tests-no-zope = ["attrs[tests-mypy]", "cloudpickle", "hypothesis", "pympler", "pytest (>=4.3.0)", "pytest-xdist[psutil]"]
+benchmark = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
+cov = ["cloudpickle", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
+dev = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pre-commit", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
+docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier (<24.7)"]
+tests = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
+tests-mypy = ["mypy (>=1.11.1)", "pytest-mypy-plugins"]
 
 [[package]]
 name = "authlib"
@@ -105,52 +105,6 @@ files = [
 tests = ["pytest (>=3.2.1,!=3.3.0)"]
 typecheck = ["mypy"]
 
-[[package]]
-name = "black"
-version = "24.8.0"
-description = "The uncompromising code formatter."
-optional = false
-python-versions = ">=3.8"
-files = [
-    {file = "black-24.8.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:09cdeb74d494ec023ded657f7092ba518e8cf78fa8386155e4a03fdcc44679e6"},
-    {file = "black-24.8.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:81c6742da39f33b08e791da38410f32e27d632260e599df7245cccee2064afeb"},
-    {file = "black-24.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:707a1ca89221bc8a1a64fb5e15ef39cd755633daa672a9db7498d1c19de66a42"},
-    {file = "black-24.8.0-cp310-cp310-win_amd64.whl", hash = "sha256:d6417535d99c37cee4091a2f24eb2b6d5ec42b144d50f1f2e436d9fe1916fe1a"},
-    {file = "black-24.8.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:fb6e2c0b86bbd43dee042e48059c9ad7830abd5c94b0bc518c0eeec57c3eddc1"},
-    {file = "black-24.8.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:837fd281f1908d0076844bc2b801ad2d369c78c45cf800cad7b61686051041af"},
-    {file = "black-24.8.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:62e8730977f0b77998029da7971fa896ceefa2c4c4933fcd593fa599ecbf97a4"},
-    {file = "black-24.8.0-cp311-cp311-win_amd64.whl", hash = "sha256:72901b4913cbac8972ad911dc4098d5753704d1f3c56e44ae8dce99eecb0e3af"},
-    {file = "black-24.8.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:7c046c1d1eeb7aea9335da62472481d3bbf3fd986e093cffd35f4385c94ae368"},
-    {file = "black-24.8.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:649f6d84ccbae73ab767e206772cc2d7a393a001070a4c814a546afd0d423aed"},
-    {file = "black-24.8.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2b59b250fdba5f9a9cd9d0ece6e6d993d91ce877d121d161e4698af3eb9c1018"},
-    {file = "black-24.8.0-cp312-cp312-win_amd64.whl", hash = "sha256:6e55d30d44bed36593c3163b9bc63bf58b3b30e4611e4d88a0c3c239930ed5b2"},
-    {file = "black-24.8.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:505289f17ceda596658ae81b61ebbe2d9b25aa78067035184ed0a9d855d18afd"},
-    {file = "black-24.8.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b19c9ad992c7883ad84c9b22aaa73562a16b819c1d8db7a1a1a49fb7ec13c7d2"},
-    {file = "black-24.8.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1f13f7f386f86f8121d76599114bb8c17b69d962137fc70efe56137727c7047e"},
-    {file = "black-24.8.0-cp38-cp38-win_amd64.whl", hash = "sha256:f490dbd59680d809ca31efdae20e634f3fae27fba3ce0ba3208333b713bc3920"},
-    {file = "black-24.8.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:eab4dd44ce80dea27dc69db40dab62d4ca96112f87996bca68cd75639aeb2e4c"},
-    {file = "black-24.8.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3c4285573d4897a7610054af5a890bde7c65cb466040c5f0c8b732812d7f0e5e"},
-    {file = "black-24.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9e84e33b37be070ba135176c123ae52a51f82306def9f7d063ee302ecab2cf47"},
-    {file = "black-24.8.0-cp39-cp39-win_amd64.whl", hash = "sha256:73bbf84ed136e45d451a260c6b73ed674652f90a2b3211d6a35e78054563a9bb"},
-    {file = "black-24.8.0-py3-none-any.whl", hash = "sha256:972085c618ee94f402da1af548a4f218c754ea7e5dc70acb168bfaca4c2542ed"},
-    {file = "black-24.8.0.tar.gz", hash = "sha256:2500945420b6784c38b9ee885af039f5e7471ef284ab03fa35ecdde4688cd83f"},
-]
-
-[package.dependencies]
-click = ">=8.0.0"
-mypy-extensions = ">=0.4.3"
-packaging = ">=22.0"
-pathspec = ">=0.9.0"
-platformdirs = ">=2"
-tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
-typing-extensions = {version = ">=4.0.1", markers = "python_version < \"3.11\""}
-
-[package.extras]
-colorama = ["colorama (>=0.4.3)"]
-d = ["aiohttp (>=3.7.4)", "aiohttp (>=3.7.4,!=3.9.0)"]
-jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
-uvloop = ["uvloop (>=0.15.2)"]
-
 [[package]]
 name = "bleach"
 version = "6.1.0"
@@ -403,43 +357,38 @@ files = [
 
 [[package]]
 name = "cryptography"
-version = "42.0.8"
+version = "43.0.0"
 description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:81d8a521705787afe7a18d5bfb47ea9d9cc068206270aad0b96a725022e18d2e"},
-    {file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:961e61cefdcb06e0c6d7e3a1b22ebe8b996eb2bf50614e89384be54c48c6b63d"},
-    {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e3ec3672626e1b9e55afd0df6d774ff0e953452886e06e0f1eb7eb0c832e8902"},
-    {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e599b53fd95357d92304510fb7bda8523ed1f79ca98dce2f43c115950aa78801"},
-    {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5226d5d21ab681f432a9c1cf8b658c0cb02533eece706b155e5fbd8a0cdd3949"},
-    {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:6b7c4f03ce01afd3b76cf69a5455caa9cfa3de8c8f493e0d3ab7d20611c8dae9"},
-    {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:2346b911eb349ab547076f47f2e035fc8ff2c02380a7cbbf8d87114fa0f1c583"},
-    {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:ad803773e9df0b92e0a817d22fd8a3675493f690b96130a5e24f1b8fabbea9c7"},
-    {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2f66d9cd9147ee495a8374a45ca445819f8929a3efcd2e3df6428e46c3cbb10b"},
-    {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d45b940883a03e19e944456a558b67a41160e367a719833c53de6911cabba2b7"},
-    {file = "cryptography-42.0.8-cp37-abi3-win32.whl", hash = "sha256:a0c5b2b0585b6af82d7e385f55a8bc568abff8923af147ee3c07bd8b42cda8b2"},
-    {file = "cryptography-42.0.8-cp37-abi3-win_amd64.whl", hash = "sha256:57080dee41209e556a9a4ce60d229244f7a66ef52750f813bfbe18959770cfba"},
-    {file = "cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:dea567d1b0e8bc5764b9443858b673b734100c2871dc93163f58c46a97a83d28"},
-    {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4783183f7cb757b73b2ae9aed6599b96338eb957233c58ca8f49a49cc32fd5e"},
-    {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0608251135d0e03111152e41f0cc2392d1e74e35703960d4190b2e0f4ca9c70"},
-    {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dc0fdf6787f37b1c6b08e6dfc892d9d068b5bdb671198c72072828b80bd5fe4c"},
-    {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9c0c1716c8447ee7dbf08d6db2e5c41c688544c61074b54fc4564196f55c25a7"},
-    {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:fff12c88a672ab9c9c1cf7b0c80e3ad9e2ebd9d828d955c126be4fd3e5578c9e"},
-    {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:cafb92b2bc622cd1aa6a1dce4b93307792633f4c5fe1f46c6b97cf67073ec961"},
-    {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:31f721658a29331f895a5a54e7e82075554ccfb8b163a18719d342f5ffe5ecb1"},
-    {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b297f90c5723d04bcc8265fc2a0f86d4ea2e0f7ab4b6994459548d3a6b992a14"},
-    {file = "cryptography-42.0.8-cp39-abi3-win32.whl", hash = "sha256:2f88d197e66c65be5e42cd72e5c18afbfae3f741742070e3019ac8f4ac57262c"},
-    {file = "cryptography-42.0.8-cp39-abi3-win_amd64.whl", hash = "sha256:fa76fbb7596cc5839320000cdd5d0955313696d9511debab7ee7278fc8b5c84a"},
-    {file = "cryptography-42.0.8-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:ba4f0a211697362e89ad822e667d8d340b4d8d55fae72cdd619389fb5912eefe"},
-    {file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:81884c4d096c272f00aeb1f11cf62ccd39763581645b0812e99a91505fa48e0c"},
-    {file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c9bb2ae11bfbab395bdd072985abde58ea9860ed84e59dbc0463a5d0159f5b71"},
-    {file = "cryptography-42.0.8-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:7016f837e15b0a1c119d27ecd89b3515f01f90a8615ed5e9427e30d9cdbfed3d"},
-    {file = "cryptography-42.0.8-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5a94eccb2a81a309806027e1670a358b99b8fe8bfe9f8d329f27d72c094dde8c"},
-    {file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:dec9b018df185f08483f294cae6ccac29e7a6e0678996587363dc352dc65c842"},
-    {file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:343728aac38decfdeecf55ecab3264b015be68fc2816ca800db649607aeee648"},
-    {file = "cryptography-42.0.8-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:013629ae70b40af70c9a7a5db40abe5d9054e6f4380e50ce769947b73bf3caad"},
-    {file = "cryptography-42.0.8.tar.gz", hash = "sha256:8d09d05439ce7baa8e9e95b07ec5b6c886f548deb7e0f69ef25f64b3bce842f2"},
+    {file = "cryptography-43.0.0-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:64c3f16e2a4fc51c0d06af28441881f98c5d91009b8caaff40cf3548089e9c74"},
+    {file = "cryptography-43.0.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3dcdedae5c7710b9f97ac6bba7e1052b95c7083c9d0e9df96e02a1932e777895"},
+    {file = "cryptography-43.0.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d9a1eca329405219b605fac09ecfc09ac09e595d6def650a437523fcd08dd22"},
+    {file = "cryptography-43.0.0-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ea9e57f8ea880eeea38ab5abf9fbe39f923544d7884228ec67d666abd60f5a47"},
+    {file = "cryptography-43.0.0-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9a8d6802e0825767476f62aafed40532bd435e8a5f7d23bd8b4f5fd04cc80ecf"},
+    {file = "cryptography-43.0.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:cc70b4b581f28d0a254d006f26949245e3657d40d8857066c2ae22a61222ef55"},
+    {file = "cryptography-43.0.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:4a997df8c1c2aae1e1e5ac49c2e4f610ad037fc5a3aadc7b64e39dea42249431"},
+    {file = "cryptography-43.0.0-cp37-abi3-win32.whl", hash = "sha256:6e2b11c55d260d03a8cf29ac9b5e0608d35f08077d8c087be96287f43af3ccdc"},
+    {file = "cryptography-43.0.0-cp37-abi3-win_amd64.whl", hash = "sha256:31e44a986ceccec3d0498e16f3d27b2ee5fdf69ce2ab89b52eaad1d2f33d8778"},
+    {file = "cryptography-43.0.0-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:7b3f5fe74a5ca32d4d0f302ffe6680fcc5c28f8ef0dc0ae8f40c0f3a1b4fca66"},
+    {file = "cryptography-43.0.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ac1955ce000cb29ab40def14fd1bbfa7af2017cca696ee696925615cafd0dce5"},
+    {file = "cryptography-43.0.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:299d3da8e00b7e2b54bb02ef58d73cd5f55fb31f33ebbf33bd00d9aa6807df7e"},
+    {file = "cryptography-43.0.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ee0c405832ade84d4de74b9029bedb7b31200600fa524d218fc29bfa371e97f5"},
+    {file = "cryptography-43.0.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:cb013933d4c127349b3948aa8aaf2f12c0353ad0eccd715ca789c8a0f671646f"},
+    {file = "cryptography-43.0.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:fdcb265de28585de5b859ae13e3846a8e805268a823a12a4da2597f1f5afc9f0"},
+    {file = "cryptography-43.0.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:2905ccf93a8a2a416f3ec01b1a7911c3fe4073ef35640e7ee5296754e30b762b"},
+    {file = "cryptography-43.0.0-cp39-abi3-win32.whl", hash = "sha256:47ca71115e545954e6c1d207dd13461ab81f4eccfcb1345eac874828b5e3eaaf"},
+    {file = "cryptography-43.0.0-cp39-abi3-win_amd64.whl", hash = "sha256:0663585d02f76929792470451a5ba64424acc3cd5227b03921dab0e2f27b1709"},
+    {file = "cryptography-43.0.0-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:2c6d112bf61c5ef44042c253e4859b3cbbb50df2f78fa8fae6747a7814484a70"},
+    {file = "cryptography-43.0.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:844b6d608374e7d08f4f6e6f9f7b951f9256db41421917dfb2d003dde4cd6b66"},
+    {file = "cryptography-43.0.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:51956cf8730665e2bdf8ddb8da0056f699c1a5715648c1b0144670c1ba00b48f"},
+    {file = "cryptography-43.0.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:aae4d918f6b180a8ab8bf6511a419473d107df4dbb4225c7b48c5c9602c38c7f"},
+    {file = "cryptography-43.0.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:232ce02943a579095a339ac4b390fbbe97f5b5d5d107f8a08260ea2768be8cc2"},
+    {file = "cryptography-43.0.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5bcb8a5620008a8034d39bce21dc3e23735dfdb6a33a06974739bfa04f853947"},
+    {file = "cryptography-43.0.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:08a24a7070b2b6804c1940ff0f910ff728932a9d0e80e7814234269f9d46d069"},
+    {file = "cryptography-43.0.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:e9c5266c432a1e23738d178e51c2c7a5e2ddf790f248be939448c0ba2021f9d1"},
+    {file = "cryptography-43.0.0.tar.gz", hash = "sha256:b88075ada2d51aa9f18283532c9f60e72170041bba88d7f37e49cbb10275299e"},
 ]
 
 [package.dependencies]
@@ -452,7 +401,7 @@ nox = ["nox"]
 pep8test = ["check-sdist", "click", "mypy", "ruff"]
 sdist = ["build"]
 ssh = ["bcrypt (>=3.1.5)"]
-test = ["certifi", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
+test = ["certifi", "cryptography-vectors (==43.0.0)", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
 test-randomorder = ["pytest-randomly"]
 
 [[package]]
@@ -837,20 +786,6 @@ tomli = {version = "*", markers = "python_version < \"3.11\""}
 [package.extras]
 scripts = ["click (>=6.0)"]
 
-[[package]]
-name = "isort"
-version = "5.13.2"
-description = "A Python utility / library to sort Python imports."
-optional = false
-python-versions = ">=3.8.0"
-files = [
-    {file = "isort-5.13.2-py3-none-any.whl", hash = "sha256:8ca5e72a8d85860d5a3fa69b8745237f2939afe12dbf656afbcb47fe72d947a6"},
-    {file = "isort-5.13.2.tar.gz", hash = "sha256:48fdfcb9face5d58a4f6dde2e72a1fb8dcaf8ab26f95ab49fab84c2ddefb0109"},
-]
-
-[package.extras]
-colors = ["colorama (>=0.4.6)"]
-
 [[package]]
 name = "jaeger-client"
 version = "4.8.0"
@@ -1499,26 +1434,15 @@ files = [
 [package.extras]
 dev = ["jinja2"]
 
-[[package]]
-name = "pathspec"
-version = "0.11.1"
-description = "Utility library for gitignore style pattern matching of file paths."
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "pathspec-0.11.1-py3-none-any.whl", hash = "sha256:d8af70af76652554bd134c22b3e8a1cc46ed7d91edcdd721ef1a0c51a84a5293"},
-    {file = "pathspec-0.11.1.tar.gz", hash = "sha256:2798de800fa92780e33acca925945e9a19a133b715067cf165b8866c15a31687"},
-]
-
 [[package]]
 name = "phonenumbers"
-version = "8.13.43"
+version = "8.13.44"
 description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
 optional = false
 python-versions = "*"
 files = [
-    {file = "phonenumbers-8.13.43-py2.py3-none-any.whl", hash = "sha256:339e521403fe4dd9c664dbbeb2fe434f9ea5c81e54c0fdfadbaeb53b26a76c27"},
-    {file = "phonenumbers-8.13.43.tar.gz", hash = "sha256:35b904e4a79226eee027fbb467a9aa6f1ab9ffc3c09c91bf14b885c154936726"},
+    {file = "phonenumbers-8.13.44-py2.py3-none-any.whl", hash = "sha256:52cd02865dab1428ca9e89d442629b61d407c7dc687cfb80a3e8d068a584513c"},
+    {file = "phonenumbers-8.13.44.tar.gz", hash = "sha256:2175021e84ee4e41b43c890f2d0af51f18c6ca9ad525886d6d6e4ea882e46fac"},
 ]
 
 [[package]]
@@ -1643,21 +1567,6 @@ files = [
     {file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
 ]
 
-[[package]]
-name = "platformdirs"
-version = "3.1.1"
-description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "platformdirs-3.1.1-py3-none-any.whl", hash = "sha256:e5986afb596e4bb5bde29a79ac9061aa955b94fca2399b7aaac4090860920dd8"},
-    {file = "platformdirs-3.1.1.tar.gz", hash = "sha256:024996549ee88ec1a9aa99ff7f8fc819bb59e2c3477b410d90a16d32d6e707aa"},
-]
-
-[package.extras]
-docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.22,!=1.23.4)"]
-test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
-
 [[package]]
 name = "prometheus-client"
 version = "0.20.0"
@@ -1882,13 +1791,13 @@ typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0"
 
 [[package]]
 name = "pygithub"
-version = "2.3.0"
+version = "2.4.0"
 description = "Use the full Github API v3"
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 files = [
-    {file = "PyGithub-2.3.0-py3-none-any.whl", hash = "sha256:65b499728be3ce7b0cd2cd760da3b32f0f4d7bc55e5e0677617f90f6564e793e"},
-    {file = "PyGithub-2.3.0.tar.gz", hash = "sha256:0148d7347a1cdeed99af905077010aef81a4dad988b0ba51d4108bf66b443f7e"},
+    {file = "PyGithub-2.4.0-py3-none-any.whl", hash = "sha256:81935aa4bdc939fba98fee1cb47422c09157c56a27966476ff92775602b9ee24"},
+    {file = "pygithub-2.4.0.tar.gz", hash = "sha256:6601e22627e87bac192f1e2e39c6e6f69a43152cfb8f307cee575879320b3051"},
 ]
 
 [package.dependencies]
@@ -2089,51 +1998,64 @@ files = [
 
 [[package]]
 name = "pyyaml"
-version = "6.0.1"
+version = "6.0.2"
 description = "YAML parser and emitter for Python"
 optional = false
-python-versions = ">=3.6"
+python-versions = ">=3.8"
 files = [
-    {file = "PyYAML-6.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a"},
-    {file = "PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"},
-    {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"},
-    {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"},
-    {file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"},
-    {file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"},
-    {file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"},
-    {file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"},
-    {file = "PyYAML-6.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab"},
-    {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"},
-    {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"},
-    {file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"},
-    {file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"},
-    {file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
-    {file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"},
-    {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"},
-    {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"},
-    {file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd"},
-    {file = "PyYAML-6.0.1-cp36-cp36m-win32.whl", hash = "sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585"},
-    {file = "PyYAML-6.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa"},
-    {file = "PyYAML-6.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3"},
-    {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27"},
-    {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3"},
-    {file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c"},
-    {file = "PyYAML-6.0.1-cp37-cp37m-win32.whl", hash = "sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba"},
-    {file = "PyYAML-6.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867"},
-    {file = "PyYAML-6.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595"},
-    {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"},
-    {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"},
-    {file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"},
-    {file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"},
-    {file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"},
-    {file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"},
-    {file = "PyYAML-6.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859"},
-    {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"},
-    {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"},
-    {file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"},
-    {file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"},
-    {file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"},
-    {file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"},
+    {file = "PyYAML-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086"},
+    {file = "PyYAML-6.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf"},
+    {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8824b5a04a04a047e72eea5cec3bc266db09e35de6bdfe34c9436ac5ee27d237"},
+    {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7c36280e6fb8385e520936c3cb3b8042851904eba0e58d277dca80a5cfed590b"},
+    {file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec031d5d2feb36d1d1a24380e4db6d43695f3748343d99434e6f5f9156aaa2ed"},
+    {file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:936d68689298c36b53b29f23c6dbb74de12b4ac12ca6cfe0e047bedceea56180"},
+    {file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:23502f431948090f597378482b4812b0caae32c22213aecf3b55325e049a6c68"},
+    {file = "PyYAML-6.0.2-cp310-cp310-win32.whl", hash = "sha256:2e99c6826ffa974fe6e27cdb5ed0021786b03fc98e5ee3c5bfe1fd5015f42b99"},
+    {file = "PyYAML-6.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:a4d3091415f010369ae4ed1fc6b79def9416358877534caf6a0fdd2146c87a3e"},
+    {file = "PyYAML-6.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc1c1159b3d456576af7a3e4d1ba7e6924cb39de8f67111c735f6fc832082774"},
+    {file = "PyYAML-6.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1e2120ef853f59c7419231f3bf4e7021f1b936f6ebd222406c3b60212205d2ee"},
+    {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d225db5a45f21e78dd9358e58a98702a0302f2659a3c6cd320564b75b86f47c"},
+    {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ac9328ec4831237bec75defaf839f7d4564be1e6b25ac710bd1a96321cc8317"},
+    {file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ad2a3decf9aaba3d29c8f537ac4b243e36bef957511b4766cb0057d32b0be85"},
+    {file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4"},
+    {file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:797b4f722ffa07cc8d62053e4cff1486fa6dc094105d13fea7b1de7d8bf71c9e"},
+    {file = "PyYAML-6.0.2-cp311-cp311-win32.whl", hash = "sha256:11d8f3dd2b9c1207dcaf2ee0bbbfd5991f571186ec9cc78427ba5bd32afae4b5"},
+    {file = "PyYAML-6.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44"},
+    {file = "PyYAML-6.0.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab"},
+    {file = "PyYAML-6.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725"},
+    {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5"},
+    {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425"},
+    {file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476"},
+    {file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48"},
+    {file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b"},
+    {file = "PyYAML-6.0.2-cp312-cp312-win32.whl", hash = "sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4"},
+    {file = "PyYAML-6.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8"},
+    {file = "PyYAML-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba"},
+    {file = "PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1"},
+    {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133"},
+    {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484"},
+    {file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5"},
+    {file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc"},
+    {file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652"},
+    {file = "PyYAML-6.0.2-cp313-cp313-win32.whl", hash = "sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183"},
+    {file = "PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563"},
+    {file = "PyYAML-6.0.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:24471b829b3bf607e04e88d79542a9d48bb037c2267d7927a874e6c205ca7e9a"},
+    {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7fded462629cfa4b685c5416b949ebad6cec74af5e2d42905d41e257e0869f5"},
+    {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d84a1718ee396f54f3a086ea0a66d8e552b2ab2017ef8b420e92edbc841c352d"},
+    {file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9056c1ecd25795207ad294bcf39f2db3d845767be0ea6e6a34d856f006006083"},
+    {file = "PyYAML-6.0.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:82d09873e40955485746739bcb8b4586983670466c23382c19cffecbf1fd8706"},
+    {file = "PyYAML-6.0.2-cp38-cp38-win32.whl", hash = "sha256:43fa96a3ca0d6b1812e01ced1044a003533c47f6ee8aca31724f78e93ccc089a"},
+    {file = "PyYAML-6.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:01179a4a8559ab5de078078f37e5c1a30d76bb88519906844fd7bdea1b7729ff"},
+    {file = "PyYAML-6.0.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:688ba32a1cffef67fd2e9398a2efebaea461578b0923624778664cc1c914db5d"},
+    {file = "PyYAML-6.0.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a8786accb172bd8afb8be14490a16625cbc387036876ab6ba70912730faf8e1f"},
+    {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8e03406cac8513435335dbab54c0d385e4a49e4945d2909a581c83647ca0290"},
+    {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f753120cb8181e736c57ef7636e83f31b9c0d1722c516f7e86cf15b7aa57ff12"},
+    {file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b1fdb9dc17f5a7677423d508ab4f243a726dea51fa5e70992e59a7411c89d19"},
+    {file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0b69e4ce7a131fe56b7e4d770c67429700908fc0752af059838b1cfb41960e4e"},
+    {file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a9f8c2e67970f13b16084e04f134610fd1d374bf477b17ec1599185cf611d725"},
+    {file = "PyYAML-6.0.2-cp39-cp39-win32.whl", hash = "sha256:6395c297d42274772abc367baaa79683958044e5d3835486c16da75d2a694631"},
+    {file = "PyYAML-6.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:39693e1f8320ae4f43943590b49779ffb98acb81f788220ea932a6b6c51004d8"},
|
||||||
|
{file = "pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e"},
|
||||||
]
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
@ -2346,29 +2268,29 @@ files = [

 [[package]]
 name = "ruff"
-version = "0.5.5"
+version = "0.6.2"
 description = "An extremely fast Python linter and code formatter, written in Rust."
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "ruff-0.5.5-py3-none-linux_armv6l.whl", hash = "sha256:605d589ec35d1da9213a9d4d7e7a9c761d90bba78fc8790d1c5e65026c1b9eaf"},
-    {file = "ruff-0.5.5-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:00817603822a3e42b80f7c3298c8269e09f889ee94640cd1fc7f9329788d7bf8"},
-    {file = "ruff-0.5.5-py3-none-macosx_11_0_arm64.whl", hash = "sha256:187a60f555e9f865a2ff2c6984b9afeffa7158ba6e1eab56cb830404c942b0f3"},
-    {file = "ruff-0.5.5-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fe26fc46fa8c6e0ae3f47ddccfbb136253c831c3289bba044befe68f467bfb16"},
-    {file = "ruff-0.5.5-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4ad25dd9c5faac95c8e9efb13e15803cd8bbf7f4600645a60ffe17c73f60779b"},
-    {file = "ruff-0.5.5-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f70737c157d7edf749bcb952d13854e8f745cec695a01bdc6e29c29c288fc36e"},
-    {file = "ruff-0.5.5-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:cfd7de17cef6ab559e9f5ab859f0d3296393bc78f69030967ca4d87a541b97a0"},
-    {file = "ruff-0.5.5-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a09b43e02f76ac0145f86a08e045e2ea452066f7ba064fd6b0cdccb486f7c3e7"},
-    {file = "ruff-0.5.5-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0b856cb19c60cd40198be5d8d4b556228e3dcd545b4f423d1ad812bfdca5884"},
-    {file = "ruff-0.5.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3687d002f911e8a5faf977e619a034d159a8373514a587249cc00f211c67a091"},
-    {file = "ruff-0.5.5-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:ac9dc814e510436e30d0ba535f435a7f3dc97f895f844f5b3f347ec8c228a523"},
-    {file = "ruff-0.5.5-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:af9bdf6c389b5add40d89b201425b531e0a5cceb3cfdcc69f04d3d531c6be74f"},
-    {file = "ruff-0.5.5-py3-none-musllinux_1_2_i686.whl", hash = "sha256:d40a8533ed545390ef8315b8e25c4bb85739b90bd0f3fe1280a29ae364cc55d8"},
-    {file = "ruff-0.5.5-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cab904683bf9e2ecbbe9ff235bfe056f0eba754d0168ad5407832928d579e7ab"},
-    {file = "ruff-0.5.5-py3-none-win32.whl", hash = "sha256:696f18463b47a94575db635ebb4c178188645636f05e934fdf361b74edf1bb2d"},
-    {file = "ruff-0.5.5-py3-none-win_amd64.whl", hash = "sha256:50f36d77f52d4c9c2f1361ccbfbd09099a1b2ea5d2b2222c586ab08885cf3445"},
-    {file = "ruff-0.5.5-py3-none-win_arm64.whl", hash = "sha256:3191317d967af701f1b73a31ed5788795936e423b7acce82a2b63e26eb3e89d6"},
-    {file = "ruff-0.5.5.tar.gz", hash = "sha256:cc5516bdb4858d972fbc31d246bdb390eab8df1a26e2353be2dbc0c2d7f5421a"},
+    {file = "ruff-0.6.2-py3-none-linux_armv6l.whl", hash = "sha256:5c8cbc6252deb3ea840ad6a20b0f8583caab0c5ef4f9cca21adc5a92b8f79f3c"},
+    {file = "ruff-0.6.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:17002fe241e76544448a8e1e6118abecbe8cd10cf68fde635dad480dba594570"},
+    {file = "ruff-0.6.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:3dbeac76ed13456f8158b8f4fe087bf87882e645c8e8b606dd17b0b66c2c1158"},
+    {file = "ruff-0.6.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:094600ee88cda325988d3f54e3588c46de5c18dae09d683ace278b11f9d4d534"},
+    {file = "ruff-0.6.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:316d418fe258c036ba05fbf7dfc1f7d3d4096db63431546163b472285668132b"},
+    {file = "ruff-0.6.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d72b8b3abf8a2d51b7b9944a41307d2f442558ccb3859bbd87e6ae9be1694a5d"},
+    {file = "ruff-0.6.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:2aed7e243be68487aa8982e91c6e260982d00da3f38955873aecd5a9204b1d66"},
+    {file = "ruff-0.6.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d371f7fc9cec83497fe7cf5eaf5b76e22a8efce463de5f775a1826197feb9df8"},
+    {file = "ruff-0.6.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8f310d63af08f583363dfb844ba8f9417b558199c58a5999215082036d795a1"},
+    {file = "ruff-0.6.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7db6880c53c56addb8638fe444818183385ec85eeada1d48fc5abe045301b2f1"},
+    {file = "ruff-0.6.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:1175d39faadd9a50718f478d23bfc1d4da5743f1ab56af81a2b6caf0a2394f23"},
+    {file = "ruff-0.6.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:5b939f9c86d51635fe486585389f54582f0d65b8238e08c327c1534844b3bb9a"},
+    {file = "ruff-0.6.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:d0d62ca91219f906caf9b187dea50d17353f15ec9bb15aae4a606cd697b49b4c"},
+    {file = "ruff-0.6.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:7438a7288f9d67ed3c8ce4d059e67f7ed65e9fe3aa2ab6f5b4b3610e57e3cb56"},
+    {file = "ruff-0.6.2-py3-none-win32.whl", hash = "sha256:279d5f7d86696df5f9549b56b9b6a7f6c72961b619022b5b7999b15db392a4da"},
+    {file = "ruff-0.6.2-py3-none-win_amd64.whl", hash = "sha256:d9f3469c7dd43cd22eb1c3fc16926fb8258d50cb1b216658a07be95dd117b0f2"},
+    {file = "ruff-0.6.2-py3-none-win_arm64.whl", hash = "sha256:f28fcd2cd0e02bdf739297516d5643a945cc7caf09bd9bcb4d932540a5ea4fa9"},
+    {file = "ruff-0.6.2.tar.gz", hash = "sha256:239ee6beb9e91feb8e0ec384204a763f36cb53fb895a1a364618c6abb076b3be"},
 ]

 [[package]]
@ -2403,13 +2325,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]

 [[package]]
 name = "sentry-sdk"
-version = "2.12.0"
+version = "2.13.0"
 description = "Python client for Sentry (https://sentry.io)"
 optional = true
 python-versions = ">=3.6"
 files = [
-    {file = "sentry_sdk-2.12.0-py2.py3-none-any.whl", hash = "sha256:7a8d5163d2ba5c5f4464628c6b68f85e86972f7c636acc78aed45c61b98b7a5e"},
-    {file = "sentry_sdk-2.12.0.tar.gz", hash = "sha256:8763840497b817d44c49b3fe3f5f7388d083f2337ffedf008b2cdb63b5c86dc6"},
+    {file = "sentry_sdk-2.13.0-py2.py3-none-any.whl", hash = "sha256:6beede8fc2ab4043da7f69d95534e320944690680dd9a963178a49de71d726c6"},
+    {file = "sentry_sdk-2.13.0.tar.gz", hash = "sha256:8d4a576f7a98eb2fdb40e13106e41f330e5c79d72a68be1316e7852cf4995260"},
 ]

 [package.dependencies]
@ -2436,6 +2358,7 @@ httpx = ["httpx (>=0.16.0)"]
 huey = ["huey (>=2)"]
 huggingface-hub = ["huggingface-hub (>=0.22)"]
 langchain = ["langchain (>=0.0.210)"]
+litestar = ["litestar (>=2.0.0)"]
 loguru = ["loguru (>=0.5)"]
 openai = ["openai (>=1.0.0)", "tiktoken (>=0.3.0)"]
 opentelemetry = ["opentelemetry-distro (>=0.35b0)"]
@ -2802,13 +2725,13 @@ files = [

 [[package]]
 name = "types-jsonschema"
-version = "4.23.0.20240712"
+version = "4.23.0.20240813"
 description = "Typing stubs for jsonschema"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "types-jsonschema-4.23.0.20240712.tar.gz", hash = "sha256:b20db728dcf7ea3e80e9bdeb55e8b8420c6c040cda14e8cf284465adee71d217"},
-    {file = "types_jsonschema-4.23.0.20240712-py3-none-any.whl", hash = "sha256:8c33177ce95336241c1d61ccb56a9964d4361b99d5f1cd81a1ab4909b0dd7cf4"},
+    {file = "types-jsonschema-4.23.0.20240813.tar.gz", hash = "sha256:c93f48206f209a5bc4608d295ac39f172fb98b9e24159ce577dbd25ddb79a1c0"},
+    {file = "types_jsonschema-4.23.0.20240813-py3-none-any.whl", hash = "sha256:be283e23f0b87547316c2ee6b0fd36d95ea30e921db06478029e10b5b6aa6ac3"},
 ]

 [package.dependencies]
@ -2900,13 +2823,13 @@ urllib3 = ">=2"

 [[package]]
 name = "types-setuptools"
-version = "71.1.0.20240726"
+version = "71.1.0.20240818"
 description = "Typing stubs for setuptools"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "types-setuptools-71.1.0.20240726.tar.gz", hash = "sha256:85ba28e9461bb1be86ebba4db0f1c2408f2b11115b1966334ea9dc464e29303e"},
-    {file = "types_setuptools-71.1.0.20240726-py3-none-any.whl", hash = "sha256:a7775376f36e0ff09bcad236bf265777590a66b11623e48c20bfc30f1444ea36"},
+    {file = "types-setuptools-71.1.0.20240818.tar.gz", hash = "sha256:f62eaffaa39774462c65fbb49368c4dc1d91a90a28371cb14e1af090ff0e41e3"},
+    {file = "types_setuptools-71.1.0.20240818-py3-none-any.whl", hash = "sha256:c4f95302f88369ac0ac46c67ddbfc70c6c4dbbb184d9fed356244217a2934025"},
 ]

 [[package]]
@ -3181,4 +3104,4 @@ user-search = ["pyicu"]
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.8.0"
-content-hash = "c165cdc1f6612c9f1b5bfd8063c23e2d595d717dd8ac1a468519e902be2cdf93"
+content-hash = "2bf09e2b68f3abd1a0f9ff2227eb3026ac3d034845acfc120d0b1cb8167ea43b"
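For context, not part of the diff: the `content-hash` at the bottom of poetry.lock is Poetry's fingerprint of the dependency specification in pyproject.toml, so dropping isort/black and pinning a new ruff (see the pyproject.toml diff below) forces it to change. Poetry's real canonicalisation is more involved than this, but the idea can be sketched as a deterministic hash over the spec:

```python
import hashlib
import json

def content_hash(dependency_spec: dict) -> str:
    # Illustrative only: serialise the dependency spec deterministically,
    # then hash it. Poetry's actual algorithm canonicalises more fields.
    canonical = json.dumps(dependency_spec, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Changing any pin changes the hash, which is what the hunk above records:
print(content_hash({"ruff": "0.6.2", "pydantic": "^2"}))
```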
280
pylint.cfg
@ -1,280 +0,0 @@
-[MASTER]
-
-# Specify a configuration file.
-#rcfile=
-
-# Python code to execute, usually for sys.path manipulation such as
-# pygtk.require().
-#init-hook=
-
-# Profiled execution.
-profile=no
-
-# Add files or directories to the blacklist. They should be base names, not
-# paths.
-ignore=CVS
-
-# Pickle collected data for later comparisons.
-persistent=yes
-
-# List of plugins (as comma separated values of python modules names) to load,
-# usually to register additional checkers.
-load-plugins=
-
-
-[MESSAGES CONTROL]
-
-# Enable the message, report, category or checker with the given id(s). You can
-# either give multiple identifier separated by comma (,) or put this option
-# multiple time. See also the "--disable" option for examples.
-#enable=
-
-# Disable the message, report, category or checker with the given id(s). You
-# can either give multiple identifiers separated by comma (,) or put this
-# option multiple times (only on the command line, not in the configuration
-# file where it should appear only once).You can also use "--disable=all" to
-# disable everything first and then reenable specific checks. For example, if
-# you want to run only the similarities checker, you can use "--disable=all
-# --enable=similarities". If you want to run only the classes checker, but have
-# no Warning level messages displayed, use"--disable=all --enable=classes
-# --disable=W"
-disable=missing-docstring
-
-
-[REPORTS]
-
-# Set the output format. Available formats are text, parseable, colorized, msvs
-# (visual studio) and html. You can also give a reporter class, eg
-# mypackage.mymodule.MyReporterClass.
-output-format=text
-
-# Put messages in a separate file for each module / package specified on the
-# command line instead of printing them on stdout. Reports (if any) will be
-# written in a file name "pylint_global.[txt|html]".
-files-output=no
-
-# Tells whether to display a full report or only the messages
-reports=yes
-
-# Python expression which should return a note less than 10 (10 is the highest
-# note). You have access to the variables errors warning, statement which
-# respectively contain the number of errors / warnings messages and the total
-# number of statements analyzed. This is used by the global evaluation report
-# (RP0004).
-evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
-
-# Add a comment according to your evaluation note. This is used by the global
-# evaluation report (RP0004).
-comment=no
-
-# Template used to display messages. This is a python new-style format string
-# used to format the message information. See doc for all details
-#msg-template=
-
-
-[TYPECHECK]
-
-# Tells whether missing members accessed in mixin class should be ignored. A
-# mixin class is detected if its name ends with "mixin" (case insensitive).
-ignore-mixin-members=yes
-
-# List of classes names for which member attributes should not be checked
-# (useful for classes with attributes dynamically set).
-ignored-classes=SQLObject
-
-# When zope mode is activated, add a predefined set of Zope acquired attributes
-# to generated-members.
-zope=no
-
-# List of members which are set dynamically and missed by pylint inference
-# system, and so shouldn't trigger E0201 when accessed. Python regular
-# expressions are accepted.
-generated-members=REQUEST,acl_users,aq_parent
-
-
-[MISCELLANEOUS]
-
-# List of note tags to take in consideration, separated by a comma.
-notes=FIXME,XXX,TODO
-
-
-[SIMILARITIES]
-
-# Minimum lines number of a similarity.
-min-similarity-lines=4
-
-# Ignore comments when computing similarities.
-ignore-comments=yes
-
-# Ignore docstrings when computing similarities.
-ignore-docstrings=yes
-
-# Ignore imports when computing similarities.
-ignore-imports=no
-
-
-[VARIABLES]
-
-# Tells whether we should check for unused import in __init__ files.
-init-import=no
-
-# A regular expression matching the beginning of the name of dummy variables
-# (i.e. not used).
-dummy-variables-rgx=_$|dummy
-
-# List of additional names supposed to be defined in builtins. Remember that
-# you should avoid to define new builtins when possible.
-additional-builtins=
-
-
-[BASIC]
-
-# Required attributes for module, separated by a comma
-required-attributes=
-
-# List of builtins function names that should not be used, separated by a comma
-bad-functions=map,filter,apply,input
-
-# Regular expression which should only match correct module names
-module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
-
-# Regular expression which should only match correct module level names
-const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$
-
-# Regular expression which should only match correct class names
-class-rgx=[A-Z_][a-zA-Z0-9]+$
-
-# Regular expression which should only match correct function names
-function-rgx=[a-z_][a-z0-9_]{2,30}$
-
-# Regular expression which should only match correct method names
-method-rgx=[a-z_][a-z0-9_]{2,30}$
-
-# Regular expression which should only match correct instance attribute names
-attr-rgx=[a-z_][a-z0-9_]{2,30}$
-
-# Regular expression which should only match correct argument names
-argument-rgx=[a-z_][a-z0-9_]{2,30}$
-
-# Regular expression which should only match correct variable names
-variable-rgx=[a-z_][a-z0-9_]{2,30}$
-
-# Regular expression which should only match correct attribute names in class
-# bodies
-class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$
-
-# Regular expression which should only match correct list comprehension /
-# generator expression variable names
-inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$
-
-# Good variable names which should always be accepted, separated by a comma
-good-names=i,j,k,ex,Run,_
-
-# Bad variable names which should always be refused, separated by a comma
-bad-names=foo,bar,baz,toto,tutu,tata
-
-# Regular expression which should only match function or class names that do
-# not require a docstring.
-no-docstring-rgx=__.*__
-
-# Minimum line length for functions/classes that require docstrings, shorter
-# ones are exempt.
-docstring-min-length=-1
-
-
-[FORMAT]
-
-# Maximum number of characters on a single line.
-max-line-length=80
-
-# Regexp for a line that is allowed to be longer than the limit.
-ignore-long-lines=^\s*(# )?<?https?://\S+>?$
-
-# Allow the body of an if to be on the same line as the test if there is no
-# else.
-single-line-if-stmt=no
-
-# List of optional constructs for which whitespace checking is disabled
-no-space-check=trailing-comma,dict-separator
-
-# Maximum number of lines in a module
-max-module-lines=1000
-
-# String used as indentation unit. This is usually "    " (4 spaces) or "\t" (1
-# tab).
-indent-string='    '
-
-
-[DESIGN]
-
-# Maximum number of arguments for function / method
-max-args=5
-
-# Argument names that match this expression will be ignored. Default to name
-# with leading underscore
-ignored-argument-names=_.*
-
-# Maximum number of locals for function / method body
-max-locals=15
-
-# Maximum number of return / yield for function / method body
-max-returns=6
-
-# Maximum number of branch for function / method body
-max-branches=12
-
-# Maximum number of statements in function / method body
-max-statements=50
-
-# Maximum number of parents for a class (see R0901).
-max-parents=7
-
-# Maximum number of attributes for a class (see R0902).
-max-attributes=7
-
-# Minimum number of public methods for a class (see R0903).
-min-public-methods=2
-
-# Maximum number of public methods for a class (see R0904).
-max-public-methods=20
-
-
-[IMPORTS]
-
-# Deprecated modules which should not be used, separated by a comma
-deprecated-modules=regsub,TERMIOS,Bastion,rexec
-
-# Create a graph of every (i.e. internal and external) dependencies in the
-# given file (report RP0402 must not be disabled)
-import-graph=
-
-# Create a graph of external dependencies in the given file (report RP0402 must
-# not be disabled)
-ext-import-graph=
-
-# Create a graph of internal dependencies in the given file (report RP0402 must
-# not be disabled)
-int-import-graph=
-
-
-[CLASSES]
-
-# List of interface methods to ignore, separated by a comma. This is used for
-# instance to not check methods defines in Zope's Interface base class.
-ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by
-
-# List of method names used to declare (i.e. assign) instance attributes.
-defining-attr-methods=__init__,__new__,setUp
-
-# List of valid names for the first argument in a class method.
-valid-classmethod-first-arg=cls
-
-# List of valid names for the first argument in a metaclass class method.
-valid-metaclass-classmethod-first-arg=mcs
-
-
-[EXCEPTIONS]
-
-# Exceptions that will emit a warning when being caught. Defaults to
-# "Exception"
-overgeneral-exceptions=Exception
@ -34,14 +34,9 @@
 name = "Internal Changes"
 showcontent = true

-[tool.black]
-target-version = ['py38', 'py39', 'py310', 'py311']
-# black ignores everything in .gitignore by default, see
-# https://black.readthedocs.io/en/stable/usage_and_configuration/file_collection_and_discovery.html#gitignore
-# Use `extend-exclude` if you want to exclude something in addition to this.
-
 [tool.ruff]
 line-length = 88
+target-version = "py38"

 [tool.ruff.lint]
 # See https://beta.ruff.rs/docs/rules/#error-e
@ -63,6 +58,8 @@ select = [
     "W",
     # pyflakes
     "F",
+    # isort
+    "I001",
     # flake8-bugbear
     "B0",
     # flake8-comprehensions
@ -79,17 +76,20 @@ select = [
     "EXE",
 ]

-[tool.isort]
-line_length = 88
-sections = ["FUTURE", "STDLIB", "THIRDPARTY", "TWISTED", "FIRSTPARTY", "TESTS", "LOCALFOLDER"]
-default_section = "THIRDPARTY"
-known_first_party = ["synapse"]
-known_tests = ["tests"]
-known_twisted = ["twisted", "OpenSSL"]
-multi_line_output = 3
-include_trailing_comma = true
-combine_as_imports = true
-skip_gitignore = true
+[tool.ruff.lint.isort]
+combine-as-imports = true
+section-order = ["future", "standard-library", "third-party", "twisted", "first-party", "testing", "local-folder"]
+known-first-party = ["synapse"]
+
+[tool.ruff.lint.isort.sections]
+twisted = ["twisted", "OpenSSL"]
+testing = ["tests"]
+
+[tool.ruff.format]
+quote-style = "double"
+indent-style = "space"
+skip-magic-trailing-comma = false
+line-ending = "auto"

 [tool.maturin]
 manifest-path = "rust/Cargo.toml"
@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"

 [tool.poetry]
 name = "matrix-synapse"
-version = "1.114.0rc1"
+version = "1.114.0rc3"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
@ -320,9 +320,7 @@ all = [
 # failing on new releases. Keeping lower bounds loose here means that dependabot
 # can bump versions without having to update the content-hash in the lockfile.
 # This helps prevents merge conflicts when running a batch of dependabot updates.
-isort = ">=5.10.1"
-black = ">=22.7.0"
-ruff = "0.5.5"
+ruff = "0.6.2"
 # Type checking only works with the pydantic.v1 compat module from pydantic v2
 pydantic = "^2"
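For context, not part of the diff: the new `[tool.ruff.lint.isort]` tables reproduce the custom isort sections the old `[tool.isort]` block defined, so import grouping is unchanged even though the tool is now ruff's `I001` rule. Assuming the synapse package and its test tree are importable, the enforced grouping looks roughly like this:

```python
# Grouping implied by section-order = ["future", "standard-library",
# "third-party", "twisted", "first-party", "testing", "local-folder"]:

from __future__ import annotations  # future

import logging                      # standard-library

import attr                         # third-party

from twisted.internet import defer  # twisted (custom section)

from synapse.types import JsonDict  # first-party

from tests import unittest          # testing (custom section)
```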
168
requirements.txt
@ -1,9 +1,9 @@
 annotated-types==0.5.0 ; python_version >= "3.8" and python_full_version < "4.0.0" \
     --hash=sha256:47cdc3490d9ac1506ce92c7aaa76c579dc3509ff11e098fc867e5130ab7be802 \
     --hash=sha256:58da39888f92c276ad970249761ebea80ba544b77acddaa1a4d6cf78287d45fd
-attrs==23.2.0 ; python_version >= "3.8" and python_full_version < "4.0.0" \
-    --hash=sha256:935dc3b529c262f6cf76e50877d35a4bd3c1de194fd41f47a2b7ae8f19971f30 \
-    --hash=sha256:99b87a485a5820b23b879f04c2305b44b951b502fd64be915879d77a7e8fc6f1
+attrs==24.2.0 ; python_version >= "3.8" and python_full_version < "4.0.0" \
+    --hash=sha256:5cfb1b9148b5b086569baec03f20d7b6bf3bcacc9a42bebf87ffaaca362f6346 \
+    --hash=sha256:81921eb96de3191c8258c199618104dd27ac608d9366f5e35d011eae1867ede2
 authlib==1.3.1 ; python_version >= "3.8" and python_full_version < "4.0.0" \
     --hash=sha256:7ae843f03c06c5c0debd63c9db91f9fda64fa62a42a77419fa15fbb7e7a58917 \
     --hash=sha256:d35800b973099bbadc49b42b256ecb80041ad56b7fe1216a362c7943c088f377
@ -191,39 +191,34 @@ charset-normalizer==3.1.0 ; python_version >= "3.8" and python_full_version < "4
 constantly==15.1.0 ; python_full_version >= "3.8.0" and python_full_version < "4.0.0" \
     --hash=sha256:586372eb92059873e29eba4f9dec8381541b4d3834660707faf8ba59146dfc35 \
     --hash=sha256:dd2fa9d6b1a51a83f0d7dd76293d734046aa176e384bf6e33b7e44880eb37c5d
-cryptography==42.0.8 ; python_version >= "3.8" and python_full_version < "4.0.0" \
-    --hash=sha256:013629ae70b40af70c9a7a5db40abe5d9054e6f4380e50ce769947b73bf3caad \
-    --hash=sha256:2346b911eb349ab547076f47f2e035fc8ff2c02380a7cbbf8d87114fa0f1c583 \
-    --hash=sha256:2f66d9cd9147ee495a8374a45ca445819f8929a3efcd2e3df6428e46c3cbb10b \
-    --hash=sha256:2f88d197e66c65be5e42cd72e5c18afbfae3f741742070e3019ac8f4ac57262c \
-    --hash=sha256:31f721658a29331f895a5a54e7e82075554ccfb8b163a18719d342f5ffe5ecb1 \
-    --hash=sha256:343728aac38decfdeecf55ecab3264b015be68fc2816ca800db649607aeee648 \
-    --hash=sha256:5226d5d21ab681f432a9c1cf8b658c0cb02533eece706b155e5fbd8a0cdd3949 \
-    --hash=sha256:57080dee41209e556a9a4ce60d229244f7a66ef52750f813bfbe18959770cfba \
-    --hash=sha256:5a94eccb2a81a309806027e1670a358b99b8fe8bfe9f8d329f27d72c094dde8c \
-    --hash=sha256:6b7c4f03ce01afd3b76cf69a5455caa9cfa3de8c8f493e0d3ab7d20611c8dae9 \
-    --hash=sha256:7016f837e15b0a1c119d27ecd89b3515f01f90a8615ed5e9427e30d9cdbfed3d \
-    --hash=sha256:81884c4d096c272f00aeb1f11cf62ccd39763581645b0812e99a91505fa48e0c \
-    --hash=sha256:81d8a521705787afe7a18d5bfb47ea9d9cc068206270aad0b96a725022e18d2e \
-    --hash=sha256:8d09d05439ce7baa8e9e95b07ec5b6c886f548deb7e0f69ef25f64b3bce842f2 \
-    --hash=sha256:961e61cefdcb06e0c6d7e3a1b22ebe8b996eb2bf50614e89384be54c48c6b63d \
-    --hash=sha256:9c0c1716c8447ee7dbf08d6db2e5c41c688544c61074b54fc4564196f55c25a7 \
-    --hash=sha256:a0608251135d0e03111152e41f0cc2392d1e74e35703960d4190b2e0f4ca9c70 \
-    --hash=sha256:a0c5b2b0585b6af82d7e385f55a8bc568abff8923af147ee3c07bd8b42cda8b2 \
-    --hash=sha256:ad803773e9df0b92e0a817d22fd8a3675493f690b96130a5e24f1b8fabbea9c7 \
-    --hash=sha256:b297f90c5723d04bcc8265fc2a0f86d4ea2e0f7ab4b6994459548d3a6b992a14 \
-    --hash=sha256:ba4f0a211697362e89ad822e667d8d340b4d8d55fae72cdd619389fb5912eefe \
-    --hash=sha256:c4783183f7cb757b73b2ae9aed6599b96338eb957233c58ca8f49a49cc32fd5e \
-    --hash=sha256:c9bb2ae11bfbab395bdd072985abde58ea9860ed84e59dbc0463a5d0159f5b71 \
-    --hash=sha256:cafb92b2bc622cd1aa6a1dce4b93307792633f4c5fe1f46c6b97cf67073ec961 \
-    --hash=sha256:d45b940883a03e19e944456a558b67a41160e367a719833c53de6911cabba2b7 \
-    --hash=sha256:dc0fdf6787f37b1c6b08e6dfc892d9d068b5bdb671198c72072828b80bd5fe4c \
-    --hash=sha256:dea567d1b0e8bc5764b9443858b673b734100c2871dc93163f58c46a97a83d28 \
-    --hash=sha256:dec9b018df185f08483f294cae6ccac29e7a6e0678996587363dc352dc65c842 \
-    --hash=sha256:e3ec3672626e1b9e55afd0df6d774ff0e953452886e06e0f1eb7eb0c832e8902 \
-    --hash=sha256:e599b53fd95357d92304510fb7bda8523ed1f79ca98dce2f43c115950aa78801 \
-    --hash=sha256:fa76fbb7596cc5839320000cdd5d0955313696d9511debab7ee7278fc8b5c84a \
-    --hash=sha256:fff12c88a672ab9c9c1cf7b0c80e3ad9e2ebd9d828d955c126be4fd3e5578c9e
+cryptography==43.0.0 ; python_version >= "3.8" and python_full_version < "4.0.0" \
+    --hash=sha256:0663585d02f76929792470451a5ba64424acc3cd5227b03921dab0e2f27b1709 \
+    --hash=sha256:08a24a7070b2b6804c1940ff0f910ff728932a9d0e80e7814234269f9d46d069 \
+    --hash=sha256:232ce02943a579095a339ac4b390fbbe97f5b5d5d107f8a08260ea2768be8cc2 \
+    --hash=sha256:2905ccf93a8a2a416f3ec01b1a7911c3fe4073ef35640e7ee5296754e30b762b \
+    --hash=sha256:299d3da8e00b7e2b54bb02ef58d73cd5f55fb31f33ebbf33bd00d9aa6807df7e \
+    --hash=sha256:2c6d112bf61c5ef44042c253e4859b3cbbb50df2f78fa8fae6747a7814484a70 \
+    --hash=sha256:31e44a986ceccec3d0498e16f3d27b2ee5fdf69ce2ab89b52eaad1d2f33d8778 \
+    --hash=sha256:3d9a1eca329405219b605fac09ecfc09ac09e595d6def650a437523fcd08dd22 \
+    --hash=sha256:3dcdedae5c7710b9f97ac6bba7e1052b95c7083c9d0e9df96e02a1932e777895 \
+    --hash=sha256:47ca71115e545954e6c1d207dd13461ab81f4eccfcb1345eac874828b5e3eaaf \
+    --hash=sha256:4a997df8c1c2aae1e1e5ac49c2e4f610ad037fc5a3aadc7b64e39dea42249431 \
+    --hash=sha256:51956cf8730665e2bdf8ddb8da0056f699c1a5715648c1b0144670c1ba00b48f \
+    --hash=sha256:5bcb8a5620008a8034d39bce21dc3e23735dfdb6a33a06974739bfa04f853947 \
+    --hash=sha256:64c3f16e2a4fc51c0d06af28441881f98c5d91009b8caaff40cf3548089e9c74 \
+    --hash=sha256:6e2b11c55d260d03a8cf29ac9b5e0608d35f08077d8c087be96287f43af3ccdc \
+    --hash=sha256:7b3f5fe74a5ca32d4d0f302ffe6680fcc5c28f8ef0dc0ae8f40c0f3a1b4fca66 \
+    --hash=sha256:844b6d608374e7d08f4f6e6f9f7b951f9256db41421917dfb2d003dde4cd6b66 \
+    --hash=sha256:9a8d6802e0825767476f62aafed40532bd435e8a5f7d23bd8b4f5fd04cc80ecf \
+    --hash=sha256:aae4d918f6b180a8ab8bf6511a419473d107df4dbb4225c7b48c5c9602c38c7f \
+    --hash=sha256:ac1955ce000cb29ab40def14fd1bbfa7af2017cca696ee696925615cafd0dce5 \
+    --hash=sha256:b88075ada2d51aa9f18283532c9f60e72170041bba88d7f37e49cbb10275299e \
+    --hash=sha256:cb013933d4c127349b3948aa8aaf2f12c0353ad0eccd715ca789c8a0f671646f \
+    --hash=sha256:cc70b4b581f28d0a254d006f26949245e3657d40d8857066c2ae22a61222ef55 \
+    --hash=sha256:e9c5266c432a1e23738d178e51c2c7a5e2ddf790f248be939448c0ba2021f9d1 \
+    --hash=sha256:ea9e57f8ea880eeea38ab5abf9fbe39f923544d7884228ec67d666abd60f5a47 \
+    --hash=sha256:ee0c405832ade84d4de74b9029bedb7b31200600fa524d218fc29bfa371e97f5 \
+    --hash=sha256:fdcb265de28585de5b859ae13e3846a8e805268a823a12a4da2597f1f5afc9f0
 hiredis==3.0.0 ; python_version >= "3.8" and python_full_version < "4.0.0" \
     --hash=sha256:00018f22f38530768b73ea86c11f47e8d4df65facd4e562bd78773bd1baef35e \
     --hash=sha256:034925b5fb514f7b11aac38cd55b3fd7e9d3af23bd6497f3f20aa5b8ba58e232 \
@ -697,9 +692,9 @@ packaging==24.1 ; python_version >= "3.8" and python_full_version < "4.0.0" \
 parameterized==0.9.0 ; python_full_version >= "3.8.0" and python_full_version < "4.0.0" \
     --hash=sha256:4e0758e3d41bea3bbd05ec14fc2c24736723f243b28d702081aef438c9372b1b \
     --hash=sha256:7fc905272cefa4f364c1a3429cbbe9c0f98b793988efb5bf90aac80f08db09b1
-phonenumbers==8.13.43 ; python_full_version >= "3.8.0" and python_full_version < "4.0.0" \
-    --hash=sha256:339e521403fe4dd9c664dbbeb2fe434f9ea5c81e54c0fdfadbaeb53b26a76c27 \
-    --hash=sha256:35b904e4a79226eee027fbb467a9aa6f1ab9ffc3c09c91bf14b885c154936726
+phonenumbers==8.13.44 ; python_full_version >= "3.8.0" and python_full_version < "4.0.0" \
+    --hash=sha256:2175021e84ee4e41b43c890f2d0af51f18c6ca9ad525886d6d6e4ea882e46fac \
+    --hash=sha256:52cd02865dab1428ca9e89d442629b61d407c7dc687cfb80a3e8d068a584513c
 pillow==10.4.0 ; python_version >= "3.8" and python_full_version < "4.0.0" \
     --hash=sha256:02a2be69f9c9b8c1e97cf2713e789d4e398c751ecfd9967c18d0ce304efbf885 \
     --hash=sha256:030abdbe43ee02e0de642aee345efa443740aa4d828bfe8e2eb11922ea6a21ea \
@ -927,47 +922,60 @@ pyopenssl==24.2.1 ; python_full_version >= "3.8.0" and python_full_version < "4.
 python-multipart==0.0.9 ; python_version >= "3.8" and python_full_version < "4.0.0" \
     --hash=sha256:03f54688c663f1b7977105f021043b0793151e4cb1c1a9d4a11fc13d622c4026 \
     --hash=sha256:97ca7b8ea7b05f977dc3849c3ba99d51689822fab725c3703af7c866a0c2b215
-pyyaml==6.0.1 ; python_full_version >= "3.8.0" and python_full_version < "4.0.0" \
-    --hash=sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc \
-    --hash=sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741 \
-    --hash=sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206 \
-    --hash=sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27 \
-    --hash=sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595 \
-    --hash=sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62 \
-    --hash=sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98 \
-    --hash=sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696 \
-    --hash=sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d \
-    --hash=sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867 \
-    --hash=sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47 \
-    --hash=sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486 \
-    --hash=sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6 \
-    --hash=sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3 \
-    --hash=sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007 \
-    --hash=sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938 \
-    --hash=sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c \
-    --hash=sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735 \
-    --hash=sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d \
-    --hash=sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba \
-    --hash=sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8 \
-    --hash=sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5 \
-    --hash=sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd \
-    --hash=sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3 \
-    --hash=sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0 \
-    --hash=sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515 \
-    --hash=sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c \
-    --hash=sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c \
-    --hash=sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924 \
-    --hash=sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34 \
-    --hash=sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43 \
-    --hash=sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859 \
-    --hash=sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673 \
-    --hash=sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a \
-    --hash=sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab \
-    --hash=sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa \
-    --hash=sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c \
-    --hash=sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585 \
-    --hash=sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d \
-    --hash=sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f
+pyyaml==6.0.2 ; python_version >= "3.8" and python_full_version < "4.0.0" \
+    --hash=sha256:01179a4a8559ab5de078078f37e5c1a30d76bb88519906844fd7bdea1b7729ff \
+    --hash=sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48 \
+    --hash=sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086 \
+    --hash=sha256:0b69e4ce7a131fe56b7e4d770c67429700908fc0752af059838b1cfb41960e4e \
+    --hash=sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133 \
+    --hash=sha256:11d8f3dd2b9c1207dcaf2ee0bbbfd5991f571186ec9cc78427ba5bd32afae4b5 \
+    --hash=sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484 \
+    --hash=sha256:1e2120ef853f59c7419231f3bf4e7021f1b936f6ebd222406c3b60212205d2ee \
+    --hash=sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5 \
+    --hash=sha256:23502f431948090f597378482b4812b0caae32c22213aecf3b55325e049a6c68 \
+    --hash=sha256:24471b829b3bf607e04e88d79542a9d48bb037c2267d7927a874e6c205ca7e9a \
+    --hash=sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf \
+    --hash=sha256:2e99c6826ffa974fe6e27cdb5ed0021786b03fc98e5ee3c5bfe1fd5015f42b99 \
+    --hash=sha256:39693e1f8320ae4f43943590b49779ffb98acb81f788220ea932a6b6c51004d8 \
+    --hash=sha256:3ad2a3decf9aaba3d29c8f537ac4b243e36bef957511b4766cb0057d32b0be85 \
+    --hash=sha256:3b1fdb9dc17f5a7677423d508ab4f243a726dea51fa5e70992e59a7411c89d19 \
+    --hash=sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc \
+    --hash=sha256:43fa96a3ca0d6b1812e01ced1044a003533c47f6ee8aca31724f78e93ccc089a \
+    --hash=sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1 \
+    --hash=sha256:5ac9328ec4831237bec75defaf839f7d4564be1e6b25ac710bd1a96321cc8317 \
+    --hash=sha256:5d225db5a45f21e78dd9358e58a98702a0302f2659a3c6cd320564b75b86f47c \
+    --hash=sha256:6395c297d42274772abc367baaa79683958044e5d3835486c16da75d2a694631 \
+    --hash=sha256:688ba32a1cffef67fd2e9398a2efebaea461578b0923624778664cc1c914db5d \
+    --hash=sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652 \
+    --hash=sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5 \
+    --hash=sha256:797b4f722ffa07cc8d62053e4cff1486fa6dc094105d13fea7b1de7d8bf71c9e \
+    --hash=sha256:7c36280e6fb8385e520936c3cb3b8042851904eba0e58d277dca80a5cfed590b \
+    --hash=sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8 \
+    --hash=sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476 \
+    --hash=sha256:82d09873e40955485746739bcb8b4586983670466c23382c19cffecbf1fd8706 \
+    --hash=sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563 \
+    --hash=sha256:8824b5a04a04a047e72eea5cec3bc266db09e35de6bdfe34c9436ac5ee27d237 \
+    --hash=sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b \
+    --hash=sha256:9056c1ecd25795207ad294bcf39f2db3d845767be0ea6e6a34d856f006006083 \
+    --hash=sha256:936d68689298c36b53b29f23c6dbb74de12b4ac12ca6cfe0e047bedceea56180 \
+    --hash=sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425 \
+    --hash=sha256:a4d3091415f010369ae4ed1fc6b79def9416358877534caf6a0fdd2146c87a3e \
+    --hash=sha256:a8786accb172bd8afb8be14490a16625cbc387036876ab6ba70912730faf8e1f \
+    --hash=sha256:a9f8c2e67970f13b16084e04f134610fd1d374bf477b17ec1599185cf611d725 \
+    --hash=sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183 \
+    --hash=sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab \
+    --hash=sha256:cc1c1159b3d456576af7a3e4d1ba7e6924cb39de8f67111c735f6fc832082774 \
+    --hash=sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725 \
+    --hash=sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e \
+    --hash=sha256:d7fded462629cfa4b685c5416b949ebad6cec74af5e2d42905d41e257e0869f5 \
+    --hash=sha256:d84a1718ee396f54f3a086ea0a66d8e552b2ab2017ef8b420e92edbc841c352d \
+    --hash=sha256:d8e03406cac8513435335dbab54c0d385e4a49e4945d2909a581c83647ca0290 \
+    --hash=sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44 \
+    --hash=sha256:ec031d5d2feb36d1d1a24380e4db6d43695f3748343d99434e6f5f9156aaa2ed \
+    --hash=sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4 \
+    --hash=sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba \
+    --hash=sha256:f753120cb8181e736c57ef7636e83f31b9c0d1722c516f7e86cf15b7aa57ff12 \
+    --hash=sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4
 referencing==0.29.1 ; python_version >= "3.8" and python_full_version < "4.0.0" \
     --hash=sha256:90cb53782d550ba28d2166ef3f55731f38397def8832baac5d45235f1995e35e \
     --hash=sha256:d3c8f323ee1480095da44d55917cfb8278d73d6b4d5f677e3e40eb21314ac67f
@ -1,8 +1,9 @@
 #!/usr/bin/env bash
 #
 # Runs linting scripts over the local Synapse checkout
-# black - opinionated code formatter
 # ruff - lints and finds mistakes
+# mypy - typechecks python code
+# cargo clippy - lints rust code

 set -e

@ -101,12 +102,6 @@ echo
 # Print out the commands being run
 set -x

-# Ensure the sort order of imports.
-isort "${files[@]}"
-
-# Ensure Python code conforms to an opinionated style.
-python3 -m black "${files[@]}"
-
 # Ensure the sample configuration file conforms to style checks.
 ./scripts-dev/config-lint.sh
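For context, not part of the diff: with isort and black removed from the script, ruff covers both jobs. A minimal sketch of the equivalent invocations from Python, assuming a ruff 0.6.x binary is on PATH:

```python
import subprocess
from typing import List

def lint(paths: List[str]) -> None:
    # Lint (including import sorting via the I001 rule) and auto-fix.
    subprocess.run(["ruff", "check", "--fix", *paths], check=True)
    # Format the code, replacing what black used to do.
    subprocess.run(["ruff", "format", *paths], check=True)

lint(["synapse", "tests"])
```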
@ -38,6 +38,7 @@ from mypy.types import (
     NoneType,
     TupleType,
     TypeAliasType,
+    TypeVarType,
     UninhabitedType,
     UnionType,
 )
@ -233,6 +234,7 @@ IMMUTABLE_CUSTOM_TYPES = {
     "synapse.synapse_rust.push.FilteredPushRules",
     # This is technically not immutable, but close enough.
     "signedjson.types.VerifyKey",
+    "synapse.types.StrCollection",
 }

 # Immutable containers only if the values are also immutable.
@ -298,7 +300,7 @@ def is_cacheable(

     elif rt.type.fullname in MUTABLE_CONTAINER_TYPES:
         # Mutable containers are mutable regardless of their underlying type.
-        return False, None
+        return False, f"container {rt.type.fullname} is mutable"

     elif "attrs" in rt.type.metadata:
         # attrs classes are only cachable iff it is frozen (immutable itself)
@ -318,6 +320,9 @@ def is_cacheable(
         else:
             return False, "non-frozen attrs class"

+    elif rt.type.is_enum:
+        # We assume Enum values are immutable
+        return True, None
     else:
         # Ensure we fail for unknown types, these generally means that the
         # above code is not complete.
@ -326,6 +331,18 @@ def is_cacheable(
             f"Don't know how to handle {rt.type.fullname} return type instance",
         )

+    elif isinstance(rt, TypeVarType):
+        # We consider TypeVars immutable if they are bound to a set of immutable
+        # types.
+        if rt.values:
+            for value in rt.values:
+                ok, note = is_cacheable(value, signature, verbose)
+                if not ok:
+                    return False, f"TypeVar bound not cacheable {value}"
+            return True, None
+
+        return False, "TypeVar is unbound"
+
     elif isinstance(rt, NoneType):
         # None is cachable.
         return True, None
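For context, not part of the diff: the new `TypeVarType` branch lets the plugin vet `@cached` functions whose return type is a constrained TypeVar. A toy illustration of the distinction it draws, with made-up function names:

```python
from typing import Dict, TypeVar

# Constrained to immutable types: every type the TypeVar can take is
# cacheable, so the plugin would accept a cached function returning it.
KT = TypeVar("KT", str, int)

# Unconstrained: could later be bound to a mutable type such as list,
# so the plugin would reject it ("TypeVar is unbound").
T = TypeVar("T")

def first_key(d: Dict[KT, object]) -> KT:  # safe to cache
    return next(iter(d))

def identity(x: T) -> T:  # not provably safe to cache
    return x

print(first_key({"a": 1}))  # -> "a"
```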
@ -56,7 +56,9 @@ def main() -> None:
         password_pepper = password_config.get("pepper", password_pepper)
     password = args.password

-    if not password:
+    if not password and not sys.stdin.isatty():
+        password = sys.stdin.readline().strip()
+    elif not password:
         password = prompt_for_pass()

     # On Python 2, make sure we decode it to Unicode before we normalise it
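For context, not part of the diff: this is the change that lets `hash_password` accept a password piped on stdin. A condensed, runnable sketch of the resulting behaviour, using a hypothetical `read_password` helper rather than the script's real structure:

```python
import sys
from getpass import getpass

def read_password(cli_password: str = "") -> str:
    # Prefer an explicit command-line argument.
    if cli_password:
        return cli_password
    # If stdin is a pipe (e.g. `echo secret | hash_password`), read from it.
    if not sys.stdin.isatty():
        return sys.stdin.readline().strip()
    # Otherwise fall back to an interactive prompt.
    return getpass("Password: ")
```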
@ -121,7 +121,9 @@ class MSC3861DelegatedAuth(BaseAuth):
         self._hostname = hs.hostname
         self._admin_token = self._config.admin_token

-        self._issuer_metadata = RetryOnExceptionCachedCall(self._load_metadata)
+        self._issuer_metadata = RetryOnExceptionCachedCall[OpenIDProviderMetadata](
+            self._load_metadata
+        )

         if isinstance(auth_method, PrivateKeyJWTWithKid):
             # Use the JWK as the client secret when using the private_key_jwt method
@ -145,6 +147,33 @@ class MSC3861DelegatedAuth(BaseAuth):
         # metadata.validate_introspection_endpoint()
         return metadata

+    async def issuer(self) -> str:
+        """
+        Get the configured issuer
+
+        This will use the issuer value set in the metadata,
+        falling back to the one set in the config if not set in the metadata
+        """
+        metadata = await self._issuer_metadata.get()
+        return metadata.issuer or self._config.issuer
+
+    async def account_management_url(self) -> Optional[str]:
+        """
+        Get the configured account management URL
+
+        This will discover the account management URL from the issuer if it's not set in the config
+        """
+        if self._config.account_management_url is not None:
+            return self._config.account_management_url
+
+        try:
+            metadata = await self._issuer_metadata.get()
+            return metadata.get("account_management_uri", None)
+        # We don't want to raise here if we can't load the metadata
+        except Exception:
+            logger.warning("Failed to load metadata:", exc_info=True)
+            return None
+
     async def _introspection_endpoint(self) -> str:
         """
         Returns the introspection endpoint of the issuer
@ -154,7 +183,7 @@ class MSC3861DelegatedAuth(BaseAuth):
         if self._config.introspection_endpoint is not None:
             return self._config.introspection_endpoint

-        metadata = await self._load_metadata()
+        metadata = await self._issuer_metadata.get()
         return metadata.get("introspection_endpoint")

     async def _introspect_token(self, token: str) -> IntrospectionToken:
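For context, not part of the diff: the edits above route every metadata read through the shared `RetryOnExceptionCachedCall`, so OIDC discovery runs at most once per process while failures remain retryable. A minimal sketch of that memoisation pattern (not Synapse's actual implementation, which may differ in detail):

```python
import asyncio
from typing import Awaitable, Callable, Generic, Optional, TypeVar

T = TypeVar("T")

class RetryOnExceptionCachedCallSketch(Generic[T]):
    """Run an async callable once; cache the result, retry after failure.

    Assumes the wrapped call never legitimately returns None.
    """

    def __init__(self, f: Callable[[], Awaitable[T]]) -> None:
        self._f = f
        self._result: Optional[T] = None
        self._lock = asyncio.Lock()

    async def get(self) -> T:
        async with self._lock:
            if self._result is None:
                # If this raises, nothing is cached and the next
                # caller retries the underlying call.
                self._result = await self._f()
            return self._result
```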
@ -98,6 +98,7 @@ from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
 from synapse.storage.databases.main.search import SearchStore
 from synapse.storage.databases.main.session import SessionStore
 from synapse.storage.databases.main.signatures import SignatureWorkerStore
+from synapse.storage.databases.main.sliding_sync import SlidingSyncStore
 from synapse.storage.databases.main.state import StateGroupWorkerStore
 from synapse.storage.databases.main.stats import StatsStore
 from synapse.storage.databases.main.stream import StreamWorkerStore
@ -159,6 +160,7 @@ class GenericWorkerStore(
     SessionStore,
     TaskSchedulerWorkerStore,
     ExperimentalFeaturesStore,
+    SlidingSyncStore,
 ):
     # Properties that multiple storage classes define. Tell mypy what the
     # expected type is.
@@ -183,8 +183,29 @@ class RoomSummaryHandler:
     ) -> JsonDict:
         """See docstring for SpaceSummaryHandler.get_room_hierarchy."""

-        # First of all, check that the room is accessible.
-        if not await self._is_local_room_accessible(requested_room_id, requester):
-            raise UnstableSpecAuthError(
-                403,
-                "User %s not in room %s, and room previews are disabled"
+        # If the room is available locally, quickly check that the user can access it.
+        local_room = await self._store.is_host_joined(
+            requested_room_id, self._server_name
+        )
+        if local_room and not await self._is_local_room_accessible(
+            requested_room_id, requester
+        ):
+            raise UnstableSpecAuthError(
+                403,
+                "User %s not in room %s, and room previews are disabled"
+                % (requester, requested_room_id),
+                errcode=Codes.NOT_JOINED,
+            )
+
+        if not local_room:
+            room_hierarchy = await self._summarize_remote_room_hierarchy(
+                _RoomQueueEntry(requested_room_id, ()),
+                False,
+            )
+            root_room_entry = room_hierarchy[0]
+            if not root_room_entry or not await self._is_remote_room_accessible(
+                requester, requested_room_id, root_room_entry.room
+            ):
+                raise UnstableSpecAuthError(
+                    403,
+                    "User %s not in room %s, and room previews are disabled"
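For illustration, the fixed control flow is: apply the local accessibility rules only when this homeserver is actually joined to the room, and otherwise probe the room summary over federation before rejecting. A minimal standalone sketch of that ordering (every name here is a hypothetical stand-in, not a Synapse API):

import asyncio
from typing import Optional


class AuthError(Exception):
    pass


async def check_hierarchy_access(
    room_id: str,
    requester: str,
    *,
    host_in_room: bool,
    local_ok: bool,
    remote_entry: Optional[dict],
    remote_ok: bool,
) -> None:
    """Mirror the fix: consult local state only when this server is in the
    room; otherwise fall back to a federation probe instead of failing."""
    if host_in_room:
        if not local_ok:
            raise AuthError(f"{requester} cannot access {room_id}")
        return

    # Room is not known locally: a remote room summary plus the remote
    # accessibility rules decide, rather than an unconditional 403.
    if remote_entry is None or not remote_ok:
        raise AuthError(f"{requester} cannot access {room_id}")


async def main() -> None:
    # Before the fix this case returned 403 even though the room was
    # reachable over federation; now the remote path is consulted.
    await check_hierarchy_access(
        "!a:remote.example",
        "@user:local.example",
        host_in_room=False,
        local_ok=False,
        remote_entry={"room_id": "!a:remote.example"},
        remote_ok=True,
    )
    print("access granted via federation")


asyncio.run(main())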
File diff suppressed because it is too large.

1079  synapse/handlers/sliding_sync/__init__.py  (new file; diff suppressed because it is too large)

699  synapse/handlers/sliding_sync/extensions.py  (new file)
@@ -0,0 +1,699 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#

import itertools
import logging
from typing import TYPE_CHECKING, AbstractSet, Dict, Mapping, Optional, Sequence, Set

from typing_extensions import assert_never

from synapse.api.constants import AccountDataTypes, EduTypes
from synapse.handlers.receipts import ReceiptEventSource
from synapse.logging.opentracing import trace
from synapse.storage.databases.main.receipts import ReceiptInRoom
from synapse.types import (
    DeviceListUpdates,
    JsonMapping,
    MultiWriterStreamToken,
    SlidingSyncStreamToken,
    StrCollection,
    StreamToken,
)
from synapse.types.handlers.sliding_sync import (
    HaveSentRoomFlag,
    MutablePerConnectionState,
    OperationType,
    PerConnectionState,
    SlidingSyncConfig,
    SlidingSyncResult,
)

if TYPE_CHECKING:
    from synapse.server import HomeServer

logger = logging.getLogger(__name__)


class SlidingSyncExtensionHandler:
    """Handles the extensions to sliding sync."""

    def __init__(self, hs: "HomeServer"):
        self.store = hs.get_datastores().main
        self.event_sources = hs.get_event_sources()
        self.device_handler = hs.get_device_handler()
        self.push_rules_handler = hs.get_push_rules_handler()

    @trace
    async def get_extensions_response(
        self,
        sync_config: SlidingSyncConfig,
        previous_connection_state: "PerConnectionState",
        new_connection_state: "MutablePerConnectionState",
        actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
        actual_room_ids: Set[str],
        actual_room_response_map: Mapping[str, SlidingSyncResult.RoomResult],
        to_token: StreamToken,
        from_token: Optional[SlidingSyncStreamToken],
    ) -> SlidingSyncResult.Extensions:
        """Handle extension requests.

        Args:
            sync_config: Sync configuration
            new_connection_state: Snapshot of the current per-connection state
            new_per_connection_state: A mutable copy of the per-connection
                state, used to record updates to the state during this request.
            actual_lists: Sliding window API. A map of list key to list results in the
                Sliding Sync response.
            actual_room_ids: The actual room IDs in the the Sliding Sync response.
            actual_room_response_map: A map of room ID to room results in the the
                Sliding Sync response.
            to_token: The point in the stream to sync up to.
            from_token: The point in the stream to sync from.
        """

        if sync_config.extensions is None:
            return SlidingSyncResult.Extensions()

        to_device_response = None
        if sync_config.extensions.to_device is not None:
            to_device_response = await self.get_to_device_extension_response(
                sync_config=sync_config,
                to_device_request=sync_config.extensions.to_device,
                to_token=to_token,
            )

        e2ee_response = None
        if sync_config.extensions.e2ee is not None:
            e2ee_response = await self.get_e2ee_extension_response(
                sync_config=sync_config,
                e2ee_request=sync_config.extensions.e2ee,
                to_token=to_token,
                from_token=from_token,
            )

        account_data_response = None
        if sync_config.extensions.account_data is not None:
            account_data_response = await self.get_account_data_extension_response(
                sync_config=sync_config,
                actual_lists=actual_lists,
                actual_room_ids=actual_room_ids,
                account_data_request=sync_config.extensions.account_data,
                to_token=to_token,
                from_token=from_token,
            )

        receipts_response = None
        if sync_config.extensions.receipts is not None:
            receipts_response = await self.get_receipts_extension_response(
                sync_config=sync_config,
                previous_connection_state=previous_connection_state,
                new_connection_state=new_connection_state,
                actual_lists=actual_lists,
                actual_room_ids=actual_room_ids,
                actual_room_response_map=actual_room_response_map,
                receipts_request=sync_config.extensions.receipts,
                to_token=to_token,
                from_token=from_token,
            )

        typing_response = None
        if sync_config.extensions.typing is not None:
            typing_response = await self.get_typing_extension_response(
                sync_config=sync_config,
                actual_lists=actual_lists,
                actual_room_ids=actual_room_ids,
                actual_room_response_map=actual_room_response_map,
                typing_request=sync_config.extensions.typing,
                to_token=to_token,
                from_token=from_token,
            )

        return SlidingSyncResult.Extensions(
            to_device=to_device_response,
            e2ee=e2ee_response,
            account_data=account_data_response,
            receipts=receipts_response,
            typing=typing_response,
        )

    def find_relevant_room_ids_for_extension(
        self,
        requested_lists: Optional[StrCollection],
        requested_room_ids: Optional[StrCollection],
        actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
        actual_room_ids: AbstractSet[str],
    ) -> Set[str]:
        """
        Handle the reserved `lists`/`rooms` keys for extensions. Extensions should only
        return results for rooms in the Sliding Sync response. This matches up the
        requested rooms/lists with the actual lists/rooms in the Sliding Sync response.

        {"lists": []}                    // Do not process any lists.
        {"lists": ["rooms", "dms"]}      // Process only a subset of lists.
        {"lists": ["*"]}                 // Process all lists defined in the Sliding Window API. (This is the default.)

        {"rooms": []}                    // Do not process any specific rooms.
        {"rooms": ["!a:b", "!c:d"]}      // Process only a subset of room subscriptions.
        {"rooms": ["*"]}                 // Process all room subscriptions defined in the Room Subscription API. (This is the default.)

        Args:
            requested_lists: The `lists` from the extension request.
            requested_room_ids: The `rooms` from the extension request.
            actual_lists: The actual lists from the Sliding Sync response.
            actual_room_ids: The actual room subscriptions from the Sliding Sync request.
        """

        # We only want to include account data for rooms that are already in the sliding
        # sync response AND that were requested in the account data request.
        relevant_room_ids: Set[str] = set()

        # See what rooms from the room subscriptions we should get account data for
        if requested_room_ids is not None:
            for room_id in requested_room_ids:
                # A wildcard means we process all rooms from the room subscriptions
                if room_id == "*":
                    relevant_room_ids.update(actual_room_ids)
                    break

                if room_id in actual_room_ids:
                    relevant_room_ids.add(room_id)

        # See what rooms from the sliding window lists we should get account data for
        if requested_lists is not None:
            for list_key in requested_lists:
                # Just some typing because we share the variable name in multiple places
                actual_list: Optional[SlidingSyncResult.SlidingWindowList] = None

                # A wildcard means we process rooms from all lists
                if list_key == "*":
                    for actual_list in actual_lists.values():
                        # We only expect a single SYNC operation for any list
                        assert len(actual_list.ops) == 1
                        sync_op = actual_list.ops[0]
                        assert sync_op.op == OperationType.SYNC

                        relevant_room_ids.update(sync_op.room_ids)

                    break

                actual_list = actual_lists.get(list_key)
                if actual_list is not None:
                    # We only expect a single SYNC operation for any list
                    assert len(actual_list.ops) == 1
                    sync_op = actual_list.ops[0]
                    assert sync_op.op == OperationType.SYNC

                    relevant_room_ids.update(sync_op.room_ids)

        return relevant_room_ids

    @trace
    async def get_to_device_extension_response(
        self,
        sync_config: SlidingSyncConfig,
        to_device_request: SlidingSyncConfig.Extensions.ToDeviceExtension,
        to_token: StreamToken,
    ) -> Optional[SlidingSyncResult.Extensions.ToDeviceExtension]:
        """Handle to-device extension (MSC3885)

        Args:
            sync_config: Sync configuration
            to_device_request: The to-device extension from the request
            to_token: The point in the stream to sync up to.
        """
        user_id = sync_config.user.to_string()
        device_id = sync_config.requester.device_id

        # Skip if the extension is not enabled
        if not to_device_request.enabled:
            return None

        # Check that this request has a valid device ID (not all requests have
        # to belong to a device, and so device_id is None)
        if device_id is None:
            return SlidingSyncResult.Extensions.ToDeviceExtension(
                next_batch=f"{to_token.to_device_key}",
                events=[],
            )

        since_stream_id = 0
        if to_device_request.since is not None:
            # We've already validated this is an int.
            since_stream_id = int(to_device_request.since)

            if to_token.to_device_key < since_stream_id:
                # The since token is ahead of our current token, so we return an
                # empty response.
                logger.warning(
                    "Got to-device.since from the future. since token: %r is ahead of our current to_device stream position: %r",
                    since_stream_id,
                    to_token.to_device_key,
                )
                return SlidingSyncResult.Extensions.ToDeviceExtension(
                    next_batch=to_device_request.since,
                    events=[],
                )

            # Delete everything before the given since token, as we know the
            # device must have received them.
            deleted = await self.store.delete_messages_for_device(
                user_id=user_id,
                device_id=device_id,
                up_to_stream_id=since_stream_id,
            )

            logger.debug(
                "Deleted %d to-device messages up to %d for %s",
                deleted,
                since_stream_id,
                user_id,
            )

        messages, stream_id = await self.store.get_messages_for_device(
            user_id=user_id,
            device_id=device_id,
            from_stream_id=since_stream_id,
            to_stream_id=to_token.to_device_key,
            limit=min(to_device_request.limit, 100),  # Limit to at most 100 events
        )

        return SlidingSyncResult.Extensions.ToDeviceExtension(
            next_batch=f"{stream_id}",
            events=messages,
        )

    @trace
    async def get_e2ee_extension_response(
        self,
        sync_config: SlidingSyncConfig,
        e2ee_request: SlidingSyncConfig.Extensions.E2eeExtension,
        to_token: StreamToken,
        from_token: Optional[SlidingSyncStreamToken],
    ) -> Optional[SlidingSyncResult.Extensions.E2eeExtension]:
        """Handle E2EE device extension (MSC3884)

        Args:
            sync_config: Sync configuration
            e2ee_request: The e2ee extension from the request
            to_token: The point in the stream to sync up to.
            from_token: The point in the stream to sync from.
        """
        user_id = sync_config.user.to_string()
        device_id = sync_config.requester.device_id

        # Skip if the extension is not enabled
        if not e2ee_request.enabled:
            return None

        device_list_updates: Optional[DeviceListUpdates] = None
        if from_token is not None:
            # TODO: This should take into account the `from_token` and `to_token`
            device_list_updates = await self.device_handler.get_user_ids_changed(
                user_id=user_id,
                from_token=from_token.stream_token,
            )

        device_one_time_keys_count: Mapping[str, int] = {}
        device_unused_fallback_key_types: Sequence[str] = []
        if device_id:
            # TODO: We should have a way to let clients differentiate between the states of:
            #   * no change in OTK count since the provided since token
            #   * the server has zero OTKs left for this device
            # Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
            device_one_time_keys_count = await self.store.count_e2e_one_time_keys(
                user_id, device_id
            )
            device_unused_fallback_key_types = (
                await self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
            )

        return SlidingSyncResult.Extensions.E2eeExtension(
            device_list_updates=device_list_updates,
            device_one_time_keys_count=device_one_time_keys_count,
            device_unused_fallback_key_types=device_unused_fallback_key_types,
        )

    @trace
    async def get_account_data_extension_response(
        self,
        sync_config: SlidingSyncConfig,
        actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
        actual_room_ids: Set[str],
        account_data_request: SlidingSyncConfig.Extensions.AccountDataExtension,
        to_token: StreamToken,
        from_token: Optional[SlidingSyncStreamToken],
    ) -> Optional[SlidingSyncResult.Extensions.AccountDataExtension]:
        """Handle Account Data extension (MSC3959)

        Args:
            sync_config: Sync configuration
            actual_lists: Sliding window API. A map of list key to list results in the
                Sliding Sync response.
            actual_room_ids: The actual room IDs in the the Sliding Sync response.
            account_data_request: The account_data extension from the request
            to_token: The point in the stream to sync up to.
            from_token: The point in the stream to sync from.
        """
        user_id = sync_config.user.to_string()

        # Skip if the extension is not enabled
        if not account_data_request.enabled:
            return None

        global_account_data_map: Mapping[str, JsonMapping] = {}
        if from_token is not None:
            # TODO: This should take into account the `from_token` and `to_token`
            global_account_data_map = (
                await self.store.get_updated_global_account_data_for_user(
                    user_id, from_token.stream_token.account_data_key
                )
            )

            have_push_rules_changed = await self.store.have_push_rules_changed_for_user(
                user_id, from_token.stream_token.push_rules_key
            )
            if have_push_rules_changed:
                global_account_data_map = dict(global_account_data_map)
                # TODO: This should take into account the `from_token` and `to_token`
                global_account_data_map[AccountDataTypes.PUSH_RULES] = (
                    await self.push_rules_handler.push_rules_for_user(sync_config.user)
                )
        else:
            # TODO: This should take into account the `to_token`
            all_global_account_data = await self.store.get_global_account_data_for_user(
                user_id
            )

            global_account_data_map = dict(all_global_account_data)
            # TODO: This should take into account the `to_token`
            global_account_data_map[AccountDataTypes.PUSH_RULES] = (
                await self.push_rules_handler.push_rules_for_user(sync_config.user)
            )

        # Fetch room account data
        account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]] = {}
        relevant_room_ids = self.find_relevant_room_ids_for_extension(
            requested_lists=account_data_request.lists,
            requested_room_ids=account_data_request.rooms,
            actual_lists=actual_lists,
            actual_room_ids=actual_room_ids,
        )
        if len(relevant_room_ids) > 0:
            if from_token is not None:
                # TODO: This should take into account the `from_token` and `to_token`
                account_data_by_room_map = (
                    await self.store.get_updated_room_account_data_for_user(
                        user_id, from_token.stream_token.account_data_key
                    )
                )
            else:
                # TODO: This should take into account the `to_token`
                account_data_by_room_map = (
                    await self.store.get_room_account_data_for_user(user_id)
                )

            # Filter down to the relevant rooms
            account_data_by_room_map = {
                room_id: account_data_map
                for room_id, account_data_map in account_data_by_room_map.items()
                if room_id in relevant_room_ids
            }

        return SlidingSyncResult.Extensions.AccountDataExtension(
            global_account_data_map=global_account_data_map,
            account_data_by_room_map=account_data_by_room_map,
        )

    @trace
    async def get_receipts_extension_response(
        self,
        sync_config: SlidingSyncConfig,
        previous_connection_state: "PerConnectionState",
        new_connection_state: "MutablePerConnectionState",
        actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
        actual_room_ids: Set[str],
        actual_room_response_map: Mapping[str, SlidingSyncResult.RoomResult],
        receipts_request: SlidingSyncConfig.Extensions.ReceiptsExtension,
        to_token: StreamToken,
        from_token: Optional[SlidingSyncStreamToken],
    ) -> Optional[SlidingSyncResult.Extensions.ReceiptsExtension]:
        """Handle Receipts extension (MSC3960)

        Args:
            sync_config: Sync configuration
            previous_connection_state: The current per-connection state
            new_connection_state: A mutable copy of the per-connection
                state, used to record updates to the state.
            actual_lists: Sliding window API. A map of list key to list results in the
                Sliding Sync response.
            actual_room_ids: The actual room IDs in the the Sliding Sync response.
            actual_room_response_map: A map of room ID to room results in the the
                Sliding Sync response.
            account_data_request: The account_data extension from the request
            to_token: The point in the stream to sync up to.
            from_token: The point in the stream to sync from.
        """
        # Skip if the extension is not enabled
        if not receipts_request.enabled:
            return None

        relevant_room_ids = self.find_relevant_room_ids_for_extension(
            requested_lists=receipts_request.lists,
            requested_room_ids=receipts_request.rooms,
            actual_lists=actual_lists,
            actual_room_ids=actual_room_ids,
        )

        room_id_to_receipt_map: Dict[str, JsonMapping] = {}
        if len(relevant_room_ids) > 0:
            # We need to handle the different cases depending on if we have sent
            # down receipts previously or not, so we split the relevant rooms
            # up into different collections based on status.
            live_rooms = set()
            previously_rooms: Dict[str, MultiWriterStreamToken] = {}
            initial_rooms = set()

            for room_id in relevant_room_ids:
                if not from_token:
                    initial_rooms.add(room_id)
                    continue

                # If we're sending down the room from scratch again for some
                # reason, we should always resend the receipts as well
                # (regardless of if we've sent them down before). This is to
                # mimic the behaviour of what happens on initial sync, where you
                # get a chunk of timeline with all of the corresponding receipts
                # for the events in the timeline.
                #
                # We also resend down receipts when we "expand" the timeline,
                # (see the "XXX: Odd behavior" in
                # `synapse.handlers.sliding_sync`).
                room_result = actual_room_response_map.get(room_id)
                if room_result is not None:
                    if room_result.initial or room_result.unstable_expanded_timeline:
                        initial_rooms.add(room_id)
                        continue

                room_status = previous_connection_state.receipts.have_sent_room(room_id)
                if room_status.status == HaveSentRoomFlag.LIVE:
                    live_rooms.add(room_id)
                elif room_status.status == HaveSentRoomFlag.PREVIOUSLY:
                    assert room_status.last_token is not None
                    previously_rooms[room_id] = room_status.last_token
                elif room_status.status == HaveSentRoomFlag.NEVER:
                    initial_rooms.add(room_id)
                else:
                    assert_never(room_status.status)

            # The set of receipts that we fetched. Private receipts need to be
            # filtered out before returning.
            fetched_receipts = []

            # For live rooms we just fetch all receipts in those rooms since the
            # `since` token.
            if live_rooms:
                assert from_token is not None
                receipts = await self.store.get_linearized_receipts_for_rooms(
                    room_ids=live_rooms,
                    from_key=from_token.stream_token.receipt_key,
                    to_key=to_token.receipt_key,
                )
                fetched_receipts.extend(receipts)

            # For rooms we've previously sent down, but aren't up to date, we
            # need to use the from token from the room status.
            if previously_rooms:
                for room_id, receipt_token in previously_rooms.items():
                    # TODO: Limit the number of receipts we're about to send down
                    # for the room, if its too many we should TODO
                    previously_receipts = (
                        await self.store.get_linearized_receipts_for_room(
                            room_id=room_id,
                            from_key=receipt_token,
                            to_key=to_token.receipt_key,
                        )
                    )
                    fetched_receipts.extend(previously_receipts)

            if initial_rooms:
                # We also always send down receipts for the current user.
                user_receipts = (
                    await self.store.get_linearized_receipts_for_user_in_rooms(
                        user_id=sync_config.user.to_string(),
                        room_ids=initial_rooms,
                        to_key=to_token.receipt_key,
                    )
                )

                # For rooms we haven't previously sent down, we could send all receipts
                # from that room but we only want to include receipts for events
                # in the timeline to avoid bloating and blowing up the sync response
                # as the number of users in the room increases. (this behavior is part of the spec)
                initial_rooms_and_event_ids = [
                    (room_id, event.event_id)
                    for room_id in initial_rooms
                    if room_id in actual_room_response_map
                    for event in actual_room_response_map[room_id].timeline_events
                ]
                initial_receipts = await self.store.get_linearized_receipts_for_events(
                    room_and_event_ids=initial_rooms_and_event_ids,
                )

                # Combine the receipts for a room and add them to
                # `fetched_receipts`
                for room_id in initial_receipts.keys() | user_receipts.keys():
                    receipt_content = ReceiptInRoom.merge_to_content(
                        list(
                            itertools.chain(
                                initial_receipts.get(room_id, []),
                                user_receipts.get(room_id, []),
                            )
                        )
                    )

                    fetched_receipts.append(
                        {
                            "room_id": room_id,
                            "type": EduTypes.RECEIPT,
                            "content": receipt_content,
                        }
                    )

            fetched_receipts = ReceiptEventSource.filter_out_private_receipts(
                fetched_receipts, sync_config.user.to_string()
            )

            for receipt in fetched_receipts:
                # These fields should exist for every receipt
                room_id = receipt["room_id"]
                type = receipt["type"]
                content = receipt["content"]

                room_id_to_receipt_map[room_id] = {"type": type, "content": content}

        # Now we update the per-connection state to track which receipts we have
        # and haven't sent down.
        new_connection_state.receipts.record_sent_rooms(relevant_room_ids)

        if from_token:
            # Now find the set of rooms that may have receipts that we're not sending
            # down. We only need to check rooms that we have previously returned
            # receipts for (in `previous_connection_state`) because we only care about
            # updating `LIVE` rooms to `PREVIOUSLY`. The `PREVIOUSLY` rooms will just
            # stay pointing at their previous position so we don't need to waste time
            # checking those and since we default to `NEVER`, rooms that were `NEVER`
            # sent before don't need to be recorded as we'll handle them correctly when
            # they come into range for the first time.
            rooms_no_receipts = [
                room_id
                for room_id, room_status in previous_connection_state.receipts._statuses.items()
                if room_status.status == HaveSentRoomFlag.LIVE
                and room_id not in relevant_room_ids
            ]
            changed_rooms = await self.store.get_rooms_with_receipts_between(
                rooms_no_receipts,
                from_key=from_token.stream_token.receipt_key,
                to_key=to_token.receipt_key,
            )
            new_connection_state.receipts.record_unsent_rooms(
                changed_rooms, from_token.stream_token.receipt_key
            )

        return SlidingSyncResult.Extensions.ReceiptsExtension(
            room_id_to_receipt_map=room_id_to_receipt_map,
        )

    async def get_typing_extension_response(
        self,
        sync_config: SlidingSyncConfig,
        actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
        actual_room_ids: Set[str],
        actual_room_response_map: Mapping[str, SlidingSyncResult.RoomResult],
        typing_request: SlidingSyncConfig.Extensions.TypingExtension,
        to_token: StreamToken,
        from_token: Optional[SlidingSyncStreamToken],
    ) -> Optional[SlidingSyncResult.Extensions.TypingExtension]:
        """Handle Typing Notification extension (MSC3961)

        Args:
            sync_config: Sync configuration
            actual_lists: Sliding window API. A map of list key to list results in the
                Sliding Sync response.
            actual_room_ids: The actual room IDs in the the Sliding Sync response.
            actual_room_response_map: A map of room ID to room results in the the
                Sliding Sync response.
            account_data_request: The account_data extension from the request
            to_token: The point in the stream to sync up to.
            from_token: The point in the stream to sync from.
        """
        # Skip if the extension is not enabled
        if not typing_request.enabled:
            return None

        relevant_room_ids = self.find_relevant_room_ids_for_extension(
            requested_lists=typing_request.lists,
            requested_room_ids=typing_request.rooms,
            actual_lists=actual_lists,
            actual_room_ids=actual_room_ids,
        )

        room_id_to_typing_map: Dict[str, JsonMapping] = {}
        if len(relevant_room_ids) > 0:
            # Note: We don't need to take connection tracking into account for typing
            # notifications because they'll get anything still relevant and hasn't timed
            # out when the room comes into range. We consider the gap where the room
            # fell out of range, as long enough for any typing notifications to have
            # timed out (it's not worth the 30 seconds of data we may have missed).
            typing_source = self.event_sources.sources.typing
            typing_notifications, _ = await typing_source.get_new_events(
                user=sync_config.user,
                from_key=(from_token.stream_token.typing_key if from_token else 0),
                to_key=to_token.typing_key,
                # This is a dummy value and isn't used in the function
                limit=0,
                room_ids=relevant_room_ids,
                is_guest=False,
            )

            for typing_notification in typing_notifications:
                # These fields should exist for every typing notification
                room_id = typing_notification["room_id"]
                type = typing_notification["type"]
                content = typing_notification["content"]

                room_id_to_typing_map[room_id] = {"type": type, "content": content}

        return SlidingSyncResult.Extensions.TypingExtension(
            room_id_to_typing_map=room_id_to_typing_map,
        )
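The reserved `lists`/`rooms` matching in `find_relevant_room_ids_for_extension` above is the core of how every extension scopes its work to the response. A simplified, self-contained sketch of the same matching rules (the helper `relevant_rooms` and its flat `Set[str]` list values are stand-ins for this example, not the handler's real types):

from typing import AbstractSet, Mapping, Optional, Sequence, Set


def relevant_rooms(
    requested_lists: Optional[Sequence[str]],
    requested_rooms: Optional[Sequence[str]],
    actual_lists: Mapping[str, Set[str]],
    actual_rooms: AbstractSet[str],
) -> Set[str]:
    """An extension only acts on rooms that are both requested and present
    in the Sliding Sync response, with "*" acting as a wildcard."""
    out: Set[str] = set()

    if requested_rooms is not None:
        for room_id in requested_rooms:
            if room_id == "*":
                out.update(actual_rooms)
                break
            if room_id in actual_rooms:
                out.add(room_id)

    if requested_lists is not None:
        for key in requested_lists:
            if key == "*":
                for rooms in actual_lists.values():
                    out.update(rooms)
                break
            out.update(actual_lists.get(key, set()))

    return out


# The default of ["*"] for both keys means "everything in the response":
assert relevant_rooms(["*"], ["*"], {"dms": {"!a:x"}}, {"!b:y"}) == {"!a:x", "!b:y"}
# An empty list opts out entirely:
assert relevant_rooms([], [], {"dms": {"!a:x"}}, {"!b:y"}) == set()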
1353  synapse/handlers/sliding_sync/room_lists.py  (new file; diff suppressed because it is too large)

128  synapse/handlers/sliding_sync/store.py  (new file)
@@ -0,0 +1,128 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#

import logging
from typing import TYPE_CHECKING, Optional

import attr

from synapse.logging.opentracing import trace
from synapse.storage.databases.main import DataStore
from synapse.types import SlidingSyncStreamToken
from synapse.types.handlers.sliding_sync import (
    MutablePerConnectionState,
    PerConnectionState,
    SlidingSyncConfig,
)

if TYPE_CHECKING:
    pass

logger = logging.getLogger(__name__)


@attr.s(auto_attribs=True)
class SlidingSyncConnectionStore:
    """In-memory store of per-connection state, including what rooms we have
    previously sent down a sliding sync connection.

    Note: This is NOT safe to run in a worker setup because connection positions will
    point to different sets of rooms on different workers. e.g. for the same connection,
    a connection position of 5 might have totally different states on worker A and
    worker B.

    One complication that we need to deal with here is needing to handle requests being
    resent, i.e. if we sent down a room in a response that the client received, we must
    consider the room *not* sent when we get the request again.

    This is handled by using an integer "token", which is returned to the client
    as part of the sync token. For each connection we store a mapping from
    tokens to the room states, and create a new entry when we send down new
    rooms.

    Note that for any given sliding sync connection we will only store a maximum
    of two different tokens: the previous token from the request and a new token
    sent in the response. When we receive a request with a given token, we then
    clear out all other entries with a different token.

    Attributes:
        _connections: Mapping from `(user_id, conn_id)` to mapping of `token`
            to mapping of room ID to `HaveSentRoom`.
    """

    store: "DataStore"

    async def get_and_clear_connection_positions(
        self,
        sync_config: SlidingSyncConfig,
        from_token: Optional[SlidingSyncStreamToken],
    ) -> PerConnectionState:
        """Fetch the per-connection state for the token.

        Raises:
            SlidingSyncUnknownPosition if the connection_token is unknown
        """
        # If this is our first request, there is no previous connection state to fetch out of the database
        if from_token is None or from_token.connection_position == 0:
            return PerConnectionState()

        conn_id = sync_config.conn_id or ""

        device_id = sync_config.requester.device_id
        assert device_id is not None

        return await self.store.get_and_clear_connection_positions(
            sync_config.user.to_string(),
            device_id,
            conn_id,
            from_token.connection_position,
        )

    @trace
    async def record_new_state(
        self,
        sync_config: SlidingSyncConfig,
        from_token: Optional[SlidingSyncStreamToken],
        new_connection_state: MutablePerConnectionState,
    ) -> int:
        """Record updated per-connection state, returning the connection
        position associated with the new state.

        If there are no changes to the state this may return the same token as
        the existing per-connection state.
        """
        if not new_connection_state.has_updates():
            if from_token is not None:
                return from_token.connection_position
            else:
                return 0

        # A from token with a zero connection position means there was no
        # previously stored connection state, so we treat a zero the same as
        # there being no previous position.
        previous_connection_position = None
        if from_token is not None and from_token.connection_position != 0:
            previous_connection_position = from_token.connection_position

        conn_id = sync_config.conn_id or ""

        device_id = sync_config.requester.device_id
        assert device_id is not None

        return await self.store.persist_per_connection_state(
            sync_config.user.to_string(),
            device_id,
            conn_id,
            previous_connection_position,
            new_connection_state,
        )
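The "at most two tokens per connection" scheme in the docstring above is what makes resent requests safe: confirming a position drops every other stored position, so state the client never acknowledged cannot leak forward. A toy in-memory model of that bookkeeping (the class `ConnectionPositions` and its integer positions are invented for this sketch, not the persistent store the diff adds):

from typing import Dict, Set


class ConnectionPositions:
    """Keep at most the position the client sent and the one we are about
    to return, so a resent request falls back to acknowledged state."""

    def __init__(self) -> None:
        self._states: Dict[int, Set[str]] = {}
        self._next_position = 0

    def get_and_clear(self, position: int) -> Set[str]:
        # Confirming `position` invalidates every other stored position.
        state = self._states.get(position, set())
        self._states = {position: state}
        return state

    def record(self, prev_position: int, sent_rooms: Set[str]) -> int:
        self._next_position += 1
        base = self._states.get(prev_position, set())
        self._states[self._next_position] = base | sent_rooms
        return self._next_position


conns = ConnectionPositions()
pos1 = conns.record(0, {"!a:x"})      # response 1 sends room !a:x
pos2 = conns.record(pos1, {"!b:y"})   # response 2 (maybe lost in transit)
# Client never saw response 2 and retries from pos1: pos2's state is dropped,
# so !b:y will be treated as "not sent" and re-sent.
assert conns.get_and_clear(pos1) == {"!a:x"}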
@@ -1756,8 +1756,10 @@ class MatrixFederationHttpClient:
             request.destination,
             str_url,
         )
+        # We don't know how large the response will be upfront, so limit it to
+        # the `max_size` config value.
         length, headers, _, _ = await self._simple_http_client.get_file(
-            str_url, output_stream, expected_size
+            str_url, output_stream, max_size
         )

         logger.info(
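This is the fix for authenticated media redirects using the wrong limit: when a remote redirects us, the size it advertised (`expected_size`) cannot be trusted as a cap, so the download is bounded by the local `max_size` config value instead. A minimal sketch of the behaviour being restored (the helper `copy_bounded` is invented for this example; it is not Synapse's HTTP client):

import io


def copy_bounded(source: io.BufferedIOBase, sink: io.BufferedIOBase, max_size: int) -> int:
    """Stream a response whose true length is unknown up front, aborting
    once the locally configured `max_size` is exceeded."""
    total = 0
    while True:
        chunk = source.read(64 * 1024)
        if not chunk:
            return total
        total += len(chunk)
        if total > max_size:
            raise ValueError(f"response exceeded limit of {max_size} bytes")
        sink.write(chunk)


src = io.BytesIO(b"x" * 1000)
dst = io.BytesIO()
# A well-behaved 1000-byte response fits under a 2 KiB cap.
assert copy_bounded(src, dst, max_size=2048) == 1000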
@@ -1032,13 +1032,13 @@ def tag_args(func: Callable[P, R]) -> Callable[P, R]:
         def _wrapping_logic(
             _func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
         ) -> Generator[None, None, None]:
-            # We use `[1:]` to skip the `self` object reference and `start=1` to
-            # make the index line up with `argspec.args`.
-            #
-            # FIXME: We could update this to handle any type of function by ignoring the
-            # first argument only if it's named `self` or `cls`. This isn't fool-proof
-            # but handles the idiomatic cases.
-            for i, arg in enumerate(args[1:], start=1):
+            for i, arg in enumerate(args, start=0):
+                if argspec.args[i] in ("self", "cls"):
+                    # Ignore `self` and `cls` values. Ideally we'd properly detect
+                    # if we were wrapping a method, but that is really non-trivial
+                    # and this is good enough.
+                    continue
+
                 set_tag(SynapseTags.FUNC_ARG_PREFIX + argspec.args[i], str(arg))
             set_tag(SynapseTags.FUNC_ARGS, str(args[len(argspec.args) :]))
             set_tag(SynapseTags.FUNC_KWARGS, str(kwargs))
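The old code blindly sliced off `args[0]`, which assumed every traced callable was a method and dropped the first real argument of plain functions; the new code walks every positional argument and skips values by the parameter names `self`/`cls` instead. A standalone sketch of that approach (the function `tag_positional_args` is invented for this illustration; it only mimics the loop, not the tracing machinery):

import inspect


def tag_positional_args(func, args):
    """Name each positional argument via the argspec, skipping values bound
    to `self`/`cls` by name rather than by position."""
    argspec = inspect.getfullargspec(func)
    tags = {}
    for i, arg in enumerate(args, start=0):
        if i >= len(argspec.args):
            break  # extra args beyond the named parameters
        if argspec.args[i] in ("self", "cls"):
            continue
        tags[argspec.args[i]] = str(arg)
    return tags


class Greeter:
    def greet(self, name: str) -> None: ...


def add(a: int, b: int) -> int:
    return a + b


g = Greeter()
# Bound method: `self` is skipped by name...
assert tag_positional_args(Greeter.greet, (g, "alice")) == {"name": "alice"}
# ...and plain functions are no longer off by one.
assert tag_positional_args(add, (1, 2)) == {"a": "1", "b": "2"}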
@@ -60,8 +60,6 @@ from synapse.util.stringutils import is_ascii

 if TYPE_CHECKING:
     from synapse.server import HomeServer
-    from synapse.storage.databases.main.media_repository import LocalMedia
-

 logger = logging.getLogger(__name__)

@@ -293,7 +291,9 @@ async def respond_with_multipart_responder(
     clock: Clock,
     request: SynapseRequest,
     responder: "Optional[Responder]",
-    media_info: "LocalMedia",
+    media_type: str,
+    media_length: Optional[int],
+    upload_name: Optional[str],
 ) -> None:
     """
     Responds to requests originating from the federation media `/download` endpoint by
@@ -317,7 +317,7 @@ async def respond_with_multipart_responder(
         )
         return

-    if media_info.media_type.lower().split(";", 1)[0] in INLINE_CONTENT_TYPES:
+    if media_type.lower().split(";", 1)[0] in INLINE_CONTENT_TYPES:
         disposition = "inline"
     else:
         disposition = "attachment"
@@ -325,16 +325,16 @@ async def respond_with_multipart_responder(
     def _quote(x: str) -> str:
         return urllib.parse.quote(x.encode("utf-8"))

-    if media_info.upload_name:
-        if _can_encode_filename_as_token(media_info.upload_name):
+    if upload_name:
+        if _can_encode_filename_as_token(upload_name):
             disposition = "%s; filename=%s" % (
                 disposition,
-                media_info.upload_name,
+                upload_name,
             )
         else:
             disposition = "%s; filename*=utf-8''%s" % (
                 disposition,
-                _quote(media_info.upload_name),
+                _quote(upload_name),
             )

     from synapse.media.media_storage import MultipartFileConsumer
@@ -344,14 +344,14 @@ async def respond_with_multipart_responder(
     multipart_consumer = MultipartFileConsumer(
         clock,
         request,
-        media_info.media_type,
+        media_type,
         {},
         disposition,
-        media_info.media_length,
+        media_length,
     )

     logger.debug("Responding to media request with responder %s", responder)
-    if media_info.media_length is not None:
+    if media_length is not None:
         content_length = multipart_consumer.content_length()
         assert content_length is not None
         request.setHeader(b"Content-Length", b"%d" % (content_length,))
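Passing `media_type`/`media_length`/`upload_name` directly (instead of a whole `LocalMedia` record) lets thumbnail responses supply their own type and length, which is what fixes the federation `/thumbnail` Content-Length. The Content-Disposition logic the function keeps is worth seeing in isolation; a simplified sketch (the `content_disposition` helper and its crude token test are stand-ins for this example, looser than Synapse's `_can_encode_filename_as_token`):

import urllib.parse


def content_disposition(upload_name: str, inline: bool) -> str:
    """Plain token filenames go out as-is; anything else is RFC 5987
    percent-encoded via the filename* parameter."""
    disposition = "inline" if inline else "attachment"
    # Crude stand-in for a proper HTTP token check.
    if upload_name.isascii() and all(c not in ' ";' for c in upload_name):
        return "%s; filename=%s" % (disposition, upload_name)
    return "%s; filename*=utf-8''%s" % (
        disposition,
        urllib.parse.quote(upload_name.encode("utf-8")),
    )


assert content_disposition("cat.png", inline=True) == "inline; filename=cat.png"
assert (
    content_disposition("café menu.pdf", inline=False)
    == "attachment; filename*=utf-8''caf%C3%A9%20menu.pdf"
)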
@@ -471,7 +471,7 @@ class MediaRepository:
         responder = await self.media_storage.fetch_media(file_info)
         if federation:
             await respond_with_multipart_responder(
-                self.clock, request, responder, media_info
+                self.clock, request, responder, media_type, media_length, upload_name
             )
         else:
             await respond_with_responder(
@@ -1008,7 +1008,7 @@ class MediaRepository:
         t_method: str,
         t_type: str,
         url_cache: bool,
-    ) -> Optional[str]:
+    ) -> Optional[Tuple[str, FileInfo]]:
         input_path = await self.media_storage.ensure_media_is_in_local_cache(
             FileInfo(None, media_id, url_cache=url_cache)
         )
@@ -1070,7 +1070,7 @@ class MediaRepository:
                 t_len,
             )

-            return output_path
+            return output_path, file_info

         # Could not generate thumbnail.
         return None
@@ -348,7 +348,12 @@ class ThumbnailProvider:
             if responder:
                 if for_federation:
                     await respond_with_multipart_responder(
-                        self.hs.get_clock(), request, responder, media_info
+                        self.hs.get_clock(),
+                        request,
+                        responder,
+                        info.type,
+                        info.length,
+                        None,
                     )
                     return
                 else:
@@ -360,7 +365,7 @@ class ThumbnailProvider:
         logger.debug("We don't have a thumbnail of that size. Generating")

         # Okay, so we generate one.
-        file_path = await self.media_repo.generate_local_exact_thumbnail(
+        thumbnail_result = await self.media_repo.generate_local_exact_thumbnail(
             media_id,
             desired_width,
             desired_height,
@@ -369,13 +374,18 @@ class ThumbnailProvider:
             url_cache=bool(media_info.url_cache),
         )

-        if file_path:
+        if thumbnail_result:
+            file_path, file_info = thumbnail_result
+            assert file_info.thumbnail is not None
+
             if for_federation:
                 await respond_with_multipart_responder(
                     self.hs.get_clock(),
                     request,
                     FileResponder(self.hs, open(file_path, "rb")),
-                    media_info,
+                    file_info.thumbnail.type,
+                    file_info.thumbnail.length,
+                    None,
                 )
             else:
                 await respond_with_file(self.hs, request, desired_type, file_path)
@@ -580,7 +590,12 @@ class ThumbnailProvider:
             if for_federation:
                 assert media_info is not None
                 await respond_with_multipart_responder(
-                    self.hs.get_clock(), request, responder, media_info
+                    self.hs.get_clock(),
+                    request,
+                    responder,
+                    file_info.thumbnail.type,
+                    file_info.thumbnail.length,
+                    None,
                 )
                 return
             else:
@@ -634,7 +649,12 @@ class ThumbnailProvider:
             if for_federation:
                 assert media_info is not None
                 await respond_with_multipart_responder(
-                    self.hs.get_clock(), request, responder, media_info
+                    self.hs.get_clock(),
+                    request,
+                    responder,
+                    file_info.thumbnail.type,
+                    file_info.thumbnail.length,
+                    None,
                 )
             else:
                 await respond_with_responder(
@@ -20,14 +20,14 @@
 #

 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, cast

 from twisted.web.server import Request

 from synapse.api.constants import LoginType
 from synapse.api.errors import LoginError, SynapseError
 from synapse.api.urls import CLIENT_API_PREFIX
-from synapse.http.server import HttpServer, respond_with_html
+from synapse.http.server import HttpServer, respond_with_html, respond_with_redirect
 from synapse.http.servlet import RestServlet, parse_string
 from synapse.http.site import SynapseRequest

@@ -66,6 +66,23 @@ class AuthRestServlet(RestServlet):
         if not session:
             raise SynapseError(400, "No session supplied")

+        if (
+            self.hs.config.experimental.msc3861.enabled
+            and stagetype == "org.matrix.cross_signing_reset"
+        ):
+            # If MSC3861 is enabled, we can assume self._auth is an instance of MSC3861DelegatedAuth
+            # We import lazily here because of the authlib requirement
+            from synapse.api.auth.msc3861_delegated import MSC3861DelegatedAuth
+
+            auth = cast(MSC3861DelegatedAuth, self.auth)
+
+            url = await auth.account_management_url()
+            if url is not None:
+                url = f"{url}?action=org.matrix.cross_signing_reset"
+            else:
+                url = await auth.issuer()
+            respond_with_redirect(request, str.encode(url))
+
         if stagetype == LoginType.RECAPTCHA:
             html = self.recaptcha_template.render(
                 session=session,
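The web fallback above redirects the user to wherever the cross-signing reset can actually be approved: the account management page with the reset action if one is advertised, otherwise the bare issuer. Isolated as a tiny pure function for illustration (the name `cross_signing_reset_url` is invented for this sketch):

from typing import Optional


def cross_signing_reset_url(account_management_url: Optional[str], issuer: str) -> str:
    """Pick the redirect target for the org.matrix.cross_signing_reset stage."""
    if account_management_url is not None:
        return f"{account_management_url}?action=org.matrix.cross_signing_reset"
    return issuer


assert (
    cross_signing_reset_url("https://auth.example.com/account", "https://auth.example.com/")
    == "https://auth.example.com/account?action=org.matrix.cross_signing_reset"
)
assert cross_signing_reset_url(None, "https://auth.example.com/") == "https://auth.example.com/"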
@@ -13,7 +13,7 @@
 # limitations under the License.
 import logging
 import typing
-from typing import Tuple
+from typing import Tuple, cast

 from synapse.api.errors import Codes, SynapseError
 from synapse.http.server import HttpServer
@@ -43,10 +43,16 @@ class AuthIssuerServlet(RestServlet):
     def __init__(self, hs: "HomeServer"):
         super().__init__()
         self._config = hs.config
+        self._auth = hs.get_auth()

     async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         if self._config.experimental.msc3861.enabled:
-            return 200, {"issuer": self._config.experimental.msc3861.issuer}
+            # If MSC3861 is enabled, we can assume self._auth is an instance of MSC3861DelegatedAuth
+            # We import lazily here because of the authlib requirement
+            from synapse.api.auth.msc3861_delegated import MSC3861DelegatedAuth
+
+            auth = cast(MSC3861DelegatedAuth, self._auth)
+            return 200, {"issuer": await auth.issuer()}
         else:
             # Wouldn't expect this to be reached: the servelet shouldn't have been
             # registered. Still, fail gracefully if we are registered for some reason.
@@ -23,10 +23,13 @@
 import logging
 import re
 from collections import Counter
-from http import HTTPStatus
-from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple
+from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple, cast

-from synapse.api.errors import Codes, InvalidAPICallError, SynapseError
+from synapse.api.errors import (
+    InteractiveAuthIncompleteError,
+    InvalidAPICallError,
+    SynapseError,
+)
 from synapse.http.server import HttpServer
 from synapse.http.servlet import (
     RestServlet,
@@ -403,17 +406,36 @@ class SigningKeyUploadServlet(RestServlet):
             # explicitly mark the master key as replaceable.
             if self.hs.config.experimental.msc3861.enabled:
                 if not master_key_updatable_without_uia:
-                    config = self.hs.config.experimental.msc3861
-                    if config.account_management_url is not None:
-                        url = f"{config.account_management_url}?action=org.matrix.cross_signing_reset"
-                    else:
-                        url = config.issuer
+                    # If MSC3861 is enabled, we can assume self.auth is an instance of MSC3861DelegatedAuth
+                    # We import lazily here because of the authlib requirement
+                    from synapse.api.auth.msc3861_delegated import MSC3861DelegatedAuth
+
+                    auth = cast(MSC3861DelegatedAuth, self.auth)
+
+                    uri = await auth.account_management_url()
+                    if uri is not None:
+                        url = f"{uri}?action=org.matrix.cross_signing_reset"
+                    else:
+                        url = await auth.issuer()

-                    raise SynapseError(
-                        HTTPStatus.NOT_IMPLEMENTED,
-                        "To reset your end-to-end encryption cross-signing identity, "
-                        f"you first need to approve it at {url} and then try again.",
-                        Codes.UNRECOGNIZED,
+                    # We use a dummy session ID as this isn't really a UIA flow, but we
+                    # reuse the same API shape for better client compatibility.
+                    raise InteractiveAuthIncompleteError(
+                        "dummy",
+                        {
+                            "session": "dummy",
+                            "flows": [
+                                {"stages": ["org.matrix.cross_signing_reset"]},
+                            ],
+                            "params": {
+                                "org.matrix.cross_signing_reset": {
+                                    "url": url,
+                                },
+                            },
+                            "msg": "To reset your end-to-end encryption cross-signing "
+                            f"identity, you first need to approve it at {url} and "
+                            "then try again.",
+                        },
                     )
             else:
                 # Without MSC3861, we require UIA.
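Instead of a hard 501, the cross-signing upload now fails with a UIA-shaped 401 carrying a custom stage, so existing clients can drive the approval flow. Approximately what a client sees in the response body (the issuer URL and `"dummy"` session value are illustrative):

import json

body = {
    "session": "dummy",
    "flows": [{"stages": ["org.matrix.cross_signing_reset"]}],
    "params": {
        "org.matrix.cross_signing_reset": {
            "url": "https://auth.example.com/account?action=org.matrix.cross_signing_reset",
        },
    },
    "msg": (
        "To reset your end-to-end encryption cross-signing identity, "
        "you first need to approve it at "
        "https://auth.example.com/account?action=org.matrix.cross_signing_reset "
        "and then try again."
    ),
}
print(json.dumps(body, indent=2))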
@ -268,7 +268,7 @@ class LoginRestServlet(RestServlet):
|
|||||||
approval_notice_medium=ApprovalNoticeMedium.NONE,
|
approval_notice_medium=ApprovalNoticeMedium.NONE,
|
||||||
)
|
)
|
||||||
|
|
||||||
well_known_data = self._well_known_builder.get_well_known()
|
well_known_data = await self._well_known_builder.get_well_known()
|
||||||
if well_known_data:
|
if well_known_data:
|
||||||
result["well_known"] = well_known_data
|
result["well_known"] = well_known_data
|
||||||
return 200, result
|
return 200, result
|
||||||
@ -21,7 +21,7 @@
 import itertools
 import logging
 from collections import defaultdict
-from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
+from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Tuple, Union
 
 from synapse.api.constants import AccountDataTypes, EduTypes, Membership, PresenceState
 from synapse.api.errors import Codes, StoreError, SynapseError
@ -975,7 +975,7 @@ class SlidingSyncRestServlet(RestServlet):
         return response
 
     def encode_lists(
-        self, lists: Dict[str, SlidingSyncResult.SlidingWindowList]
+        self, lists: Mapping[str, SlidingSyncResult.SlidingWindowList]
     ) -> JsonDict:
         def encode_operation(
             operation: SlidingSyncResult.SlidingWindowList.Operation,
@ -28,7 +28,7 @@ from synapse._pydantic_compat import HAS_PYDANTIC_V2
 if TYPE_CHECKING or HAS_PYDANTIC_V2:
     from pydantic.v1 import Extra, StrictInt, StrictStr
 else:
-    from pydantic import StrictInt, StrictStr, Extra
+    from pydantic import Extra, StrictInt, StrictStr
 
 from signedjson.sign import sign_json
 
@ -18,12 +18,13 @@
 #
 #
 import logging
-from typing import TYPE_CHECKING, Optional
+from typing import TYPE_CHECKING, Optional, Tuple, cast
 
 from twisted.web.resource import Resource
 from twisted.web.server import Request
 
-from synapse.http.server import set_cors_headers
+from synapse.api.errors import NotFoundError
+from synapse.http.server import DirectServeJsonResource
 from synapse.http.site import SynapseRequest
 from synapse.types import JsonDict
 from synapse.util import json_encoder
@ -38,8 +39,9 @@ logger = logging.getLogger(__name__)
 class WellKnownBuilder:
     def __init__(self, hs: "HomeServer"):
         self._config = hs.config
+        self._auth = hs.get_auth()
 
-    def get_well_known(self) -> Optional[JsonDict]:
+    async def get_well_known(self) -> Optional[JsonDict]:
         if not self._config.server.serve_client_wellknown:
             return None
 
@ -52,13 +54,20 @@ class WellKnownBuilder:
 
         # We use the MSC3861 values as they are used by multiple MSCs
         if self._config.experimental.msc3861.enabled:
+            # If MSC3861 is enabled, we can assume self._auth is an instance of MSC3861DelegatedAuth
+            # We import lazily here because of the authlib requirement
+            from synapse.api.auth.msc3861_delegated import MSC3861DelegatedAuth
+
+            auth = cast(MSC3861DelegatedAuth, self._auth)
+
             result["org.matrix.msc2965.authentication"] = {
-                "issuer": self._config.experimental.msc3861.issuer
+                "issuer": await auth.issuer(),
             }
-            if self._config.experimental.msc3861.account_management_url is not None:
+            account_management_url = await auth.account_management_url()
+            if account_management_url is not None:
                 result["org.matrix.msc2965.authentication"][
                     "account"
-                ] = self._config.experimental.msc3861.account_management_url
+                ] = account_management_url
 
         if self._config.server.extra_well_known_client_content:
             for (
@ -71,26 +80,22 @@ class WellKnownBuilder:
         return result
 
 
-class ClientWellKnownResource(Resource):
+class ClientWellKnownResource(DirectServeJsonResource):
     """A Twisted web resource which renders the .well-known/matrix/client file"""
 
     isLeaf = 1
 
     def __init__(self, hs: "HomeServer"):
-        Resource.__init__(self)
+        super().__init__()
         self._well_known_builder = WellKnownBuilder(hs)
 
-    def render_GET(self, request: SynapseRequest) -> bytes:
-        set_cors_headers(request)
-        r = self._well_known_builder.get_well_known()
+    async def _async_render_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        r = await self._well_known_builder.get_well_known()
         if not r:
-            request.setResponseCode(404)
-            request.setHeader(b"Content-Type", b"text/plain")
-            return b".well-known not available"
+            raise NotFoundError(".well-known not available")
 
         logger.debug("returning: %s", r)
-        request.setHeader(b"Content-Type", b"application/json")
-        return json_encoder.encode(r).encode("utf-8")
 
+        return 200, r
 
 
 class ServerWellKnownResource(Resource):
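With this change the well-known document is assembled from the delegated-auth helper at request time instead of straight from static config. Purely as a hedged sketch (hostnames invented, assuming MSC3861 is enabled and the auth server advertises an account management URL), the served payload ends up shaped roughly like:

```python
# Not from the diff: an assumed example of the resulting JSON, as a dict.
well_known = {
    "m.homeserver": {"base_url": "https://matrix.example.com"},
    "org.matrix.msc2965.authentication": {
        "issuer": "https://auth.example.com/",
        # Only present when an account management URL is advertised.
        "account": "https://account.example.com/account",
    },
}
```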
@ -23,8 +23,11 @@ import logging
 from abc import ABCMeta
 from typing import TYPE_CHECKING, Any, Collection, Dict, Iterable, Optional, Union
 
-from synapse.storage.database import make_in_list_sql_clause  # noqa: F401
-from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
+from synapse.storage.database import (
+    DatabasePool,
+    LoggingDatabaseConnection,
+    make_in_list_sql_clause,  # noqa: F401
+)
 from synapse.types import get_domain_from_id
 from synapse.util import json_decoder
 from synapse.util.caches.descriptors import CachedFunction
@ -64,6 +64,7 @@ from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.background_updates import BackgroundUpdater
 from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine
 from synapse.storage.types import Connection, Cursor, SQLQueryParameters
+from synapse.types import StrCollection
 from synapse.util.async_helpers import delay_cancellation
 from synapse.util.iterutils import batch_iter
 
@ -1095,6 +1096,48 @@ class DatabasePool:
 
         txn.execute(sql, vals)
 
+    @staticmethod
+    def simple_insert_returning_txn(
+        txn: LoggingTransaction,
+        table: str,
+        values: Dict[str, Any],
+        returning: StrCollection,
+    ) -> Tuple[Any, ...]:
+        """Executes a `INSERT INTO... RETURNING...` statement (or equivalent for
+        SQLite versions that don't support it).
+        """
+
+        if txn.database_engine.supports_returning:
+            sql = "INSERT INTO %s (%s) VALUES(%s) RETURNING %s" % (
+                table,
+                ", ".join(k for k in values.keys()),
+                ", ".join("?" for _ in values.keys()),
+                ", ".join(k for k in returning),
+            )
+
+            txn.execute(sql, list(values.values()))
+            row = txn.fetchone()
+            assert row is not None
+            return row
+        else:
+            # For old versions of SQLite we do a standard insert and then can
+            # use `last_insert_rowid` to get at the row we just inserted
+            DatabasePool.simple_insert_txn(
+                txn,
+                table=table,
+                values=values,
+            )
+            txn.execute("SELECT last_insert_rowid()")
+            row = txn.fetchone()
+            assert row is not None
+            (rowid,) = row
+
+            row = DatabasePool.simple_select_one_txn(
+                txn, table=table, keyvalues={"rowid": rowid}, retcols=returning
+            )
+            assert row is not None
+            return row
+
     async def simple_insert_many(
         self,
         table: str,
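A hedged usage sketch for the new helper follows; the table and column names are invented. Note that the SQLite fallback re-selects the row by rowid, which assumes the table's integer primary key aliases SQLite's implicit rowid:

```python
from synapse.storage.database import DatabasePool, LoggingTransaction

def insert_widget_txn(txn: LoggingTransaction, name: str) -> int:
    # Insert and read back the generated key in one round trip (via RETURNING
    # on engines that support it, otherwise via last_insert_rowid()).
    (widget_id,) = DatabasePool.simple_insert_returning_txn(
        txn,
        table="widgets",           # illustrative table
        values={"name": name},
        returning=("widget_id",),  # illustrative auto-increment column
    )
    return widget_id
```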
@ -33,6 +33,7 @@ from synapse.storage.database import (
     LoggingDatabaseConnection,
     LoggingTransaction,
 )
+from synapse.storage.databases.main.sliding_sync import SlidingSyncStore
 from synapse.storage.databases.main.stats import UserSortOrder
 from synapse.storage.engines import BaseDatabaseEngine
 from synapse.storage.types import Cursor
@ -156,6 +157,7 @@ class DataStore(
     LockStore,
     SessionStore,
     TaskSchedulerWorkerStore,
+    SlidingSyncStore,
 ):
     def __init__(
         self,
@ -313,6 +313,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
             "get_unread_event_push_actions_by_room_for_user", (room_id,)
         )
 
+        self._attempt_to_invalidate_cache("_get_max_event_pos", (room_id,))
+
         # The `_get_membership_from_event_id` is immutable, except for the
         # case where we look up an event *before* persisting it.
         self._attempt_to_invalidate_cache("_get_membership_from_event_id", (event_id,))
@ -404,6 +406,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         )
         self._attempt_to_invalidate_cache("get_relations_for_event", (room_id,))
 
+        self._attempt_to_invalidate_cache("_get_max_event_pos", (room_id,))
+
         self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)
         self._attempt_to_invalidate_cache("get_applicable_edit", None)
         self._attempt_to_invalidate_cache("get_thread_id", None)
@ -476,6 +480,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         self._attempt_to_invalidate_cache("get_room_type", (room_id,))
         self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
 
+        self._attempt_to_invalidate_cache("_get_max_event_pos", (room_id,))
+
         # And delete state caches.
 
         self._invalidate_state_caches_all(room_id)
@ -30,10 +30,12 @@ from typing import (
     Mapping,
     Optional,
     Sequence,
+    Set,
     Tuple,
     cast,
 )
 
+import attr
 from immutabledict import immutabledict
 
 from synapse.api.constants import EduTypes
@ -65,6 +67,57 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)
 
 
+@attr.s(auto_attribs=True, slots=True, frozen=True)
+class ReceiptInRoom:
+    receipt_type: str
+    user_id: str
+    event_id: str
+    thread_id: Optional[str]
+    data: JsonMapping
+
+    @staticmethod
+    def merge_to_content(receipts: Collection["ReceiptInRoom"]) -> JsonMapping:
+        """Merge the given set of receipts (in a room) into the receipt
+        content format.
+
+        Returns:
+            A mapping of the combined receipts: event ID -> receipt type -> user
+            ID -> receipt data.
+        """
+        # MSC4102: always replace threaded receipts with unthreaded ones if
+        # there is a clash. This means we will drop some receipts, but MSC4102
+        # is designed to drop semantically meaningless receipts, so this is
+        # okay. Previously, we would drop meaningful data!
+        #
+        # We do this by finding the unthreaded receipts, and then filtering out
+        # matching threaded receipts.
+
+        # Set of (user_id, event_id)
+        unthreaded_receipts: Set[Tuple[str, str]] = {
+            (receipt.user_id, receipt.event_id)
+            for receipt in receipts
+            if receipt.thread_id is None
+        }
+
+        # event_id -> receipt_type -> user_id -> receipt data
+        content: Dict[str, Dict[str, Dict[str, JsonMapping]]] = {}
+        for receipt in receipts:
+            data = receipt.data
+            if receipt.thread_id is not None:
+                if (receipt.user_id, receipt.event_id) in unthreaded_receipts:
+                    # Ignore threaded receipts if we have an unthreaded one.
+                    continue
+
+                data = dict(data)
+                data["thread_id"] = receipt.thread_id
+
+            content.setdefault(receipt.event_id, {}).setdefault(
+                receipt.receipt_type, {}
+            )[receipt.user_id] = data
+
+        return content
+
+
 class ReceiptsWorkerStore(SQLBaseStore):
     def __init__(
         self,
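To make the MSC4102 clash rule concrete, here is a small worked example (IDs invented): a threaded and an unthreaded receipt from the same user on the same event collapse to the unthreaded one, while another user's threaded receipt survives with its thread_id folded into the data:

```python
receipts = [
    ReceiptInRoom("m.read", "@alice:example.org", "$evt", "thread_1", {"ts": 1}),
    ReceiptInRoom("m.read", "@alice:example.org", "$evt", None, {"ts": 2}),
    ReceiptInRoom("m.read", "@bob:example.org", "$evt", "thread_1", {"ts": 3}),
]
content = ReceiptInRoom.merge_to_content(receipts)
# Alice's threaded receipt is dropped in favour of her unthreaded one:
# {"$evt": {"m.read": {
#     "@alice:example.org": {"ts": 2},
#     "@bob:example.org": {"ts": 3, "thread_id": "thread_1"},
# }}}
```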
@ -401,7 +454,7 @@ class ReceiptsWorkerStore(SQLBaseStore):
|
|||||||
|
|
||||||
def f(
|
def f(
|
||||||
txn: LoggingTransaction,
|
txn: LoggingTransaction,
|
||||||
) -> List[Tuple[str, str, str, str, Optional[str], str]]:
|
) -> Mapping[str, Sequence[ReceiptInRoom]]:
|
||||||
if from_key:
|
if from_key:
|
||||||
sql = """
|
sql = """
|
||||||
SELECT stream_id, instance_name, room_id, receipt_type,
|
SELECT stream_id, instance_name, room_id, receipt_type,
|
||||||
@ -431,50 +484,46 @@ class ReceiptsWorkerStore(SQLBaseStore):
|
|||||||
|
|
||||||
txn.execute(sql + clause, [to_key.get_max_stream_pos()] + list(args))
|
txn.execute(sql + clause, [to_key.get_max_stream_pos()] + list(args))
|
||||||
|
|
||||||
return [
|
results: Dict[str, List[ReceiptInRoom]] = {}
|
||||||
(room_id, receipt_type, user_id, event_id, thread_id, data)
|
for (
|
||||||
for stream_id, instance_name, room_id, receipt_type, user_id, event_id, thread_id, data in txn
|
stream_id,
|
||||||
if MultiWriterStreamToken.is_stream_position_in_range(
|
instance_name,
|
||||||
|
room_id,
|
||||||
|
receipt_type,
|
||||||
|
user_id,
|
||||||
|
event_id,
|
||||||
|
thread_id,
|
||||||
|
data,
|
||||||
|
) in txn:
|
||||||
|
if not MultiWriterStreamToken.is_stream_position_in_range(
|
||||||
from_key, to_key, instance_name, stream_id
|
from_key, to_key, instance_name, stream_id
|
||||||
|
):
|
||||||
|
continue
|
||||||
|
|
||||||
|
results.setdefault(room_id, []).append(
|
||||||
|
ReceiptInRoom(
|
||||||
|
receipt_type=receipt_type,
|
||||||
|
user_id=user_id,
|
||||||
|
event_id=event_id,
|
||||||
|
thread_id=thread_id,
|
||||||
|
data=db_to_json(data),
|
||||||
)
|
)
|
||||||
]
|
)
|
||||||
|
|
||||||
|
return results
|
||||||
|
|
||||||
txn_results = await self.db_pool.runInteraction(
|
txn_results = await self.db_pool.runInteraction(
|
||||||
"_get_linearized_receipts_for_rooms", f
|
"_get_linearized_receipts_for_rooms", f
|
||||||
)
|
)
|
||||||
|
|
||||||
results: JsonDict = {}
|
results: JsonDict = {
|
||||||
for room_id, receipt_type, user_id, event_id, thread_id, data in txn_results:
|
room_id: {
|
||||||
# We want a single event per room, since we want to batch the
|
"room_id": room_id,
|
||||||
# receipts by room, event and type.
|
"type": EduTypes.RECEIPT,
|
||||||
room_event = results.setdefault(
|
"content": ReceiptInRoom.merge_to_content(receipts),
|
||||||
room_id,
|
}
|
||||||
{"type": EduTypes.RECEIPT, "room_id": room_id, "content": {}},
|
for room_id, receipts in txn_results.items()
|
||||||
)
|
}
|
||||||
|
|
||||||
# The content is of the form:
|
|
||||||
# {"$foo:bar": { "read": { "@user:host": <receipt> }, .. }, .. }
|
|
||||||
event_entry = room_event["content"].setdefault(event_id, {})
|
|
||||||
receipt_type_dict = event_entry.setdefault(receipt_type, {})
|
|
||||||
|
|
||||||
# MSC4102: always replace threaded receipts with unthreaded ones if there is a clash.
|
|
||||||
# Specifically:
|
|
||||||
# - if there is no existing receipt, great, set the data.
|
|
||||||
# - if there is an existing receipt, is it threaded (thread_id present)?
|
|
||||||
# YES: replace if this receipt has no thread id. NO: do not replace.
|
|
||||||
# This means we will drop some receipts, but MSC4102 is designed to drop semantically
|
|
||||||
# meaningless receipts, so this is okay. Previously, we would drop meaningful data!
|
|
||||||
receipt_data = db_to_json(data)
|
|
||||||
if user_id in receipt_type_dict: # existing receipt
|
|
||||||
# is the existing receipt threaded and we are currently processing an unthreaded one?
|
|
||||||
if "thread_id" in receipt_type_dict[user_id] and not thread_id:
|
|
||||||
receipt_type_dict[user_id] = (
|
|
||||||
receipt_data # replace with unthreaded one
|
|
||||||
)
|
|
||||||
else: # receipt does not exist, just set it
|
|
||||||
receipt_type_dict[user_id] = receipt_data
|
|
||||||
if thread_id:
|
|
||||||
receipt_type_dict[user_id]["thread_id"] = thread_id
|
|
||||||
|
|
||||||
results = {
|
results = {
|
||||||
room_id: [results[room_id]] if room_id in results else []
|
room_id: [results[room_id]] if room_id in results else []
|
||||||
@ -485,7 +534,7 @@
     async def get_linearized_receipts_for_events(
         self,
         room_and_event_ids: Collection[Tuple[str, str]],
-    ) -> Sequence[JsonMapping]:
+    ) -> Mapping[str, Sequence[ReceiptInRoom]]:
         """Get all receipts for the given set of events.
 
         Arguments:
@ -495,6 +544,8 @@
         Returns:
             A list of receipts, one per room.
         """
+        if not room_and_event_ids:
+            return {}
 
         def get_linearized_receipts_for_events_txn(
             txn: LoggingTransaction,
@ -514,8 +565,8 @@
 
             return txn.fetchall()
 
-        # room_id -> event_id -> receipt_type -> user_id -> receipt data
-        room_to_content: Dict[str, Dict[str, Dict[str, Dict[str, JsonMapping]]]] = {}
+        # room_id -> receipts
+        room_to_receipts: Dict[str, List[ReceiptInRoom]] = {}
         for batch in batch_iter(room_and_event_ids, 1000):
             batch_results = await self.db_pool.runInteraction(
                 "get_linearized_receipts_for_events",
@ -531,33 +582,17 @@
                 thread_id,
                 data,
             ) in batch_results:
-                content = room_to_content.setdefault(room_id, {})
-                user_receipts = content.setdefault(event_id, {}).setdefault(
-                    receipt_type, {}
+                room_to_receipts.setdefault(room_id, []).append(
+                    ReceiptInRoom(
+                        receipt_type=receipt_type,
+                        user_id=user_id,
+                        event_id=event_id,
+                        thread_id=thread_id,
+                        data=db_to_json(data),
+                    )
                 )
 
-                receipt_data = db_to_json(data)
-                if thread_id is not None:
-                    receipt_data["thread_id"] = thread_id
-
-                # MSC4102: always replace threaded receipts with unthreaded ones
-                # if there is a clash. Specifically:
-                # - if there is no existing receipt, great, set the data.
-                # - if there is an existing receipt, is it threaded (thread_id
-                #   present)? YES: replace if this receipt has no thread id.
-                #   NO: do not replace. This means we will drop some receipts, but
-                #   MSC4102 is designed to drop semantically meaningless receipts,
-                #   so this is okay. Previously, we would drop meaningful data!
-                if user_id in user_receipts:
-                    if "thread_id" in user_receipts[user_id] and not thread_id:
-                        user_receipts[user_id] = receipt_data
-                else:
-                    user_receipts[user_id] = receipt_data
-
-        return [
-            {"type": EduTypes.RECEIPT, "room_id": room_id, "content": content}
-            for room_id, content in room_to_content.items()
-        ]
+        return room_to_receipts
 
     @cached(
         num_args=2,
@ -630,6 +665,74 @@
 
         return results
 
+    async def get_linearized_receipts_for_user_in_rooms(
+        self, user_id: str, room_ids: StrCollection, to_key: MultiWriterStreamToken
+    ) -> Mapping[str, Sequence[ReceiptInRoom]]:
+        """Fetch all receipts for the user in the given room.
+
+        Returns:
+            A dict from room ID to receipts in the room.
+        """
+
+        def get_linearized_receipts_for_user_in_rooms_txn(
+            txn: LoggingTransaction,
+            batch_room_ids: StrCollection,
+        ) -> List[Tuple[str, str, str, str, Optional[str], str]]:
+            clause, args = make_in_list_sql_clause(
+                self.database_engine, "room_id", batch_room_ids
+            )
+
+            sql = f"""
+                SELECT instance_name, stream_id, room_id, receipt_type, user_id, event_id, thread_id, data
+                FROM receipts_linearized
+                WHERE {clause} AND user_id = ? AND stream_id <= ?
+            """
+
+            args.append(user_id)
+            args.append(to_key.get_max_stream_pos())
+
+            txn.execute(sql, args)
+
+            return [
+                (room_id, receipt_type, user_id, event_id, thread_id, data)
+                for instance_name, stream_id, room_id, receipt_type, user_id, event_id, thread_id, data in txn
+                if MultiWriterStreamToken.is_stream_position_in_range(
+                    low=None,
+                    high=to_key,
+                    instance_name=instance_name,
+                    pos=stream_id,
+                )
+            ]
+
+        # room_id -> receipts
+        room_to_receipts: Dict[str, List[ReceiptInRoom]] = {}
+        for batch in batch_iter(room_ids, 1000):
+            batch_results = await self.db_pool.runInteraction(
+                "get_linearized_receipts_for_events",
+                get_linearized_receipts_for_user_in_rooms_txn,
+                batch,
+            )
+
+            for (
+                room_id,
+                receipt_type,
+                user_id,
+                event_id,
+                thread_id,
+                data,
+            ) in batch_results:
+                room_to_receipts.setdefault(room_id, []).append(
+                    ReceiptInRoom(
+                        receipt_type=receipt_type,
+                        user_id=user_id,
+                        event_id=event_id,
+                        thread_id=thread_id,
+                        data=db_to_json(data),
+                    )
+                )
+
+        return room_to_receipts
+
     async def get_rooms_with_receipts_between(
         self,
         room_ids: StrCollection,
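The per-room return shape pairs naturally with ReceiptInRoom.merge_to_content when building receipt EDUs. A sketch under the assumption that `store`, `room_ids`, and `to_key` already exist in the caller:

```python
async def receipts_edus_for_user(store, user_id, room_ids, to_key):
    # Hypothetical helper, not from the diff: fetch per-room receipts and
    # fold each room's list into the receipt EDU content format.
    room_to_receipts = await store.get_linearized_receipts_for_user_in_rooms(
        user_id, room_ids, to_key
    )
    return [
        {
            "type": "m.receipt",
            "room_id": room_id,
            "content": ReceiptInRoom.merge_to_content(receipts),
        }
        for room_id, receipts in room_to_receipts.items()
    ]
```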
synapse/storage/databases/main/sliding_sync.py (new file, 491 lines)
@ -0,0 +1,491 @@
+#
+# This file is licensed under the Affero General Public License (AGPL) version 3.
+#
+# Copyright (C) 2023 New Vector, Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# See the GNU Affero General Public License for more details:
+# <https://www.gnu.org/licenses/agpl-3.0.html>.
+#
+
+
+import logging
+from typing import TYPE_CHECKING, Dict, List, Mapping, Optional, Set, cast
+
+import attr
+
+from synapse.api.errors import SlidingSyncUnknownPosition
+from synapse.logging.opentracing import log_kv
+from synapse.storage._base import SQLBaseStore, db_to_json
+from synapse.storage.database import LoggingTransaction
+from synapse.types import MultiWriterStreamToken, RoomStreamToken
+from synapse.types.handlers.sliding_sync import (
+    HaveSentRoom,
+    HaveSentRoomFlag,
+    MutablePerConnectionState,
+    PerConnectionState,
+    RoomStatusMap,
+    RoomSyncConfig,
+)
+from synapse.util import json_encoder
+from synapse.util.caches.descriptors import cached
+
+if TYPE_CHECKING:
+    from synapse.storage.databases.main import DataStore
+
+logger = logging.getLogger(__name__)
+
+
+class SlidingSyncStore(SQLBaseStore):
+    async def persist_per_connection_state(
+        self,
+        user_id: str,
+        device_id: str,
+        conn_id: str,
+        previous_connection_position: Optional[int],
+        per_connection_state: "MutablePerConnectionState",
+    ) -> int:
+        """Persist updates to the per-connection state for a sliding sync
+        connection.
+
+        Returns:
+            The connection position of the newly persisted state.
+        """
+
+        # This cast is safe because the downstream code only cares about
+        # `store.get_id_for_instance(...)` and `StreamWorkerStore` is mixed
+        # alongside `SlidingSyncStore` wherever we create a store.
+        store = cast("DataStore", self)
+
+        return await self.db_pool.runInteraction(
+            "persist_per_connection_state",
+            self.persist_per_connection_state_txn,
+            user_id=user_id,
+            device_id=device_id,
+            conn_id=conn_id,
+            previous_connection_position=previous_connection_position,
+            per_connection_state=await PerConnectionStateDB.from_state(
+                per_connection_state, store
+            ),
+        )
+
+    def persist_per_connection_state_txn(
+        self,
+        txn: LoggingTransaction,
+        user_id: str,
+        device_id: str,
+        conn_id: str,
+        previous_connection_position: Optional[int],
+        per_connection_state: "PerConnectionStateDB",
+    ) -> int:
+        # First we fetch (or create) the connection key associated with the
+        # previous connection position.
+        if previous_connection_position is not None:
+            # The `previous_connection_position` is a user-supplied value, so we
+            # need to make sure that the one they supplied is actually theirs.
+            sql = """
+                SELECT connection_key
+                FROM sliding_sync_connection_positions
+                INNER JOIN sliding_sync_connections USING (connection_key)
+                WHERE
+                    connection_position = ?
+                    AND user_id = ? AND effective_device_id = ? AND conn_id = ?
+            """
+            txn.execute(
+                sql, (previous_connection_position, user_id, device_id, conn_id)
+            )
+            row = txn.fetchone()
+            if row is None:
+                raise SlidingSyncUnknownPosition()
+
+            (connection_key,) = row
+        else:
+            # We're restarting the connection, so we clear the previous existing data we
+            # used to track it. We do this here to ensure that if we get lots of
+            # one-shot requests we don't stack up lots of entries. We have `ON DELETE
+            # CASCADE` setup on the dependent tables so this will clear out all the
+            # associated data.
+            self.db_pool.simple_delete_txn(
+                txn,
+                table="sliding_sync_connections",
+                keyvalues={
+                    "user_id": user_id,
+                    "effective_device_id": device_id,
+                    "conn_id": conn_id,
+                },
+            )
+
+            (connection_key,) = self.db_pool.simple_insert_returning_txn(
+                txn,
+                table="sliding_sync_connections",
+                values={
+                    "user_id": user_id,
+                    "effective_device_id": device_id,
+                    "conn_id": conn_id,
+                    "created_ts": self._clock.time_msec(),
+                },
+                returning=("connection_key",),
+            )
+
+        # Define a new connection position for the updates
+        (connection_position,) = self.db_pool.simple_insert_returning_txn(
+            txn,
+            table="sliding_sync_connection_positions",
+            values={
+                "connection_key": connection_key,
+                "created_ts": self._clock.time_msec(),
+            },
+            returning=("connection_position",),
+        )
+
+        # We need to deduplicate the `required_state` JSON. We do this by
+        # fetching all JSON associated with the connection and comparing that
+        # with the updates to `required_state`
+
+        # Dict from required state json -> required state ID
+        required_state_to_id: Dict[str, int] = {}
+        if previous_connection_position is not None:
+            rows = self.db_pool.simple_select_list_txn(
+                txn,
+                table="sliding_sync_connection_required_state",
+                keyvalues={"connection_key": connection_key},
+                retcols=("required_state_id", "required_state"),
+            )
+            for required_state_id, required_state in rows:
+                required_state_to_id[required_state] = required_state_id
+
+        room_to_state_ids: Dict[str, int] = {}
+        unique_required_state: Dict[str, List[str]] = {}
+        for room_id, room_state in per_connection_state.room_configs.items():
+            serialized_state = json_encoder.encode(
+                # We store the required state as a sorted list of event type /
+                # state key tuples.
+                sorted(
+                    (event_type, state_key)
+                    for event_type, state_keys in room_state.required_state_map.items()
+                    for state_key in state_keys
+                )
+            )
+
+            existing_state_id = required_state_to_id.get(serialized_state)
+            if existing_state_id is not None:
+                room_to_state_ids[room_id] = existing_state_id
+            else:
+                unique_required_state.setdefault(serialized_state, []).append(room_id)
+
+        # Insert any new `required_state` json we haven't previously seen.
+        for serialized_required_state, room_ids in unique_required_state.items():
+            (required_state_id,) = self.db_pool.simple_insert_returning_txn(
+                txn,
+                table="sliding_sync_connection_required_state",
+                values={
+                    "connection_key": connection_key,
+                    "required_state": serialized_required_state,
+                },
+                returning=("required_state_id",),
+            )
+            for room_id in room_ids:
+                room_to_state_ids[room_id] = required_state_id
+
+        # Copy over state from the previous connection position (we'll overwrite
+        # these rows with any changes).
+        if previous_connection_position is not None:
+            sql = """
+                INSERT INTO sliding_sync_connection_streams
+                (connection_position, stream, room_id, room_status, last_token)
+                SELECT ?, stream, room_id, room_status, last_token
+                FROM sliding_sync_connection_streams
+                WHERE connection_position = ?
+            """
+            txn.execute(sql, (connection_position, previous_connection_position))
+
+            sql = """
+                INSERT INTO sliding_sync_connection_room_configs
+                (connection_position, room_id, timeline_limit, required_state_id)
+                SELECT ?, room_id, timeline_limit, required_state_id
+                FROM sliding_sync_connection_room_configs
+                WHERE connection_position = ?
+            """
+            txn.execute(sql, (connection_position, previous_connection_position))
+
+        # We now upsert the changes to the various streams.
+        key_values = []
+        value_values = []
+        for room_id, have_sent_room in per_connection_state.rooms._statuses.items():
+            key_values.append((connection_position, "rooms", room_id))
+            value_values.append(
+                (have_sent_room.status.value, have_sent_room.last_token)
+            )
+
+        for room_id, have_sent_room in per_connection_state.receipts._statuses.items():
+            key_values.append((connection_position, "receipts", room_id))
+            value_values.append(
+                (have_sent_room.status.value, have_sent_room.last_token)
+            )
+
+        self.db_pool.simple_upsert_many_txn(
+            txn,
+            table="sliding_sync_connection_streams",
+            key_names=(
+                "connection_position",
+                "stream",
+                "room_id",
+            ),
+            key_values=key_values,
+            value_names=(
+                "room_status",
+                "last_token",
+            ),
+            value_values=value_values,
+        )
+
+        # ... and upsert changes to the room configs.
+        keys = []
+        values = []
+        for room_id, room_config in per_connection_state.room_configs.items():
+            keys.append((connection_position, room_id))
+            values.append((room_config.timeline_limit, room_to_state_ids[room_id]))
+
+        self.db_pool.simple_upsert_many_txn(
+            txn,
+            table="sliding_sync_connection_room_configs",
+            key_names=(
+                "connection_position",
+                "room_id",
+            ),
+            key_values=keys,
+            value_names=(
+                "timeline_limit",
+                "required_state_id",
+            ),
+            value_values=values,
+        )
+
+        return connection_position
+
+    @cached(iterable=True, max_entries=100000)
+    async def get_and_clear_connection_positions(
+        self, user_id: str, device_id: str, conn_id: str, connection_position: int
+    ) -> "PerConnectionState":
+        """Get the per-connection state for the given connection position."""
+
+        per_connection_state_db = await self.db_pool.runInteraction(
+            "get_and_clear_connection_positions",
+            self._get_and_clear_connection_positions_txn,
+            user_id=user_id,
+            device_id=device_id,
+            conn_id=conn_id,
+            connection_position=connection_position,
+        )
+
+        # This cast is safe because the downstream code only cares about
+        # `store.get_id_for_instance(...)` and `StreamWorkerStore` is mixed
+        # alongside `SlidingSyncStore` wherever we create a store.
+        store = cast("DataStore", self)
+
+        return await per_connection_state_db.to_state(store)
+
+    def _get_and_clear_connection_positions_txn(
+        self,
+        txn: LoggingTransaction,
+        user_id: str,
+        device_id: str,
+        conn_id: str,
+        connection_position: int,
+    ) -> "PerConnectionStateDB":
+        # The `previous_connection_position` is a user-supplied value, so we
+        # need to make sure that the one they supplied is actually theirs.
+        sql = """
+            SELECT connection_key
+            FROM sliding_sync_connection_positions
+            INNER JOIN sliding_sync_connections USING (connection_key)
+            WHERE
+                connection_position = ?
+                AND user_id = ? AND effective_device_id = ? AND conn_id = ?
+        """
+        txn.execute(sql, (connection_position, user_id, device_id, conn_id))
+        row = txn.fetchone()
+        if row is None:
+            raise SlidingSyncUnknownPosition()
+
+        (connection_key,) = row
+
+        # Now that we have seen the client has received and used the connection
+        # position, we can delete all the other connection positions.
+        sql = """
+            DELETE FROM sliding_sync_connection_positions
+            WHERE connection_key = ? AND connection_position != ?
+        """
+        txn.execute(sql, (connection_key, connection_position))
+
+        # Fetch and create a mapping from required state ID to the actual
+        # required state for the connection.
+        rows = self.db_pool.simple_select_list_txn(
+            txn,
+            table="sliding_sync_connection_required_state",
+            keyvalues={"connection_key": connection_key},
+            retcols=(
+                "required_state_id",
+                "required_state",
+            ),
+        )
+
+        required_state_map: Dict[int, Dict[str, Set[str]]] = {}
+        for row in rows:
+            state = required_state_map[row[0]] = {}
+            for event_type, state_keys in db_to_json(row[1]):
+                state[event_type] = set(state_keys)
+
+        # Get all the room configs, looking up the required state from the map
+        # above.
+        room_config_rows = self.db_pool.simple_select_list_txn(
+            txn,
+            table="sliding_sync_connection_room_configs",
+            keyvalues={"connection_position": connection_position},
+            retcols=(
+                "room_id",
+                "timeline_limit",
+                "required_state_id",
+            ),
+        )
+
+        room_configs: Dict[str, RoomSyncConfig] = {}
+        for (
+            room_id,
+            timeline_limit,
+            required_state_id,
+        ) in room_config_rows:
+            room_configs[room_id] = RoomSyncConfig(
+                timeline_limit=timeline_limit,
+                required_state_map=required_state_map[required_state_id],
+            )
+
+        # Now look up the per-room stream data.
+        rooms: Dict[str, HaveSentRoom[str]] = {}
+        receipts: Dict[str, HaveSentRoom[str]] = {}
+
+        receipt_rows = self.db_pool.simple_select_list_txn(
+            txn,
+            table="sliding_sync_connection_streams",
+            keyvalues={"connection_position": connection_position},
+            retcols=(
+                "stream",
+                "room_id",
+                "room_status",
+                "last_token",
+            ),
+        )
+        for stream, room_id, room_status, last_token in receipt_rows:
+            have_sent_room: HaveSentRoom[str] = HaveSentRoom(
+                status=HaveSentRoomFlag(room_status), last_token=last_token
+            )
+            if stream == "rooms":
+                rooms[room_id] = have_sent_room
+            elif stream == "receipts":
+                receipts[room_id] = have_sent_room
+            else:
+                # For forwards compatibility we ignore unknown streams, as in
+                # future we want to be able to easily add more stream types.
+                logger.warning("Unrecognized sliding sync stream in DB %r", stream)
+
+        return PerConnectionStateDB(
+            rooms=RoomStatusMap(rooms),
+            receipts=RoomStatusMap(receipts),
+            room_configs=room_configs,
+        )
+
+
+@attr.s(auto_attribs=True, frozen=True)
+class PerConnectionStateDB:
+    """An equivalent to `PerConnectionState` that holds data in a format stored
+    in the DB.
+
+    The principle difference is that the tokens for the different streams are
+    serialized to strings.
+
+    When persisting this *only* contains updates to the state.
+    """
+
+    rooms: "RoomStatusMap[str]"
+    receipts: "RoomStatusMap[str]"
+
+    room_configs: Mapping[str, "RoomSyncConfig"]
+
+    @staticmethod
+    async def from_state(
+        per_connection_state: "MutablePerConnectionState", store: "DataStore"
+    ) -> "PerConnectionStateDB":
+        """Convert from a standard `PerConnectionState`"""
+        rooms = {
+            room_id: HaveSentRoom(
+                status=status.status,
+                last_token=(
+                    await status.last_token.to_string(store)
+                    if status.last_token is not None
+                    else None
+                ),
+            )
+            for room_id, status in per_connection_state.rooms.get_updates().items()
+        }
+
+        receipts = {
+            room_id: HaveSentRoom(
+                status=status.status,
+                last_token=(
+                    await status.last_token.to_string(store)
+                    if status.last_token is not None
+                    else None
+                ),
+            )
+            for room_id, status in per_connection_state.receipts.get_updates().items()
+        }
+
+        log_kv(
+            {
+                "rooms": rooms,
+                "receipts": receipts,
+                "room_configs": per_connection_state.room_configs.maps[0],
+            }
+        )
+
+        return PerConnectionStateDB(
+            rooms=RoomStatusMap(rooms),
+            receipts=RoomStatusMap(receipts),
+            room_configs=per_connection_state.room_configs.maps[0],
+        )
+
+    async def to_state(self, store: "DataStore") -> "PerConnectionState":
+        """Convert into a standard `PerConnectionState`"""
+        rooms = {
+            room_id: HaveSentRoom(
+                status=status.status,
+                last_token=(
+                    await RoomStreamToken.parse(store, status.last_token)
+                    if status.last_token is not None
+                    else None
+                ),
+            )
+            for room_id, status in self.rooms._statuses.items()
+        }
+
+        receipts = {
+            room_id: HaveSentRoom(
+                status=status.status,
+                last_token=(
+                    await MultiWriterStreamToken.parse(store, status.last_token)
+                    if status.last_token is not None
+                    else None
+                ),
+            )
+            for room_id, status in self.receipts._statuses.items()
+        }
+
+        return PerConnectionState(
+            rooms=RoomStatusMap(rooms),
+            receipts=RoomStatusMap(receipts),
+            room_configs=self.room_configs,
+        )
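A rough sketch of the request cycle the new store supports (handler-side names are assumed, not taken from the diff): persist the deltas for a response, hand the position to the client, then load and prune when the client echoes it back:

```python
async def advance_sliding_sync_connection(
    store, user_id, device_id, conn_id, prev_pos, mutable_state
):
    # Persist the deltas computed for this response; the returned position is
    # what the client should echo back as its next `pos`.
    new_pos = await store.persist_per_connection_state(
        user_id, device_id, conn_id, prev_pos, mutable_state
    )
    # When the client comes back with `new_pos`, loading it also prunes the
    # other, now-superseded positions for the connection.
    state = await store.get_and_clear_connection_positions(
        user_id, device_id, conn_id, new_pos
    )
    return new_pos, state
```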
@ -50,6 +50,7 @@ from typing import (
     Dict,
     Iterable,
     List,
+    Mapping,
     Optional,
     Protocol,
     Set,
@ -80,7 +81,7 @@ from synapse.storage.databases.main.events_worker import EventsWorkerStore
 from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine
 from synapse.storage.util.id_generators import MultiWriterIdGenerator
 from synapse.types import PersistedEventPosition, RoomStreamToken, StrCollection
-from synapse.util.caches.descriptors import cached
+from synapse.util.caches.descriptors import cached, cachedList
 from synapse.util.caches.stream_change_cache import StreamChangeCache
 from synapse.util.cancellation import cancellable
 from synapse.util.iterutils import batch_iter
@ -1381,63 +1382,18 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
             rooms
         """
 
-        min_token = end_token.stream
-        max_token = end_token.get_max_stream_pos()
-        results: Dict[str, int] = {}
-
-        # First, we check for the rooms in the stream change cache to see if we
-        # can just use the latest position from it.
-        missing_room_ids: Set[str] = set()
-        for room_id in room_ids:
-            stream_pos = self._events_stream_cache.get_max_pos_of_last_change(room_id)
-            if stream_pos and stream_pos <= min_token:
-                results[room_id] = stream_pos
-            else:
-                missing_room_ids.add(room_id)
-
-        # Next, we query the stream position from the DB. At first we fetch all
-        # positions less than the *max* stream pos in the token, then filter
-        # them down. We do this as a) this is a cheaper query, and b) the vast
-        # majority of rooms will have a latest token from before the min stream
-        # pos.
-
-        def bulk_get_last_event_pos_txn(
-            txn: LoggingTransaction, batch_room_ids: StrCollection
-        ) -> Dict[str, int]:
-            # This query fetches the latest stream position in the rooms before
-            # the given max position.
-            clause, args = make_in_list_sql_clause(
-                self.database_engine, "room_id", batch_room_ids
-            )
-            sql = f"""
-                SELECT room_id, (
-                    SELECT stream_ordering FROM events AS e
-                    LEFT JOIN rejections USING (event_id)
-                    WHERE e.room_id = r.room_id
-                        AND stream_ordering <= ?
-                        AND NOT outlier
-                        AND rejection_reason IS NULL
-                    ORDER BY stream_ordering DESC
-                    LIMIT 1
-                )
-                FROM rooms AS r
-                WHERE {clause}
-            """
-            txn.execute(sql, [max_token] + args)
-            return {row[0]: row[1] for row in txn}
-
-        recheck_rooms: Set[str] = set()
-        for batched in batch_iter(missing_room_ids, 1000):
-            result = await self.db_pool.runInteraction(
-                "bulk_get_last_event_pos_in_room_before_stream_ordering",
-                bulk_get_last_event_pos_txn,
-                batched,
-            )
+        # First we just get the latest positions for the room, as the vast
+        # majority of them will be before the given end token anyway. By doing
+        # this we can cache most rooms.
+        uncapped_results = await self._bulk_get_max_event_pos(room_ids)
 
         # Check that the stream position for the rooms are from before the
         # minimum position of the token. If not then we need to fetch more
         # rows.
-            for room_id, stream in result.items():
+        results: Dict[str, int] = {}
+        recheck_rooms: Set[str] = set()
+        min_token = end_token.stream
+        for room_id, stream in uncapped_results.items():
             if stream <= min_token:
                 results[room_id] = stream
             else:
@ -1446,49 +1402,96 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
         if not recheck_rooms:
             return results
 
-        # For the remaining rooms we need to fetch all rows between the min and
-        # max stream positions in the end token, and filter out the rows that
-        # are after the end token.
-        #
-        # This query should be fast as the range between the min and max should
-        # be small.
-
-        def bulk_get_last_event_pos_recheck_txn(
-            txn: LoggingTransaction, batch_room_ids: StrCollection
-        ) -> Dict[str, int]:
-            clause, args = make_in_list_sql_clause(
-                self.database_engine, "room_id", batch_room_ids
-            )
-            sql = f"""
-                SELECT room_id, instance_name, stream_ordering
-                FROM events
-                WHERE ? < stream_ordering AND stream_ordering <= ?
-                    AND NOT outlier
-                    AND rejection_reason IS NULL
-                    AND {clause}
-                ORDER BY stream_ordering ASC
-            """
-            txn.execute(sql, [min_token, max_token] + args)
-
-            # We take the max stream ordering that is less than the token. Since
-            # we ordered by stream ordering we just need to iterate through and
-            # take the last matching stream ordering.
-            txn_results: Dict[str, int] = {}
-            for row in txn:
-                room_id = row[0]
-                event_pos = PersistedEventPosition(row[1], row[2])
-                if not event_pos.persisted_after(end_token):
-                    txn_results[room_id] = event_pos.stream
-
-            return txn_results
-
-        for batched in batch_iter(recheck_rooms, 1000):
-            recheck_result = await self.db_pool.runInteraction(
-                "bulk_get_last_event_pos_in_room_before_stream_ordering_recheck",
-                bulk_get_last_event_pos_recheck_txn,
-                batched,
-            )
-            results.update(recheck_result)
+        # There shouldn't be many rooms that we need to recheck, so we do them
+        # one-by-one.
+        for room_id in recheck_rooms:
+            result = await self.get_last_event_pos_in_room_before_stream_ordering(
+                room_id, end_token
+            )
+            if result is not None:
+                results[room_id] = result[1].stream
+
+        return results
+
+    @cached()
+    async def _get_max_event_pos(self, room_id: str) -> int:
+        raise NotImplementedError()
+
+    @cachedList(cached_method_name="_get_max_event_pos", list_name="room_ids")
+    async def _bulk_get_max_event_pos(
+        self, room_ids: StrCollection
+    ) -> Mapping[str, int]:
+        """Fetch the max position of a persisted event in the room."""
+
+        # We need to be careful not to return positions ahead of the current
+        # positions, so we get the current token now and cap our queries to it.
+        now_token = self.get_room_max_token()
+        max_pos = now_token.get_max_stream_pos()
+
+        results: Dict[str, int] = {}
+
+        # First, we check for the rooms in the stream change cache to see if we
+        # can just use the latest position from it.
+        missing_room_ids: Set[str] = set()
+        for room_id in room_ids:
+            stream_pos = self._events_stream_cache.get_max_pos_of_last_change(room_id)
+            if stream_pos is not None:
+                results[room_id] = stream_pos
+            else:
+                missing_room_ids.add(room_id)
+
+        if not missing_room_ids:
+            return results
+
+        # Next, we query the stream position from the DB. At first we fetch all
+        # positions less than the *max* stream pos in the token, then filter
+        # them down. We do this as a) this is a cheaper query, and b) the vast
+        # majority of rooms will have a latest token from before the min stream
+        # pos.
+
+        def bulk_get_max_event_pos_txn(
+            txn: LoggingTransaction, batched_room_ids: StrCollection
+        ) -> Dict[str, int]:
+            clause, args = make_in_list_sql_clause(
+                self.database_engine, "room_id", batched_room_ids
+            )
+            sql = f"""
+                SELECT room_id, (
+                    SELECT stream_ordering FROM events AS e
+                    LEFT JOIN rejections USING (event_id)
+                    WHERE e.room_id = r.room_id
+                        AND e.stream_ordering <= ?
+                        AND NOT outlier
+                        AND rejection_reason IS NULL
+                    ORDER BY stream_ordering DESC
+                    LIMIT 1
+                )
+                FROM rooms AS r
+                WHERE {clause}
+            """
+            txn.execute(sql, [max_pos] + args)
+            return {row[0]: row[1] for row in txn}
+
+        recheck_rooms: Set[str] = set()
+        for batched in batch_iter(room_ids, 1000):
+            batch_results = await self.db_pool.runInteraction(
+                "_bulk_get_max_event_pos", bulk_get_max_event_pos_txn, batched
+            )
+            for room_id, stream_ordering in batch_results.items():
+                if stream_ordering <= now_token.stream:
+                    results.update(batch_results)
+                else:
+                    recheck_rooms.add(room_id)
+
+        # We now need to handle rooms where the above query returned a stream
+        # position that was potentially too new. This should happen very rarely
+        # so we just query the rooms one-by-one
+        for room_id in recheck_rooms:
+            result = await self.get_last_event_pos_in_room_before_stream_ordering(
+                room_id, now_token
+            )
+            if result is not None:
+                results[room_id] = result[1].stream
 
         return results
 
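The `@cached()` stub paired with `@cachedList` above is a recurring Synapse idiom: the single-key method exists only to own the cache and is never called directly, while the batched method does the real work and back-fills the cache per key. A minimal sketch of the shape, with invented names:

```python
from typing import Mapping

from synapse.types import StrCollection
from synapse.util.caches.descriptors import cached, cachedList

class ExampleStore:
    @cached()
    async def _get_one(self, key: str) -> int:
        # Never called directly; exists to own the per-key cache.
        raise NotImplementedError()

    @cachedList(cached_method_name="_get_one", list_name="keys")
    async def _get_many(self, keys: StrCollection) -> Mapping[str, int]:
        # Real work happens here; results are cached per key under _get_one.
        return {k: len(k) for k in keys}
```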
@ -28,6 +28,11 @@ if TYPE_CHECKING:
     from synapse.storage.database import LoggingDatabaseConnection
 
 
+# A string that will be replaced with the appropriate auto increment directive
+# for the database engine, expands to an auto incrementing integer primary key.
+AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER = "$%AUTO_INCREMENT_PRIMARY_KEY%$"
+
+
 class IsolationLevel(IntEnum):
     READ_COMMITTED: int = 1
     REPEATABLE_READ: int = 2
@ -25,6 +25,7 @@ from typing import TYPE_CHECKING, Any, Mapping, NoReturn, Optional, Tuple, cast
 import psycopg2.extensions
 
 from synapse.storage.engines._base import (
+    AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER,
     BaseDatabaseEngine,
     IncorrectDatabaseSetup,
     IsolationLevel,
@ -256,4 +257,10 @@ class PostgresEngine(
         executing the script in its own transaction. The script transaction is
         left open and it is the responsibility of the caller to commit it.
         """
+        # Replace auto increment placeholder with the appropriate directive
+        script = script.replace(
+            AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER,
+            "BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY",
+        )
+
         cursor.execute(f"COMMIT; BEGIN TRANSACTION; {script}")
|
@@ -25,6 +25,7 @@ import threading
from typing import TYPE_CHECKING, Any, List, Mapping, Optional

from synapse.storage.engines import BaseDatabaseEngine
from synapse.storage.engines._base import AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER
from synapse.storage.types import Cursor

if TYPE_CHECKING:
@@ -168,6 +169,11 @@ class Sqlite3Engine(BaseDatabaseEngine[sqlite3.Connection, sqlite3.Cursor]):
        > first. No other implicit transaction control is performed; any transaction
        > control must be added to sql_script.
        """
        # Replace auto increment placeholder with the appropriate directive
        script = script.replace(
            AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER, "INTEGER PRIMARY KEY AUTOINCREMENT"
        )

        # The implementation of `executescript` can be found at
        # https://github.com/python/cpython/blob/3.11/Modules/_sqlite/cursor.c#L1035.
        cursor.executescript(f"BEGIN TRANSACTION; {script}")
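Together, these two hunks let a schema delta be written once against the placeholder and run on either engine. A rough illustration of what the substitution produces; the `ddl` string below is an invented example, not a real delta:

    AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER = "$%AUTO_INCREMENT_PRIMARY_KEY%$"

    # A hypothetical delta using the placeholder.
    ddl = "CREATE TABLE example(id $%AUTO_INCREMENT_PRIMARY_KEY%$, name TEXT);"

    # PostgresEngine.executescript rewrites it to an identity column...
    postgres_sql = ddl.replace(
        AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER,
        "BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY",
    )
    # ...while Sqlite3Engine uses SQLite's AUTOINCREMENT syntax instead.
    sqlite_sql = ddl.replace(
        AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER, "INTEGER PRIMARY KEY AUTOINCREMENT"
    )

    assert "GENERATED ALWAYS AS IDENTITY" in postgres_sql
    assert "AUTOINCREMENT" in sqlite_sql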
@@ -19,7 +19,7 @@
#
#

-SCHEMA_VERSION = 86  # remember to update the list below when updating
+SCHEMA_VERSION = 87  # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema

This should be incremented whenever the codebase changes its requirements on the
@@ -142,6 +142,11 @@ Changes in SCHEMA_VERSION = 85

Changes in SCHEMA_VERSION = 86
- Add a column `authenticated` to the tables `local_media_repository` and `remote_media_cache`

Changes in SCHEMA_VERSION = 87
- Add tables for storing the per-connection state for sliding sync requests:
  sliding_sync_connections, sliding_sync_connection_positions, sliding_sync_connection_required_state,
  sliding_sync_connection_room_configs, sliding_sync_connection_streams
"""
@@ -0,0 +1,81 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.


-- Table to track active sliding sync connections.
--
-- A new connection will be created for every sliding sync request without a
-- `since` token for a given `conn_id` for a device.
--
-- Once a new connection is created and used we delete all other connections for
-- the `conn_id`.
CREATE TABLE sliding_sync_connections(
    connection_key $%AUTO_INCREMENT_PRIMARY_KEY%$,
    user_id TEXT NOT NULL,
    -- Generally the device ID, but may be something else for e.g. puppeted accounts.
    effective_device_id TEXT NOT NULL,
    conn_id TEXT NOT NULL,
    created_ts BIGINT NOT NULL
);

CREATE INDEX sliding_sync_connections_idx ON sliding_sync_connections(user_id, effective_device_id, conn_id);
CREATE INDEX sliding_sync_connections_ts_idx ON sliding_sync_connections(created_ts);

-- We track per-connection state by associating changes to the state with
-- connection positions. This ensures that we correctly track state even if we
-- see retries of requests.
--
-- If the client starts a "new" connection (by not specifying a since token),
-- we'll clear out the other connections (to ensure that we don't end up with
-- lots of connection keys).
CREATE TABLE sliding_sync_connection_positions(
    connection_position $%AUTO_INCREMENT_PRIMARY_KEY%$,
    connection_key BIGINT NOT NULL REFERENCES sliding_sync_connections(connection_key) ON DELETE CASCADE,
    created_ts BIGINT NOT NULL
);

CREATE INDEX sliding_sync_connection_positions_key ON sliding_sync_connection_positions(connection_key);
CREATE INDEX sliding_sync_connection_positions_ts_idx ON sliding_sync_connection_positions(created_ts);


-- To save space we deduplicate the `required_state` json by assigning IDs to
-- different values.
CREATE TABLE sliding_sync_connection_required_state(
    required_state_id $%AUTO_INCREMENT_PRIMARY_KEY%$,
    connection_key BIGINT NOT NULL REFERENCES sliding_sync_connections(connection_key) ON DELETE CASCADE,
    required_state TEXT NOT NULL  -- We store this as a json list of event type / state key tuples.
);

CREATE INDEX sliding_sync_connection_required_state_conn_pos ON sliding_sync_connection_required_state(connection_key);


-- Stores the room configs we have seen for rooms in a connection.
CREATE TABLE sliding_sync_connection_room_configs(
    connection_position BIGINT NOT NULL REFERENCES sliding_sync_connection_positions(connection_position) ON DELETE CASCADE,
    room_id TEXT NOT NULL,
    timeline_limit BIGINT NOT NULL,
    required_state_id BIGINT NOT NULL REFERENCES sliding_sync_connection_required_state(required_state_id)
);

CREATE UNIQUE INDEX sliding_sync_connection_room_configs_idx ON sliding_sync_connection_room_configs(connection_position, room_id);

-- Stores what data we have sent for given streams down given connections.
CREATE TABLE sliding_sync_connection_streams(
    connection_position BIGINT NOT NULL REFERENCES sliding_sync_connection_positions(connection_position) ON DELETE CASCADE,
    stream TEXT NOT NULL,  -- e.g. "events" or "receipts"
    room_id TEXT NOT NULL,
    room_status TEXT NOT NULL,  -- "live" or "previously", i.e. the `HaveSentRoomFlag` value
    last_token TEXT  -- For "previously" the token for the stream we have sent up to.
);

CREATE UNIQUE INDEX sliding_sync_connection_streams_idx ON sliding_sync_connection_streams(connection_position, room_id, stream);
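The lifecycle these tables encode is easiest to see outside Synapse's storage layer. A self-contained sketch using plain sqlite3 (not the actual Synapse storage code), with the placeholder already expanded to SQLite's `INTEGER PRIMARY KEY AUTOINCREMENT` as the engine hunk above does, and only the two tables involved:

    import sqlite3
    import time

    # A request without a `since` token creates a connection row; every
    # response then records a new position hanging off that connection, so
    # retried requests can replay an old position safely.
    db = sqlite3.connect(":memory:")
    db.executescript(
        """
        CREATE TABLE sliding_sync_connections(
            connection_key INTEGER PRIMARY KEY AUTOINCREMENT,
            user_id TEXT NOT NULL,
            effective_device_id TEXT NOT NULL,
            conn_id TEXT NOT NULL,
            created_ts BIGINT NOT NULL
        );
        CREATE TABLE sliding_sync_connection_positions(
            connection_position INTEGER PRIMARY KEY AUTOINCREMENT,
            connection_key BIGINT NOT NULL
                REFERENCES sliding_sync_connections(connection_key) ON DELETE CASCADE,
            created_ts BIGINT NOT NULL
        );
        """
    )

    now = int(time.time() * 1000)
    cur = db.execute(
        "INSERT INTO sliding_sync_connections"
        " (user_id, effective_device_id, conn_id, created_ts) VALUES (?, ?, ?, ?)",
        ("@alice:example.org", "DEVICE1", "conn-1", now),
    )
    connection_key = cur.lastrowid

    # One row per response sent down this connection.
    db.execute(
        "INSERT INTO sliding_sync_connection_positions (connection_key, created_ts)"
        " VALUES (?, ?)",
        (connection_key, now),
    )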
@@ -17,33 +17,9 @@
# [This file includes modifications made by New Vector Limited]
#
#
-from enum import Enum
-from typing import TYPE_CHECKING, Dict, Final, List, Mapping, Optional, Sequence, Tuple
-
-import attr
-from typing_extensions import TypedDict
-
-from synapse._pydantic_compat import HAS_PYDANTIC_V2
-
-if TYPE_CHECKING or HAS_PYDANTIC_V2:
-    from pydantic.v1 import Extra
-else:
-    from pydantic import Extra
-
-from synapse.events import EventBase
-from synapse.types import (
-    DeviceListUpdates,
-    JsonDict,
-    JsonMapping,
-    Requester,
-    SlidingSyncStreamToken,
-    StreamToken,
-    UserID,
-)
-from synapse.types.rest.client import SlidingSyncBody
-
-if TYPE_CHECKING:
-    from synapse.handlers.relations import BundledAggregations
+from typing import List, Optional, TypedDict


class ShutdownRoomParams(TypedDict):
@@ -101,335 +77,3 @@ class ShutdownRoomResponse(TypedDict):
    failed_to_kick_users: List[str]
    local_aliases: List[str]
    new_room_id: Optional[str]
[... 335 removed lines elided: the sliding sync types (SlidingSyncConfig, OperationType, SlidingSyncResult and its nested result/extension classes) are deleted here and move, almost verbatim, to the new synapse/types/handlers/sliding_sync.py file below ...]
859
synapse/types/handlers/sliding_sync.py
Normal file
@@ -0,0 +1,859 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#

import logging
import typing
from collections import ChainMap
from enum import Enum
from typing import (
    TYPE_CHECKING,
    AbstractSet,
    Callable,
    Dict,
    Final,
    Generic,
    List,
    Mapping,
    MutableMapping,
    Optional,
    Sequence,
    Set,
    Tuple,
    TypeVar,
    cast,
)

import attr

from synapse._pydantic_compat import HAS_PYDANTIC_V2
from synapse.api.constants import EventTypes
from synapse.types import MultiWriterStreamToken, RoomStreamToken, StrCollection, UserID

if TYPE_CHECKING or HAS_PYDANTIC_V2:
    from pydantic.v1 import Extra
else:
    from pydantic import Extra

from synapse.events import EventBase
from synapse.types import (
    DeviceListUpdates,
    JsonDict,
    JsonMapping,
    Requester,
    SlidingSyncStreamToken,
    StreamToken,
)
from synapse.types.rest.client import SlidingSyncBody

if TYPE_CHECKING:
    from synapse.handlers.relations import BundledAggregations

logger = logging.getLogger(__name__)


class SlidingSyncConfig(SlidingSyncBody):
    """
    Inherit from `SlidingSyncBody` since we need all of the same fields and add a few
    extra fields that we need in the handler
    """

    user: UserID
    requester: Requester

    # Pydantic config
    class Config:
        # By default, ignore fields that we don't recognise.
        extra = Extra.ignore
        # By default, don't allow fields to be reassigned after parsing.
        allow_mutation = False
        # Allow custom types like `UserID` to be used in the model
        arbitrary_types_allowed = True


class OperationType(Enum):
    """
    Represents the operation types in a Sliding Sync window.

    Attributes:
        SYNC: Sets a range of entries. Clients SHOULD discard what they previously knew about
            entries in this range.
        INSERT: Sets a single entry. If the position is not empty then clients MUST move
            entries to the left or the right depending on where the closest empty space is.
        DELETE: Remove a single entry. Often comes before an INSERT to allow entries to move
            places.
        INVALIDATE: Remove a range of entries. Clients MAY persist the invalidated range for
            offline support, but they should be treated as empty when additional operations
            which concern indexes in the range arrive from the server.
    """

    SYNC: Final = "SYNC"
    INSERT: Final = "INSERT"
    DELETE: Final = "DELETE"
    INVALIDATE: Final = "INVALIDATE"
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SlidingSyncResult:
    """
    The Sliding Sync result to be serialized to JSON for a response.

    Attributes:
        next_pos: The next position token in the sliding window to request (next_batch).
        lists: Sliding window API. A map of list key to list results.
        rooms: Room subscription API. A map of room ID to room results.
        extensions: Extensions API. A map of extension key to extension results.
    """

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class RoomResult:
        """
        Attributes:
            name: Room name or calculated room name.
            avatar: Room avatar
            heroes: List of stripped membership events (containing `user_id` and optionally
                `avatar_url` and `displayname`) for the users used to calculate the room name.
            is_dm: Flag to specify whether the room is a direct-message room (most likely
                between two people).
            initial: Flag which is set when this is the first time the server is sending this
                data on this connection. Clients can use this flag to replace or update
                their local state. When there is an update, servers MUST omit this flag
                entirely and NOT send "initial":false as this is wasteful on bandwidth. The
                absence of this flag means 'false'.
            unstable_expanded_timeline: Flag which is set if we're returning more historic
                events due to the timeline limit having increased. See "XXX: Odd behavior"
                comment in `synapse.handlers.sliding_sync`.
            required_state: The current state of the room
            timeline: Latest events in the room. The last event is the most recent.
            bundled_aggregations: A mapping of event ID to the bundled aggregations for
                the timeline events above. This allows clients to show accurate reaction
                counts (or edits, threads), even if some of the reaction events were skipped
                over in a gappy sync.
            stripped_state: Stripped state events (for rooms where the user is
                invited/knocked). Same as `rooms.invite.$room_id.invite_state` in sync v2,
                absent on joined/left rooms
            prev_batch: A token that can be passed as a start parameter to the
                `/rooms/<room_id>/messages` API to retrieve earlier messages.
            limited: True if there are more events than `timeline_limit` looking
                backwards from the `response.pos` to the `request.pos`.
            num_live: The number of timeline events which have just occurred and are not historical.
                The last N events are 'live' and should be treated as such. This is mostly
                useful to determine whether a given @mention event should make a noise or not.
                Clients cannot rely solely on the absence of `initial: true` to determine live
                events because if a room not in the sliding window bumps into the window because
                of an @mention it will have `initial: true` yet contain a single live event
                (with potentially other old events in the timeline).
            bump_stamp: The `stream_ordering` of the last event according to the
                `bump_event_types`. This helps clients sort more readily without them
                needing to pull in a bunch of the timeline to determine the last activity.
                `bump_event_types` is a thing because for example, we don't want display
                name changes to mark the room as unread and bump it to the top. For
                encrypted rooms, we just have to consider any activity as a bump because we
                can't see the content and the client has to figure it out for themselves.
            joined_count: The number of users with membership of join, including the client's
                own user ID. (same as sync v2 `m.joined_member_count`)
            invited_count: The number of users with membership of invite. (same as sync v2
                `m.invited_member_count`)
            notification_count: The total number of unread notifications for this room. (same
                as sync v2)
            highlight_count: The number of unread notifications for this room with the highlight
                flag set. (same as sync v2)
        """

        @attr.s(slots=True, frozen=True, auto_attribs=True)
        class StrippedHero:
            user_id: str
            display_name: Optional[str]
            avatar_url: Optional[str]

        name: Optional[str]
        avatar: Optional[str]
        heroes: Optional[List[StrippedHero]]
        is_dm: bool
        initial: bool
        unstable_expanded_timeline: bool
        # Should be empty for invite/knock rooms with `stripped_state`
        required_state: List[EventBase]
        # Should be empty for invite/knock rooms with `stripped_state`
        timeline_events: List[EventBase]
        bundled_aggregations: Optional[Dict[str, "BundledAggregations"]]
        # Optional because it's only relevant to invite/knock rooms
        stripped_state: List[JsonDict]
        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
        prev_batch: Optional[StreamToken]
        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
        limited: Optional[bool]
        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
        num_live: Optional[int]
        bump_stamp: int
        joined_count: int
        invited_count: int
        notification_count: int
        highlight_count: int

        def __bool__(self) -> bool:
            return (
                # If this is the first time the client is seeing the room, we should not filter it out
                # under any circumstance.
                self.initial
                # We need to let the client know if there are any new events
                or bool(self.required_state)
                or bool(self.timeline_events)
                or bool(self.stripped_state)
            )

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class SlidingWindowList:
        """
        Attributes:
            count: The total number of entries in the list. Always present if this list
                is.
            ops: The sliding list operations to perform.
        """

        @attr.s(slots=True, frozen=True, auto_attribs=True)
        class Operation:
            """
            Attributes:
                op: The operation type to perform.
                range: Which index positions are affected by this operation. These are
                    both inclusive.
                room_ids: Which room IDs are affected by this operation. These IDs match
                    up to the positions in the `range`, so the last room ID in this list
                    matches the 9th index. The room data is held in a separate object.
            """

            op: OperationType
            range: Tuple[int, int]
            room_ids: List[str]

        count: int
        ops: List[Operation]

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class Extensions:
        """Responses for extensions

        Attributes:
            to_device: The to-device extension (MSC3885)
            e2ee: The E2EE device extension (MSC3884)
        """

        @attr.s(slots=True, frozen=True, auto_attribs=True)
        class ToDeviceExtension:
            """The to-device extension (MSC3885)

            Attributes:
                next_batch: The to-device stream token the client should use
                    to get more results
                events: A list of to-device messages for the client
            """

            next_batch: str
            events: Sequence[JsonMapping]

            def __bool__(self) -> bool:
                return bool(self.events)

        @attr.s(slots=True, frozen=True, auto_attribs=True)
        class E2eeExtension:
            """The E2EE device extension (MSC3884)

            Attributes:
                device_list_updates: List of user_ids whose devices have changed or left (only
                    present on incremental syncs).
                device_one_time_keys_count: Map from key algorithm to the number of
                    unclaimed one-time keys currently held on the server for this device. If
                    an algorithm is unlisted, the count for that algorithm is assumed to be
                    zero. If this entire parameter is missing, the count for all algorithms
                    is assumed to be zero.
                device_unused_fallback_key_types: List of unused fallback key algorithms
                    for this device.
            """

            # Only present on incremental syncs
            device_list_updates: Optional[DeviceListUpdates]
            device_one_time_keys_count: Mapping[str, int]
            device_unused_fallback_key_types: Sequence[str]

            def __bool__(self) -> bool:
                # Note that "signed_curve25519" is always returned in key count responses
                # regardless of whether we uploaded any keys for it. This is necessary until
                # https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
                #
                # Also related:
                # https://github.com/element-hq/element-android/issues/3725 and
                # https://github.com/matrix-org/synapse/issues/10456
                default_otk = self.device_one_time_keys_count.get("signed_curve25519")
                more_than_default_otk = len(self.device_one_time_keys_count) > 1 or (
                    default_otk is not None and default_otk > 0
                )

                return bool(
                    more_than_default_otk
                    or self.device_list_updates
                    or self.device_unused_fallback_key_types
                )

        @attr.s(slots=True, frozen=True, auto_attribs=True)
        class AccountDataExtension:
            """The Account Data extension (MSC3959)

            Attributes:
                global_account_data_map: Mapping from `type` to `content` of global account
                    data events.
                account_data_by_room_map: Mapping from room_id to mapping of `type` to
                    `content` of room account data events.
            """

            global_account_data_map: Mapping[str, JsonMapping]
            account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]]

            def __bool__(self) -> bool:
                return bool(
                    self.global_account_data_map or self.account_data_by_room_map
                )

        @attr.s(slots=True, frozen=True, auto_attribs=True)
        class ReceiptsExtension:
            """The Receipts extension (MSC3960)

            Attributes:
                room_id_to_receipt_map: Mapping from room_id to `m.receipt` ephemeral
                    event (type, content)
            """

            room_id_to_receipt_map: Mapping[str, JsonMapping]

            def __bool__(self) -> bool:
                return bool(self.room_id_to_receipt_map)

        @attr.s(slots=True, frozen=True, auto_attribs=True)
        class TypingExtension:
            """The Typing Notification extension (MSC3961)

            Attributes:
                room_id_to_typing_map: Mapping from room_id to `m.typing` ephemeral
                    event (type, content)
            """

            room_id_to_typing_map: Mapping[str, JsonMapping]

            def __bool__(self) -> bool:
                return bool(self.room_id_to_typing_map)

        to_device: Optional[ToDeviceExtension] = None
        e2ee: Optional[E2eeExtension] = None
        account_data: Optional[AccountDataExtension] = None
        receipts: Optional[ReceiptsExtension] = None
        typing: Optional[TypingExtension] = None

        def __bool__(self) -> bool:
            return bool(
                self.to_device
                or self.e2ee
                or self.account_data
                or self.receipts
                or self.typing
            )

    next_pos: SlidingSyncStreamToken
    lists: Mapping[str, SlidingWindowList]
    rooms: Dict[str, RoomResult]
    extensions: Extensions

    def __bool__(self) -> bool:
        """Make the result appear empty if there are no updates. This is used
        to tell if the notifier needs to wait for more events when polling for
        events.
        """
        # We don't include `self.lists` here, as a) `lists` is always non-empty even if
        # there are no changes, and b) since we're sorting rooms by `stream_ordering` of
        # the latest activity, anything that would cause the order to change would end
        # up in `self.rooms` and cause us to send down the change.
        return bool(self.rooms or self.extensions)

    @staticmethod
    def empty(next_pos: SlidingSyncStreamToken) -> "SlidingSyncResult":
        "Return a new empty result"
        return SlidingSyncResult(
            next_pos=next_pos,
            lists={},
            rooms={},
            extensions=SlidingSyncResult.Extensions(),
        )
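To make the nested result shapes concrete, here is a minimal sketch that assembles a single-list window; the room IDs and count are invented:

    def build_example_window() -> "SlidingSyncResult.SlidingWindowList":
        # Invented values: a window list that SYNCs positions 0..2 (inclusive).
        op = SlidingSyncResult.SlidingWindowList.Operation(
            op=OperationType.SYNC,
            range=(0, 2),
            room_ids=["!a:example.org", "!b:example.org", "!c:example.org"],
        )
        return SlidingSyncResult.SlidingWindowList(count=3, ops=[op])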
class StateValues:
    """
    Understood values of the (type, state_key) tuple in `required_state`.
    """

    # Include all state events of the given type
    WILDCARD: Final = "*"
    # Lazy-load room membership events (include room membership events for any event
    # `sender` in the timeline). We only give special meaning to this value when it's a
    # `state_key`.
    LAZY: Final = "$LAZY"
    # Substitute with the requester's user ID. Typically used by clients to get
    # the user's membership.
    ME: Final = "$ME"
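A client's `required_state` mixes literal (type, state_key) pairs with these sentinels. For example (an invented request, not taken from this changeset):

    # Invented example of the (type, state_key) pairs a client might request.
    required_state = [
        ("m.room.create", ""),                # one specific state event
        ("m.room.member", StateValues.ME),    # the requester's own membership
        ("m.room.member", StateValues.LAZY),  # plus lazy-loaded timeline senders
    ]
    # ("*", "*") would instead request all current state in the room.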
# We can't freeze this class because we want to update it in place with the
# de-duplicated data.
@attr.s(slots=True, auto_attribs=True, frozen=True)
class RoomSyncConfig:
    """
    Holds the config for what data we should fetch for a room in the sync response.

    Attributes:
        timeline_limit: The maximum number of events to return in the timeline.

        required_state_map: Map from state event type to state_keys requested for the
            room. The values are close to `StateKey` but actually use a syntax where you
            can provide `*` wildcard and `$LAZY` for lazy-loading room members.
    """

    timeline_limit: int
    required_state_map: Mapping[str, AbstractSet[str]]

    @classmethod
    def from_room_config(
        cls,
        room_params: SlidingSyncConfig.CommonRoomParameters,
    ) -> "RoomSyncConfig":
        """
        Create a `RoomSyncConfig` from a `SlidingSyncList`/`RoomSubscription` config.

        Args:
            room_params: `SlidingSyncConfig.SlidingSyncList` or `SlidingSyncConfig.RoomSubscription`
        """
        required_state_map: Dict[str, Set[str]] = {}
        for (
            state_type,
            state_key,
        ) in room_params.required_state:
            # If we already have a wildcard for this specific `state_key`, we don't need
            # to add it since the wildcard already covers it.
            if state_key in required_state_map.get(StateValues.WILDCARD, set()):
                continue

            # If we already have a wildcard `state_key` for this `state_type`, we don't need
            # to add anything else
            if StateValues.WILDCARD in required_state_map.get(state_type, set()):
                continue

            # If we're getting wildcards for the `state_type` and `state_key`, that's
            # all that matters so get rid of any other entries
            if state_type == StateValues.WILDCARD and state_key == StateValues.WILDCARD:
                required_state_map = {StateValues.WILDCARD: {StateValues.WILDCARD}}
                # We can break, since we don't need to add anything else
                break

            # If we're getting a wildcard for the `state_type`, get rid of any other
            # entries with the same `state_key`, since the wildcard will cover it already.
            elif state_type == StateValues.WILDCARD:
                # Get rid of any entries that match the `state_key`
                #
                # Make a copy so we don't run into an error: `dictionary changed size
                # during iteration`, when we remove items
                for (
                    existing_state_type,
                    existing_state_key_set,
                ) in list(required_state_map.items()):
                    # Make a copy so we don't run into an error: `Set changed size during
                    # iteration`, when we filter out and remove items
                    for existing_state_key in existing_state_key_set.copy():
                        if existing_state_key == state_key:
                            existing_state_key_set.remove(state_key)

                    # If we've left the `set()` empty, remove it from the map
                    if existing_state_key_set == set():
                        required_state_map.pop(existing_state_type, None)

            # If we're getting a wildcard `state_key`, get rid of any other state_keys
            # for this `state_type` since the wildcard will cover it already.
            if state_key == StateValues.WILDCARD:
                required_state_map[state_type] = {state_key}
            # Otherwise, just add it to the set
            else:
                if required_state_map.get(state_type) is None:
                    required_state_map[state_type] = {state_key}
                else:
                    required_state_map[state_type].add(state_key)

        return cls(
            timeline_limit=room_params.timeline_limit,
            required_state_map=required_state_map,
        )

    def combine_room_sync_config(
        self, other_room_sync_config: "RoomSyncConfig"
    ) -> "RoomSyncConfig":
        """
        Combine this `RoomSyncConfig` with another `RoomSyncConfig` and return the
        superset union of the two.
        """
        timeline_limit = self.timeline_limit
        required_state_map = {
            event_type: set(state_keys)
            for event_type, state_keys in self.required_state_map.items()
        }

        # Take the highest timeline limit
        if timeline_limit < other_room_sync_config.timeline_limit:
            timeline_limit = other_room_sync_config.timeline_limit

        # Union the required state
        for (
            state_type,
            state_key_set,
        ) in other_room_sync_config.required_state_map.items():
            # If we already have a wildcard for everything, we don't need to add
            # anything else
            if StateValues.WILDCARD in required_state_map.get(
                StateValues.WILDCARD, set()
            ):
                break

            # If we already have a wildcard `state_key` for this `state_type`, we don't need
            # to add anything else
            if StateValues.WILDCARD in required_state_map.get(state_type, set()):
                continue

            # If we're getting wildcards for the `state_type` and `state_key`, that's
            # all that matters so get rid of any other entries
            if (
                state_type == StateValues.WILDCARD
                and StateValues.WILDCARD in state_key_set
            ):
                required_state_map = {state_type: {StateValues.WILDCARD}}
                # We can break, since we don't need to add anything else
                break

            for state_key in state_key_set:
                # If we already have a wildcard for this specific `state_key`, we don't need
                # to add it since the wildcard already covers it.
                if state_key in required_state_map.get(StateValues.WILDCARD, set()):
                    continue

                # If we're getting a wildcard for the `state_type`, get rid of any other
                # entries with the same `state_key`, since the wildcard will cover it already.
                if state_type == StateValues.WILDCARD:
                    # Get rid of any entries that match the `state_key`
                    #
                    # Make a copy so we don't run into an error: `dictionary changed size
                    # during iteration`, when we remove items
                    for existing_state_type, existing_state_key_set in list(
                        required_state_map.items()
                    ):
                        # Make a copy so we don't run into an error: `Set changed size during
                        # iteration`, when we filter out and remove items
                        for existing_state_key in existing_state_key_set.copy():
                            if existing_state_key == state_key:
                                existing_state_key_set.remove(state_key)

                        # If we've left the `set()` empty, remove it from the map
                        if existing_state_key_set == set():
                            required_state_map.pop(existing_state_type, None)

                # If we're getting a wildcard `state_key`, get rid of any other state_keys
                # for this `state_type` since the wildcard will cover it already.
                if state_key == StateValues.WILDCARD:
                    required_state_map[state_type] = {state_key}
                    break
                # Otherwise, just add it to the set
                else:
                    if required_state_map.get(state_type) is None:
                        required_state_map[state_type] = {state_key}
                    else:
                        required_state_map[state_type].add(state_key)

        return RoomSyncConfig(timeline_limit, required_state_map)
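A small worked example of the union semantics above (invented maps; `StateValues.WILDCARD` is the `*` sentinel defined earlier):

    a = RoomSyncConfig(
        timeline_limit=10,
        required_state_map={"m.room.name": {""}, "m.room.member": {"@alice:hs"}},
    )
    b = RoomSyncConfig(
        timeline_limit=20,
        required_state_map={"m.room.member": {StateValues.WILDCARD}},
    )

    combined = a.combine_room_sync_config(b)
    # The union keeps the larger timeline limit...
    assert combined.timeline_limit == 20
    # ...and the `m.room.member` wildcard swallows the specific `@alice:hs` state_key.
    assert combined.required_state_map["m.room.member"] == {StateValues.WILDCARD}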
    def must_await_full_state(
        self,
        is_mine_id: Callable[[str], bool],
    ) -> bool:
        """
        Check if we're only requesting `required_state` which is completely
        satisfied even with partial state; if so, we don't need to
        `await_full_state` before we can return it.

        Also see `StateFilter.must_await_full_state(...)` for comparison

        Partially-stated rooms should have all state events except for remote membership
        events so if we require a remote membership event anywhere, then we need to
        return `True` (requires full state).

        Args:
            is_mine_id: a callable which confirms if a given state_key matches a mxid
                of a local user
        """
        wildcard_state_keys = self.required_state_map.get(StateValues.WILDCARD)
        # Requesting *all* state in the room so we have to wait
        if (
            wildcard_state_keys is not None
            and StateValues.WILDCARD in wildcard_state_keys
        ):
            return True

        # If the wildcards don't refer to remote user IDs, then we don't need to wait
        # for full state.
        if wildcard_state_keys is not None:
            for possible_user_id in wildcard_state_keys:
                if not possible_user_id[0].startswith(UserID.SIGIL):
                    # Not a user ID
                    continue

                localpart_hostname = possible_user_id.split(":", 1)
                if len(localpart_hostname) < 2:
                    # Not a user ID
                    continue

                if not is_mine_id(possible_user_id):
                    return True

        membership_state_keys = self.required_state_map.get(EventTypes.Member)
        # We aren't requesting any membership events at all so the partial state will
        # cover us.
        if membership_state_keys is None:
            return False

        # If we're requesting entirely local users, the partial state will cover us.
        for user_id in membership_state_keys:
            if user_id == StateValues.ME:
                continue
            # We're lazy-loading membership so we can just return the state we have.
            # Lazy-loading means we include membership for any event `sender` in the
            # timeline but since we had to auth those timeline events, we will have the
            # membership state for them (including from remote senders).
            elif user_id == StateValues.LAZY:
                continue
            elif user_id == StateValues.WILDCARD:
                return False
            elif not is_mine_id(user_id):
                return True

        # Local users only so the partial state will cover us.
        return False
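A sketch of how this is used; `is_mine_id` below is a trivial stand-in for the homeserver's real locality check, and the configs are invented:

    def is_mine_id(user_id: str) -> bool:
        # Stand-in for the homeserver's check of whether a user is local.
        return user_id.endswith(":myserver.example")

    # Only the requester's own and lazy-loaded memberships: partial state suffices.
    local_only = RoomSyncConfig(
        timeline_limit=10,
        required_state_map={EventTypes.Member: {StateValues.LAZY, StateValues.ME}},
    )
    assert not local_only.must_await_full_state(is_mine_id)

    # Requesting a remote user's membership forces a wait for full state.
    remote = RoomSyncConfig(
        timeline_limit=10,
        required_state_map={EventTypes.Member: {"@bob:elsewhere.example"}},
    )
    assert remote.must_await_full_state(is_mine_id)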
class HaveSentRoomFlag(Enum):
    """Flag for whether we have sent the room down a sliding sync connection.

    The valid state changes here are:
        NEVER -> LIVE
        LIVE -> PREVIOUSLY
        PREVIOUSLY -> LIVE
    """

    # The room has never been sent down (or we have forgotten we have sent it
    # down).
    NEVER = "never"

    # We have previously sent the room down, but there are updates that we
    # haven't sent down.
    PREVIOUSLY = "previously"

    # We have sent the room down and the client has received all updates.
    LIVE = "live"


T = TypeVar("T", str, RoomStreamToken, MultiWriterStreamToken)


@attr.s(auto_attribs=True, slots=True, frozen=True)
class HaveSentRoom(Generic[T]):
    """Whether we have sent the room data down a sliding sync connection.

    We are generic over the type of token used, e.g. `RoomStreamToken` or
    `MultiWriterStreamToken`.

    Attributes:
        status: Flag of whether we have or haven't sent down the room
        last_token: If the flag is `PREVIOUSLY` then this is non-null and
            contains the last stream token of the last updates we sent down
            the room, i.e. we still need to send everything since then to the
            client.
    """

    status: HaveSentRoomFlag
    last_token: Optional[T]

    @staticmethod
    def live() -> "HaveSentRoom[T]":
        return HaveSentRoom(HaveSentRoomFlag.LIVE, None)

    @staticmethod
    def previously(last_token: T) -> "HaveSentRoom[T]":
        """Constructor for `PREVIOUSLY` flag."""
        return HaveSentRoom(HaveSentRoomFlag.PREVIOUSLY, last_token)

    @staticmethod
    def never() -> "HaveSentRoom[T]":
        return HaveSentRoom(HaveSentRoomFlag.NEVER, None)


@attr.s(auto_attribs=True, slots=True, frozen=True)
class RoomStatusMap(Generic[T]):
    """For a given stream, e.g. events, records what we have or have not sent
    down for that stream in a given room."""

    # `room_id` -> `HaveSentRoom`
    _statuses: Mapping[str, HaveSentRoom[T]] = attr.Factory(dict)

    def have_sent_room(self, room_id: str) -> HaveSentRoom[T]:
        """Return whether we have previously sent the room down"""
        return self._statuses.get(room_id, HaveSentRoom.never())

    def get_mutable(self) -> "MutableRoomStatusMap[T]":
        """Get a mutable copy of this state."""
        return MutableRoomStatusMap(
            statuses=self._statuses,
        )

    def copy(self) -> "RoomStatusMap[T]":
        """Make a copy of the class. Useful for converting from a mutable to
        immutable version."""

        return RoomStatusMap(statuses=dict(self._statuses))

    def __len__(self) -> int:
        return len(self._statuses)


class MutableRoomStatusMap(RoomStatusMap[T]):
    """A mutable version of `RoomStatusMap`"""

    # We use a ChainMap here so that we can easily track what has been updated
    # and what hasn't. Note that when we persist the per connection state this
    # will get flattened to a normal dict (via calling `.copy()`)
    _statuses: typing.ChainMap[str, HaveSentRoom[T]]

    def __init__(
        self,
        statuses: Mapping[str, HaveSentRoom[T]],
    ) -> None:
        # ChainMap requires a mutable mapping, but we're not actually going to
        # mutate it.
        statuses = cast(MutableMapping, statuses)

        super().__init__(
            statuses=ChainMap({}, statuses),
        )

    def get_updates(self) -> Mapping[str, HaveSentRoom[T]]:
        """Return only the changes that were made"""
        return self._statuses.maps[0]

    def record_sent_rooms(self, room_ids: StrCollection) -> None:
        """Record that we have sent these rooms in the response"""
        for room_id in room_ids:
            current_status = self._statuses.get(room_id, HaveSentRoom.never())
            if current_status.status == HaveSentRoomFlag.LIVE:
                continue

            self._statuses[room_id] = HaveSentRoom.live()

    def record_unsent_rooms(self, room_ids: StrCollection, from_token: T) -> None:
        """Record that we have not sent these rooms in the response, but there
        have been updates.
        """
        # Whether we add/update the entries for unsent rooms depends on the
        # existing entry:
        #   - LIVE: We have previously sent down everything up to
        #     `last_room_token`, so we update the entry to be `PREVIOUSLY` with
        #     `last_room_token`.
        #   - PREVIOUSLY: We have previously sent down everything up to *a*
        #     given token, so we don't need to update the entry.
        #   - NEVER: We have never previously sent down the room, and we haven't
        #     sent anything down this time either so we leave it as NEVER.

        for room_id in room_ids:
            current_status = self._statuses.get(room_id, HaveSentRoom.never())
            if current_status.status != HaveSentRoomFlag.LIVE:
                continue

            self._statuses[room_id] = HaveSentRoom.previously(from_token)
|
@attr.s(auto_attribs=True, frozen=True)
class PerConnectionState:
    """The per-connection state. A snapshot of what we've sent down the
    connection before.

    Currently, we track whether we've sent down various aspects of a given room
    before.

    We use the `rooms` field to store the position in the events stream for each
    room that we've previously sent down to the client. On the next request
    that includes the room, we can then send only what's changed since that
    recorded position.

    The same goes for the `receipts` field, so we only need to send the new
    receipts since the last sync request.

    Attributes:
        rooms: The status of each room for the events stream.
        receipts: The status of each room for the receipts stream.
        room_configs: Map from room_id to the `RoomSyncConfig` of all
            rooms that we have previously sent down.
    """

    rooms: RoomStatusMap[RoomStreamToken] = attr.Factory(RoomStatusMap)
    receipts: RoomStatusMap[MultiWriterStreamToken] = attr.Factory(RoomStatusMap)

    room_configs: Mapping[str, RoomSyncConfig] = attr.Factory(dict)

    def get_mutable(self) -> "MutablePerConnectionState":
        """Get a mutable copy of this state."""
        room_configs = cast(MutableMapping[str, RoomSyncConfig], self.room_configs)

        return MutablePerConnectionState(
            rooms=self.rooms.get_mutable(),
            receipts=self.receipts.get_mutable(),
            room_configs=ChainMap({}, room_configs),
        )

    def copy(self) -> "PerConnectionState":
        return PerConnectionState(
            rooms=self.rooms.copy(),
            receipts=self.receipts.copy(),
            room_configs=dict(self.room_configs),
        )

    def __len__(self) -> int:
        return len(self.rooms) + len(self.receipts) + len(self.room_configs)


@attr.s(auto_attribs=True)
class MutablePerConnectionState(PerConnectionState):
    """A mutable version of `PerConnectionState`"""

    rooms: MutableRoomStatusMap[RoomStreamToken]
    receipts: MutableRoomStatusMap[MultiWriterStreamToken]

    room_configs: typing.ChainMap[str, RoomSyncConfig]

    def has_updates(self) -> bool:
        return (
            bool(self.rooms.get_updates())
            or bool(self.receipts.get_updates())
            or bool(self.get_room_config_updates())
        )

    def get_room_config_updates(self) -> Mapping[str, RoomSyncConfig]:
        """Get updates to the room sync config"""
        return self.room_configs.maps[0]
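
Taken together, the classes above let the sliding sync handler cheaply diff what it has told a client between requests: writes go into the top layer of a ChainMap, so the delta is always available for free. A minimal usage sketch follows (illustrative only, not part of the module; it leans on the fact that the `T` TypeVar explicitly admits plain `str` in place of a real stream token):

statuses: RoomStatusMap[str] = RoomStatusMap()
mutable = statuses.get_mutable()

mutable.record_sent_rooms(["!a:hs"])           # NEVER -> LIVE
mutable.record_unsent_rooms(["!a:hs"], "s42")  # LIVE -> PREVIOUSLY("s42")
mutable.record_unsent_rooms(["!b:hs"], "s42")  # never sent, so it stays NEVER

# Only the delta is tracked, thanks to the ChainMap layering.
assert set(mutable.get_updates()) == {"!a:hs"}
assert mutable.have_sent_room("!a:hs").last_token == "s42"
assert mutable.have_sent_room("!b:hs").status == HaveSentRoomFlag.NEVER

# Flatten back to an immutable snapshot for persistence.
persisted = mutable.copy()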
@@ -27,7 +27,9 @@ from synapse.types import ISynapseReactor
 try:
     from twisted.internet.epollreactor import EPollReactor as Reactor
 except ImportError:
-    from twisted.internet.pollreactor import PollReactor as Reactor  # type: ignore[assignment]
+    from twisted.internet.pollreactor import (  # type: ignore[assignment]
+        PollReactor as Reactor,
+    )
 from twisted.internet.main import installReactor
@@ -550,7 +550,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
             access_token="mockAccessToken",
         )

-        self.assertEqual(channel.code, HTTPStatus.NOT_IMPLEMENTED, channel.json_body)
+        self.assertEqual(channel.code, HTTPStatus.UNAUTHORIZED, channel.json_body)

     def expect_unauthorized(
         self, method: str, path: str, content: Union[bytes, str, JsonDict] = ""
@@ -757,6 +757,54 @@ class SpaceSummaryTestCase(unittest.HomeserverTestCase):
         )
         self._assert_hierarchy(result, expected)

+    def test_fed_root(self) -> None:
+        """
+        Test if requested room is available over federation.
+        """
+        fed_hostname = self.hs.hostname + "2"
+        fed_space = "#fed_space:" + fed_hostname
+        fed_subroom = "#fed_sub_room:" + fed_hostname
+
+        requested_room_entry = _RoomEntry(
+            fed_space,
+            {
+                "room_id": fed_space,
+                "world_readable": True,
+                "room_type": RoomTypes.SPACE,
+            },
+            [
+                {
+                    "type": EventTypes.SpaceChild,
+                    "room_id": fed_space,
+                    "state_key": fed_subroom,
+                    "content": {"via": [fed_hostname]},
+                }
+            ],
+        )
+        child_room = {
+            "room_id": fed_subroom,
+            "world_readable": True,
+        }
+
+        async def summarize_remote_room_hierarchy(
+            _self: Any, room: Any, suggested_only: bool
+        ) -> Tuple[Optional[_RoomEntry], Dict[str, JsonDict], Set[str]]:
+            return requested_room_entry, {fed_subroom: child_room}, set()
+
+        expected = [
+            (fed_space, [fed_subroom]),
+            (fed_subroom, ()),
+        ]
+
+        with mock.patch(
+            "synapse.handlers.room_summary.RoomSummaryHandler._summarize_remote_room_hierarchy",
+            new=summarize_remote_room_hierarchy,
+        ):
+            result = self.get_success(
+                self.handler.get_room_hierarchy(create_requester(self.user), fed_space)
+            )
+        self._assert_hierarchy(result, expected)
+
     def test_fed_filtering(self) -> None:
         """
         Rooms returned over federation should be properly filtered to only include
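
The new test avoids the network entirely by swapping the handler's remote-summary coroutine for a local stub via `mock.patch(..., new=...)`. A self-contained sketch of that pattern (the `Widget`/`fetch` names are made up for illustration):

import asyncio
from unittest import mock


class Widget:
    async def fetch(self) -> str:
        raise RuntimeError("would hit the network")


async def fake_fetch(_self: Widget) -> str:
    # Patched onto the class, so it receives the instance explicitly.
    return "stubbed"


with mock.patch(f"{__name__}.Widget.fetch", new=fake_fetch):
    assert asyncio.run(Widget().fetch()) == "stubbed"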
@@ -18,7 +18,6 @@
 #
 #
 import logging
-from copy import deepcopy
 from typing import Dict, List, Optional
 from unittest.mock import patch

@@ -47,7 +46,7 @@ from synapse.rest.client import knock, login, room
 from synapse.server import HomeServer
 from synapse.storage.util.id_generators import MultiWriterIdGenerator
 from synapse.types import JsonDict, StreamToken, UserID
-from synapse.types.handlers import SlidingSyncConfig
+from synapse.types.handlers.sliding_sync import SlidingSyncConfig
 from synapse.util import Clock

 from tests.replication._base import BaseMultiWorkerStreamTestCase
@@ -566,23 +565,11 @@ class RoomSyncConfigTestCase(TestCase):
         """
         Combine A into B and B into A to make sure we get the same result.
         """
-        # Since we're mutating these in place, make a copy for each of our trials
-        room_sync_config_a = deepcopy(a)
-        room_sync_config_b = deepcopy(b)
-
-        # Combine B into A
-        room_sync_config_a.combine_room_sync_config(room_sync_config_b)
-
-        self._assert_room_config_equal(room_sync_config_a, expected, "B into A")
-
-        # Since we're mutating these in place, make a copy for each of our trials
-        room_sync_config_a = deepcopy(a)
-        room_sync_config_b = deepcopy(b)
-
-        # Combine A into B
-        room_sync_config_b.combine_room_sync_config(room_sync_config_a)
-
-        self._assert_room_config_equal(room_sync_config_b, expected, "A into B")
+        combined_config = a.combine_room_sync_config(b)
+        self._assert_room_config_equal(combined_config, expected, "B into A")
+
+        combined_config = b.combine_room_sync_config(a)
+        self._assert_room_config_equal(combined_config, expected, "A into B")


 class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
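
This simplification reflects that `combine_room_sync_config` now returns the combined config instead of mutating its receiver, so the `deepcopy` scaffolding is no longer needed. A generic sketch of that immutable-combine style (the `Config` class below is illustrative, not Synapse's `RoomSyncConfig`):

import attr


@attr.s(auto_attribs=True, frozen=True)
class Config:
    timeline_limit: int

    def combine(self, other: "Config") -> "Config":
        # frozen=True forbids in-place mutation, so return a new instance.
        return Config(timeline_limit=max(self.timeline_limit, other.timeline_limit))


# Combining is order-independent, which is what the test asserts.
assert Config(10).combine(Config(20)) == Config(20).combine(Config(10))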
@@ -620,7 +607,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         now_token = self.event_sources.get_current_token()

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=now_token,
                 to_token=now_token,
@@ -647,7 +634,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         after_room_token = self.event_sources.get_current_token()

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room_token,
                 to_token=after_room_token,
@@ -682,7 +669,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         after_room_token = self.event_sources.get_current_token()

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room_token,
                 to_token=after_room_token,
@@ -756,7 +743,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         after_room_token = self.event_sources.get_current_token()

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room_token,
                 to_token=after_room_token,
@@ -828,7 +815,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         after_kick_token = self.event_sources.get_current_token()

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_kick_token,
                 to_token=after_kick_token,
@@ -921,7 +908,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         self.assertEqual(channel.code, 200, channel.result)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room_forgets,
                 to_token=before_room_forgets,
@@ -951,7 +938,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         after_room2_token = self.event_sources.get_current_token()

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room1_token,
                 to_token=after_room2_token,
@@ -1001,7 +988,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         self.helper.join(room_id2, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room1_token,
                 to_token=after_room1_token,
@@ -1041,7 +1028,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         leave_response = self.helper.leave(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room1_token,
                 to_token=after_room1_token,
@@ -1088,7 +1075,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         leave_response = self.helper.leave(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room1_token,
                 to_token=after_room1_token,
@@ -1152,7 +1139,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         leave_response = self.helper.leave(kick_room_id, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_kick_token,
                 to_token=after_kick_token,
@@ -1208,7 +1195,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         leave_response2 = self.helper.leave(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room1_token,
                 to_token=after_room1_token,
@@ -1263,7 +1250,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         join_response2 = self.helper.join(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room1_token,
                 to_token=after_room1_token,
@@ -1322,7 +1309,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         self.helper.join(room_id2, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=None,
                 to_token=after_room1_token,
@@ -1404,7 +1391,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         self.helper.join(room_id4, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=from_token,
                 to_token=to_token,
@@ -1477,7 +1464,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         self.helper.leave(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room1_token,
                 to_token=after_room1_token,
@@ -1520,7 +1507,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         self.helper.join(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room1_token,
                 to_token=after_room1_token,
@@ -1570,7 +1557,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         leave_response3 = self.helper.leave(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room1_token,
                 to_token=after_room1_token,
@@ -1632,7 +1619,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         leave_response3 = self.helper.leave(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room1_token,
                 to_token=after_room1_token,
@@ -1691,7 +1678,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         leave_response = self.helper.leave(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room1_token,
                 to_token=after_room1_token,
@@ -1765,7 +1752,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         )

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room1_token,
                 to_token=after_room1_token,
@@ -1830,7 +1817,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         after_change1_token = self.event_sources.get_current_token()

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room1_token,
                 to_token=after_change1_token,
@@ -1902,7 +1889,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         )

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room1_token,
                 to_token=after_room1_token,
@@ -1984,7 +1971,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         self.helper.leave(room_id1, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room1_token,
                 to_token=after_room1_token,
@@ -2052,7 +2039,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         )

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room1_token,
                 to_token=after_room1_token,
@@ -2088,7 +2075,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         after_more_changes_token = self.event_sources.get_current_token()

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=after_room1_token,
                 to_token=after_more_changes_token,
@@ -2153,7 +2140,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         after_room1_token = self.event_sources.get_current_token()

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room1_token,
                 to_token=after_room1_token,
@@ -2229,7 +2216,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
         self.helper.leave(room_id3, user1_id, tok=user1_tok)

         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_room3_token,
                 to_token=after_room3_token,
@@ -2365,7 +2352,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):

         # The function under test
         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_reset_token,
                 to_token=after_reset_token,
@@ -2579,7 +2566,7 @@ class GetRoomMembershipForUserAtToTokenShardTestCase(BaseMultiWorkerStreamTestCase):

         # The function under test
         room_id_results = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 UserID.from_string(user1_id),
                 from_token=before_stuck_activity_token,
                 to_token=stuck_activity_token,
@@ -2669,14 +2656,14 @@ class FilterRoomsRelevantForSyncTestCase(HomeserverTestCase):
         Get the rooms the user should be syncing with
         """
         room_membership_for_user_map = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 user=user,
                 from_token=from_token,
                 to_token=to_token,
             )
         )
         filtered_sync_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms_relevant_for_sync(
+            self.sliding_sync_handler.room_lists.filter_rooms_relevant_for_sync(
                 user=user,
                 room_membership_for_user_map=room_membership_for_user_map,
             )
@@ -3030,14 +3017,14 @@ class FilterRoomsTestCase(HomeserverTestCase):
         Get the rooms the user should be syncing with
         """
         room_membership_for_user_map = self.get_success(
-            self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
+            self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
                 user=user,
                 from_token=from_token,
                 to_token=to_token,
             )
         )
         filtered_sync_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms_relevant_for_sync(
+            self.sliding_sync_handler.room_lists.filter_rooms_relevant_for_sync(
                 user=user,
                 room_membership_for_user_map=room_membership_for_user_map,
             )
@@ -3196,7 +3183,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_dm=True`
         truthy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3210,7 +3197,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_dm=False`
         falsy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3252,7 +3239,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=True`
         truthy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3266,7 +3253,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=False`
         falsy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3316,7 +3303,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=True`
         truthy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3330,7 +3317,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=False`
         falsy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3390,7 +3377,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=True`
         truthy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3404,7 +3391,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=False`
         falsy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3463,7 +3450,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=True`
         truthy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3484,7 +3471,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=False`
         falsy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3533,7 +3520,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=True`
         truthy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3549,7 +3536,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=False`
         falsy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3619,7 +3606,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=True`
         truthy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3637,7 +3624,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=False`
         falsy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3700,7 +3687,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=True`
         truthy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3716,7 +3703,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_encrypted=False`
         falsy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3760,7 +3747,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_invite=True`
         truthy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3774,7 +3761,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try with `is_invite=False`
         falsy_filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3827,7 +3814,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only normal rooms
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -3839,7 +3826,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only spaces
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -3851,7 +3838,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding normal rooms and spaces
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3865,7 +3852,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding an arbitrary room type
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3918,7 +3905,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding *NOT* normal rooms
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(not_room_types=[None]),
@@ -3930,7 +3917,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding *NOT* spaces
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3944,7 +3931,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding *NOT* normal rooms or spaces
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3959,7 +3946,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
         # Test how it behaves when we have both `room_types` and `not_room_types`.
         # `not_room_types` should win.
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3975,7 +3962,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
         # Test how it behaves when we have both `room_types` and `not_room_types`.
         # `not_room_types` should win.
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(
@@ -4025,7 +4012,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only normal rooms
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4037,7 +4024,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only spaces
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -4094,7 +4081,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only normal rooms
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4106,7 +4093,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only spaces
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -4152,7 +4139,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only normal rooms
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4166,7 +4153,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only spaces
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -4228,7 +4215,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only normal rooms
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4242,7 +4229,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only spaces
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -4305,7 +4292,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only normal rooms
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4319,7 +4306,7 @@ class FilterRoomsTestCase(HomeserverTestCase):

         # Try finding only spaces
         filtered_room_map = self.get_success(
-            self.sliding_sync_handler.filter_rooms(
+            self.sliding_sync_handler.room_lists.filter_rooms(
                 UserID.from_string(user1_id),
                 sync_room_map,
                 SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@ -4366,14 +4353,14 @@ class SortRoomsTestCase(HomeserverTestCase):
|
|||||||
Get the rooms the user should be syncing with
|
Get the rooms the user should be syncing with
|
||||||
"""
|
"""
|
||||||
room_membership_for_user_map = self.get_success(
|
room_membership_for_user_map = self.get_success(
|
||||||
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
|
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
|
||||||
user=user,
|
user=user,
|
||||||
from_token=from_token,
|
from_token=from_token,
|
||||||
to_token=to_token,
|
to_token=to_token,
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
filtered_sync_room_map = self.get_success(
|
filtered_sync_room_map = self.get_success(
|
||||||
self.sliding_sync_handler.filter_rooms_relevant_for_sync(
|
self.sliding_sync_handler.room_lists.filter_rooms_relevant_for_sync(
|
||||||
user=user,
|
user=user,
|
||||||
room_membership_for_user_map=room_membership_for_user_map,
|
room_membership_for_user_map=room_membership_for_user_map,
|
||||||
)
|
)
|
||||||
@ -4408,7 +4395,7 @@ class SortRoomsTestCase(HomeserverTestCase):
|
|||||||
|
|
||||||
# Sort the rooms (what we're testing)
|
# Sort the rooms (what we're testing)
|
||||||
sorted_sync_rooms = self.get_success(
|
sorted_sync_rooms = self.get_success(
|
||||||
self.sliding_sync_handler.sort_rooms(
|
self.sliding_sync_handler.room_lists.sort_rooms(
|
||||||
sync_room_map=sync_room_map,
|
sync_room_map=sync_room_map,
|
||||||
to_token=after_rooms_token,
|
to_token=after_rooms_token,
|
||||||
)
|
)
|
||||||
@ -4489,7 +4476,7 @@ class SortRoomsTestCase(HomeserverTestCase):
|
|||||||
|
|
||||||
# Sort the rooms (what we're testing)
|
# Sort the rooms (what we're testing)
|
||||||
sorted_sync_rooms = self.get_success(
|
sorted_sync_rooms = self.get_success(
|
||||||
self.sliding_sync_handler.sort_rooms(
|
self.sliding_sync_handler.room_lists.sort_rooms(
|
||||||
sync_room_map=sync_room_map,
|
sync_room_map=sync_room_map,
|
||||||
to_token=after_rooms_token,
|
to_token=after_rooms_token,
|
||||||
)
|
)
|
||||||
@ -4553,7 +4540,7 @@ class SortRoomsTestCase(HomeserverTestCase):
|
|||||||
|
|
||||||
# Sort the rooms (what we're testing)
|
# Sort the rooms (what we're testing)
|
||||||
sorted_sync_rooms = self.get_success(
|
sorted_sync_rooms = self.get_success(
|
||||||
self.sliding_sync_handler.sort_rooms(
|
self.sliding_sync_handler.room_lists.sort_rooms(
|
||||||
sync_room_map=sync_room_map,
|
sync_room_map=sync_room_map,
|
||||||
to_token=after_rooms_token,
|
to_token=after_rooms_token,
|
||||||
)
|
)
|
||||||
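
Note: the hunks above are mechanical call-site updates: the sliding-sync room-list computation (filtering, membership lookup, sorting) moved from the SlidingSyncHandler itself onto its room_lists attribute. A minimal, self-contained sketch of the shape of that refactor; only the handler.room_lists.filter_rooms spelling is taken from the diff, the toy classes and filter logic are invented for illustration:

    # Toy sketch of the refactor shape, not Synapse's actual classes.
    from typing import Dict, List, Optional

    class RoomLists:
        """Room-list logic extracted out of the handler."""

        def filter_rooms(
            self, rooms: Dict[str, Optional[str]], room_types: List[Optional[str]]
        ) -> Dict[str, Optional[str]]:
            # Keep only rooms whose type is in the requested set.
            wanted = set(room_types)
            return {rid: rtype for rid, rtype in rooms.items() if rtype in wanted}

    class SlidingSyncHandler:
        def __init__(self) -> None:
            # Callers now go through handler.room_lists.* instead of handler.*
            self.room_lists = RoomLists()

    handler = SlidingSyncHandler()
    rooms = {"!space:test": "m.space", "!chat:test": None}
    print(handler.room_lists.filter_rooms(rooms, ["m.space"]))
    # -> {'!space:test': 'm.space'}
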
@@ -17,6 +17,7 @@
 # [This file includes modifications made by New Vector Limited]
 #
 #
+import io
 from typing import Any, Dict, Generator
 from unittest.mock import ANY, Mock, create_autospec

@@ -32,7 +33,9 @@ from twisted.web.http import HTTPChannel
 from twisted.web.http_headers import Headers

 from synapse.api.errors import HttpResponseException, RequestSendFailed
+from synapse.api.ratelimiting import Ratelimiter
 from synapse.config._base import ConfigError
+from synapse.config.ratelimiting import RatelimitSettings
 from synapse.http.matrixfederationclient import (
     ByteParser,
     MatrixFederationHttpClient,

@@ -337,6 +340,81 @@ class FederationClientTests(HomeserverTestCase):
         r = self.successResultOf(d)
         self.assertEqual(r.code, 200)

+    def test_authed_media_redirect_response(self) -> None:
+        """
+        Validate that, when following a `Location` redirect, the
+        maximum size is _not_ set to the initial response `Content-Length` and
+        the media file can be downloaded.
+        """
+        limiter = Ratelimiter(
+            store=self.hs.get_datastores().main,
+            clock=self.clock,
+            cfg=RatelimitSettings(key="", per_second=0.17, burst_count=1048576),
+        )
+
+        output_stream = io.BytesIO()
+
+        d = defer.ensureDeferred(
+            self.cl.federation_get_file(
+                "testserv:8008", "path", output_stream, limiter, "127.0.0.1", 10000
+            )
+        )
+
+        self.pump()
+
+        clients = self.reactor.tcpClients
+        self.assertEqual(len(clients), 1)
+        (host, port, factory, _timeout, _bindAddress) = clients[0]
+        self.assertEqual(host, "1.2.3.4")
+        self.assertEqual(port, 8008)
+
+        # complete the connection and wire it up to a fake transport
+        protocol = factory.buildProtocol(None)
+        transport = StringTransport()
+        protocol.makeConnection(transport)
+
+        # Deferred does not have a result
+        self.assertNoResult(d)
+
+        redirect_data = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nLocation: http://testserv:8008/ab/c1/2345.txt\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a--\r\n\r\n"
+        protocol.dataReceived(
+            b"HTTP/1.1 200 OK\r\n"
+            b"Server: Fake\r\n"
+            b"Content-Length: %i\r\n"
+            b"Content-Type: multipart/mixed; boundary=6067d4698f8d40a0a794ea7d7379d53a\r\n\r\n"
+            % (len(redirect_data))
+        )
+        protocol.dataReceived(redirect_data)
+
+        # Still no result, not followed the redirect yet
+        self.assertNoResult(d)
+
+        # Now send the response returned by the server at `Location`
+        clients = self.reactor.tcpClients
+        (host, port, factory, _timeout, _bindAddress) = clients[1]
+        self.assertEqual(host, "1.2.3.4")
+        self.assertEqual(port, 8008)
+        protocol = factory.buildProtocol(None)
+        transport = StringTransport()
+        protocol.makeConnection(transport)
+
+        # make sure the length is longer than the initial response
+        data = b"Hello world!" * 30
+        protocol.dataReceived(
+            b"HTTP/1.1 200 OK\r\n"
+            b"Server: Fake\r\n"
+            b"Content-Length: %i\r\n"
+            b"Content-Type: text/plain\r\n"
+            b"\r\n"
+            b"%s\r\n"
+            b"\r\n" % (len(data), data)
+        )
+
+        # We should get a successful response
+        length, _, _ = self.successResultOf(d)
+        self.assertEqual(length, len(data))
+        self.assertEqual(output_stream.getvalue(), data)
+
     @parameterized.expand(["get_json", "post_json", "delete_json", "put_json"])
     def test_timeout_reading_body(self, method_name: str) -> None:
         """
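
Note: this new test pins down the authenticated-media fix listed in the changelog (#17543): when the federation media client follows the Location redirect out of the initial multipart envelope, the size cap must come from the redirected response, not the small first hop. A toy model of that invariant; the Response class and download helper below are invented for illustration and are not Synapse APIs:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Response:
        content_length: int
        body: bytes
        redirect_to: Optional["Response"] = None

    def download(resp: Response, hard_limit: int) -> bytes:
        # Follow redirects; the cap checked at the end is the hard limit, not
        # the first hop's Content-Length (carrying that forward was the bug).
        while resp.redirect_to is not None:
            resp = resp.redirect_to
        if resp.content_length > hard_limit:
            raise ValueError("response too large")
        return resp.body

    file_resp = Response(content_length=360, body=b"Hello world!" * 30)
    envelope = Response(content_length=200, body=b"{}", redirect_to=file_resp)
    assert download(envelope, hard_limit=10000) == b"Hello world!" * 30
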
@@ -782,3 +782,135 @@ class SlidingSyncReceiptsExtensionTestCase(SlidingSyncBase):
             {user2_id},
             exact=True,
         )
+
+    def test_return_own_read_receipts(self) -> None:
+        """Test that we always send the user's own read receipts in initial
+        rooms, even if the receipts don't match events in the timeline..
+        """
+
+        user1_id = self.register_user("user1", "pass")
+        user1_tok = self.login(user1_id, "pass")
+        user2_id = self.register_user("user2", "pass")
+        user2_tok = self.login(user2_id, "pass")
+
+        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
+        self.helper.join(room_id1, user1_id, tok=user1_tok)
+
+        # Send a message and read receipts into room1
+        event_response = self.helper.send(room_id1, body="new event", tok=user2_tok)
+        room1_event_id = event_response["event_id"]
+
+        self.helper.send_read_receipt(room_id1, room1_event_id, tok=user1_tok)
+        self.helper.send_read_receipt(room_id1, room1_event_id, tok=user2_tok)
+
+        # Now send a message so the above message is not in the timeline.
+        self.helper.send(room_id1, body="new event", tok=user2_tok)
+
+        # Make a SS request for only the latest message.
+        sync_body = {
+            "lists": {
+                "main": {
+                    "ranges": [[0, 0]],
+                    "required_state": [],
+                    "timeline_limit": 1,
+                }
+            },
+            "extensions": {
+                "receipts": {
+                    "enabled": True,
+                }
+            },
+        }
+        response_body, _ = self.do_sync(sync_body, tok=user1_tok)
+
+        # We should get our own receipt in room1, even though its not in the
+        # timeline limit.
+        self.assertIncludes(
+            response_body["extensions"]["receipts"].get("rooms").keys(),
+            {room_id1},
+            exact=True,
+        )
+
+        # We should only see our read receipt, not the other user's.
+        receipt = response_body["extensions"]["receipts"]["rooms"][room_id1]
+        self.assertIncludes(
+            receipt["content"][room1_event_id][ReceiptTypes.READ].keys(),
+            {user1_id},
+            exact=True,
+        )
+
+    def test_read_receipts_expanded_timeline(self) -> None:
+        """Test that we get read receipts when we expand the timeline limit (`unstable_expanded_timeline`)."""
+
+        user1_id = self.register_user("user1", "pass")
+        user1_tok = self.login(user1_id, "pass")
+        user2_id = self.register_user("user2", "pass")
+        user2_tok = self.login(user2_id, "pass")
+
+        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
+        self.helper.join(room_id1, user1_id, tok=user1_tok)
+
+        # Send a message and read receipt into room1
+        event_response = self.helper.send(room_id1, body="new event", tok=user2_tok)
+        room1_event_id = event_response["event_id"]
+
+        self.helper.send_read_receipt(room_id1, room1_event_id, tok=user2_tok)
+
+        # Now send a message so the above message is not in the timeline.
+        self.helper.send(room_id1, body="new event", tok=user2_tok)
+
+        # Make a SS request for only the latest message.
+        sync_body = {
+            "lists": {
+                "main": {
+                    "ranges": [[0, 0]],
+                    "required_state": [],
+                    "timeline_limit": 1,
+                }
+            },
+            "extensions": {
+                "receipts": {
+                    "enabled": True,
+                }
+            },
+        }
+        response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
+
+        # We shouldn't see user2 read receipt, as its not in the timeline
+        self.assertIncludes(
+            response_body["extensions"]["receipts"].get("rooms").keys(),
+            set(),
+            exact=True,
+        )
+
+        # Now do another request with a room subscription with an increased timeline limit
+        sync_body["room_subscriptions"] = {
+            room_id1: {
+                "required_state": [],
+                "timeline_limit": 2,
+            }
+        }
+
+        response_body, from_token = self.do_sync(
+            sync_body, since=from_token, tok=user1_tok
+        )
+
+        # Assert that we did actually get an expanded timeline
+        room_response = response_body["rooms"][room_id1]
+        self.assertNotIn("initial", room_response)
+        self.assertEqual(room_response["unstable_expanded_timeline"], True)
+
+        # We should now see user2 read receipt, as its in the expanded timeline
+        self.assertIncludes(
+            response_body["extensions"]["receipts"].get("rooms").keys(),
+            {room_id1},
+            exact=True,
+        )
+
+        # We should only see our read receipt, not the other user's.
+        receipt = response_body["extensions"]["receipts"]["rooms"][room_id1]
+        self.assertIncludes(
+            receipt["content"][room1_event_id][ReceiptTypes.READ].keys(),
+            {user2_id},
+            exact=True,
+        )
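
Note: taken together, the two tests above pin down a simple selection rule for the receipts extension: receipts on events inside the (possibly expanded) timeline are returned for everyone, and the syncing user's own receipts are returned even when they point outside the timeline. A standalone sketch of that rule; the Receipt type and select_receipts helper are invented for illustration:

    from dataclasses import dataclass
    from typing import List, Set

    @dataclass
    class Receipt:
        user_id: str
        event_id: str

    def select_receipts(
        receipts: List[Receipt], timeline_event_ids: Set[str], syncing_user: str
    ) -> List[Receipt]:
        # Receipts on timeline events are sent for everyone; the syncing
        # user's own receipts are sent even when they fall outside it.
        return [
            r
            for r in receipts
            if r.event_id in timeline_event_ids or r.user_id == syncing_user
        ]

    receipts = [Receipt("@user1:test", "$old"), Receipt("@user2:test", "$old")]
    print(select_receipts(receipts, {"$latest"}, "@user1:test"))
    # [Receipt(user_id='@user1:test', event_id='$old')]
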
@@ -191,8 +191,14 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
         }
         _, from_token = self.do_sync(sync_body, tok=user1_tok)

-        # Reset the in-memory cache
-        self.hs.get_sliding_sync_handler().connection_store._connections.clear()
+        # Reset the positions
+        self.get_success(
+            self.store.db_pool.simple_delete(
+                table="sliding_sync_connections",
+                keyvalues={"user_id": user1_id},
+                desc="clear_sliding_sync_connections_cache",
+            )
+        )

         # Make the Sliding Sync request
         channel = self.make_request(
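
Note: the test changes because per-connection sliding sync positions are now persisted in a sliding_sync_connections table rather than a process-local dict, so "forget this user's connections" becomes a row delete. A small sqlite sketch of just that shape; only the table name comes from the diff, the columns here are placeholders and the real schema is richer:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sliding_sync_connections (user_id TEXT, conn_pos TEXT)")
    db.execute(
        "INSERT INTO sliding_sync_connections VALUES (?, ?)", ("@user1:test", "pos1")
    )

    # The moral equivalent of the simple_delete() call in the diff above:
    db.execute(
        "DELETE FROM sliding_sync_connections WHERE user_id = ?", ("@user1:test",)
    )
    print(db.execute("SELECT COUNT(*) FROM sliding_sync_connections").fetchone()[0])
    # -> 0
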
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 from http import HTTPStatus
+from unittest.mock import AsyncMock

 from synapse.rest.client import auth_issuer

@@ -50,10 +51,27 @@ class AuthIssuerTestCase(HomeserverTestCase):
         }
     )
     def test_returns_issuer_when_oidc_enabled(self) -> None:
-        # Make an unauthenticated request for the discovery info.
+        # Patch the HTTP client to return the issuer metadata
+        req_mock = AsyncMock(return_value={"issuer": ISSUER})
+        self.hs.get_proxied_http_client().get_json = req_mock  # type: ignore[method-assign]
+
         channel = self.make_request(
             "GET",
             "/_matrix/client/unstable/org.matrix.msc2965/auth_issuer",
         )

         self.assertEqual(channel.code, HTTPStatus.OK)
         self.assertEqual(channel.json_body, {"issuer": ISSUER})
+
+        req_mock.assert_called_with("https://account.example.com/.well-known/openid-configuration")
+        req_mock.reset_mock()
+
+        # Second call it should use the cached value
+        channel = self.make_request(
+            "GET",
+            "/_matrix/client/unstable/org.matrix.msc2965/auth_issuer",
+        )
+
+        self.assertEqual(channel.code, HTTPStatus.OK)
+        self.assertEqual(channel.json_body, {"issuer": ISSUER})
+        req_mock.assert_not_called()
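
Note: the new assertions pin down fetch-once-then-cache behaviour for the OIDC discovery document: the first request hits the network, the second is served from the cache. A minimal sketch of that pattern; fetch_issuer_metadata and the module-level cache are invented for illustration, not Synapse internals:

    import asyncio
    from typing import Any, Dict, Optional

    _cached: Optional[Dict[str, Any]] = None
    _calls = 0

    async def get_json(url: str) -> Dict[str, Any]:
        # Stand-in for the HTTP client that the test replaces with AsyncMock.
        global _calls
        _calls += 1
        return {"issuer": "https://account.example.com"}

    async def fetch_issuer_metadata() -> Dict[str, Any]:
        global _cached
        if _cached is None:
            _cached = await get_json(
                "https://account.example.com/.well-known/openid-configuration"
            )
        return _cached

    async def main() -> None:
        await fetch_issuer_metadata()
        await fetch_issuer_metadata()
        assert _calls == 1  # the second call is served from the cache

    asyncio.run(main())
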
@@ -315,9 +315,7 @@ class SigningKeyUploadServletTestCase(unittest.HomeserverTestCase):
                 "master_key": master_key2,
             },
         )
-        self.assertEqual(
-            channel.code, HTTPStatus.NOT_IMPLEMENTED, channel.json_body
-        )
+        self.assertEqual(channel.code, HTTPStatus.UNAUTHORIZED, channel.json_body)

         # Pretend that MAS did UIA and allowed us to replace the master key.
         channel = self.make_request(

@@ -349,9 +347,7 @@ class SigningKeyUploadServletTestCase(unittest.HomeserverTestCase):
                 "master_key": master_key3,
             },
         )
-        self.assertEqual(
-            channel.code, HTTPStatus.NOT_IMPLEMENTED, channel.json_body
-        )
+        self.assertEqual(channel.code, HTTPStatus.UNAUTHORIZED, channel.json_body)

         # Pretend that MAS did UIA and allowed us to replace the master key.
         channel = self.make_request(

@@ -376,6 +372,4 @@ class SigningKeyUploadServletTestCase(unittest.HomeserverTestCase):
                 "master_key": master_key3,
             },
         )
-        self.assertEqual(
-            channel.code, HTTPStatus.NOT_IMPLEMENTED, channel.json_body
-        )
+        self.assertEqual(channel.code, HTTPStatus.UNAUTHORIZED, channel.json_body)
@@ -17,6 +17,8 @@
 # [This file includes modifications made by New Vector Limited]
 #
 #
+from unittest.mock import AsyncMock
+
 from twisted.web.resource import Resource

 from synapse.rest.well_known import well_known_resource

@@ -112,7 +114,6 @@ class WellKnownTests(unittest.HomeserverTestCase):
             "msc3861": {
                 "enabled": True,
                 "issuer": "https://issuer",
-                "account_management_url": "https://my-account.issuer",
                 "client_id": "id",
                 "client_auth_method": "client_secret_post",
                 "client_secret": "secret",

@@ -122,6 +123,11 @@ class WellKnownTests(unittest.HomeserverTestCase):
         }
     )
     def test_client_well_known_msc3861_oauth_delegation(self) -> None:
+        # Patch the HTTP client to return the issuer metadata
+        req_mock = AsyncMock(return_value={"issuer": "https://issuer", "account_management_uri": "https://my-account.issuer"})
+        self.hs.get_proxied_http_client().get_json = req_mock  # type: ignore[method-assign]
+
+        for _ in range(2):
             channel = self.make_request(
                 "GET", "/.well-known/matrix/client", shorthand=False
             )

@@ -137,3 +143,6 @@ class WellKnownTests(unittest.HomeserverTestCase):
                 },
             },
         )
+
+        # It should have been called exactly once, because it gets cached
+        req_mock.assert_called_once_with("https://issuer/.well-known/openid-configuration")