Compare commits

...

68 Commits

Author SHA1 Message Date
Tulir Asokan
bd6ff6176d Merge tag 'meow-patchset-v1.116.0rc1' 2024-09-25 13:34:24 +03:00
Tulir Asokan
9572748b56 Don't apply alias rules to admins 2024-09-25 13:33:02 +03:00
Tulir Asokan
245d82f44e Allow pdf inline 2024-09-25 13:33:02 +03:00
Tulir Asokan
ca4e3165b3 Remove unnecessary pusher URL validation 2024-09-25 13:33:02 +03:00
Tulir Asokan
0583a5967b Allow specific users to use timestamp massaging without being appservices 2024-09-25 13:33:01 +03:00
Tulir Asokan
e48d921119 Allow custom content in read receipts 2024-09-25 13:31:49 +03:00
Tulir Asokan
9bff00c396 Allow unhiding events that the C-S API filters away by default 2024-09-25 13:31:49 +03:00
Tulir Asokan
5994cbd4a0 Allow bypassing unnecessary validation in C-S API 2024-09-25 13:31:49 +03:00
Tulir Asokan
abf856ae6f Set immutable cache-control header for media downloads 2024-09-25 13:31:48 +03:00
Tulir Asokan
d50743e1d8 Thumbnail webp images as webp to avoid losing transparency 2024-09-25 13:31:13 +03:00
Tulir Asokan
4db99ed019 Allow registering invalid user IDs with admin API 2024-09-25 13:31:13 +03:00
Tulir Asokan
75620e118e Allow specifying room ID when creating room 2024-09-25 13:31:13 +03:00
Tulir Asokan
d3229d1b5e Fix default power level for room creator 2024-09-25 13:31:13 +03:00
Tulir Asokan
e21fcdca4f Add meow readme and config extension 2024-09-25 13:31:13 +03:00
Tulir Asokan
c2a173a0f2 Add meow dockerfile
N.B. requires requirements.txt to be generated in repo root beforehand
2024-09-25 13:31:13 +03:00
Quentin Gliech
13dea6949b
Changelog fixes 2024-09-25 12:07:51 +02:00
Quentin Gliech
386cabda83
1.116.0rc1 2024-09-25 11:34:36 +02:00
dependabot[bot]
f53a3a56e2
Bump treq from 23.11.0 to 24.9.1 (#17744)
Bumps [treq](https://github.com/twisted/treq) from 23.11.0 to 24.9.1.
**Release notes / changelog** (from [treq's releases](https://github.com/twisted/treq/releases) and [CHANGELOG.rst](https://github.com/twisted/treq/blob/trunk/CHANGELOG.rst)):

**24.9.1 (2024-09-19)**

Bugfixes:
- treq has vendored its dependency on the `multipart` library to avoid import conflicts with `python-multipart`; it should now be installable alongside that library. ([#399](https://github.com/twisted/treq/issues/399))

**24.9.0 (2024-09-17)**

Features:
- treq now ships type annotations. ([#366](https://github.com/twisted/treq/issues/366))
- The new `treq.cookies` module provides helper functions for working with `http.cookiejar.Cookie` and `CookieJar` objects. ([#384](https://github.com/twisted/treq/issues/384))
- Python 3.13 is now supported. ([#391](https://github.com/twisted/treq/issues/391))

Bugfixes:
- `treq.content.text_content()` no longer generates deprecation warnings due to use of the `cgi` module. ([#355](https://github.com/twisted/treq/issues/355))

Deprecations and Removals:
- Mixing the *json* argument with *files* or *data* now raises `TypeError`. ([#297](https://github.com/twisted/treq/issues/297))
- Passing non-string (`str` or `bytes`) values as part of a dict to the *headers* argument now results in a `TypeError`, as does passing any collection other than a `dict` or `Headers` instance. ([#302](https://github.com/twisted/treq/issues/302))
- Support for Python 3.7 and PyPy 3.8, which have reached end of support, has been dropped. ([#378](https://github.com/twisted/treq/issues/378))

Misc:
- [#336](https://github.com/twisted/treq/issues/336), [#382](https://github.com/twisted/treq/issues/382), [#395](https://github.com/twisted/treq/issues/395)

**Commits**
- `caaf9fc` Release 24.9.1
- `9cedb08` Merge pull request [#400](https://github.com/twisted/treq/issues/400) from twisted/vendor-multipart-for-now
- `4aa1ee8` news fragment
- `d7c16de` octothorpes rise up
- `4fd3c84` try to make the linter happy
- `f0a5148` fix import, switch to `from`
- `7f16b87` correct import
- `1526431` add a lightly-modified vendored version of https://github.com/defnull/multipa...
- `7c52d49` remove dependency on `multipart` package
- `ca3966f` Merge pull request [#398](https://github.com/twisted/treq/issues/398) from twisted/397-release-24.9.0
- Additional commits viewable in the [compare view](https://github.com/twisted/treq/compare/release-23.11.0...treq-24.9.1)
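
A minimal treq usage sketch, to show the Twisted-style API this bump covers (the URL is illustrative; `treq.get` and `response.text()` are standard treq calls):

```python
import treq
from twisted.internet.defer import ensureDeferred
from twisted.internet.task import react


async def main(reactor):
    # treq.get() returns a Deferred firing with a response object;
    # as of 24.9.0 these APIs also carry type annotations.
    response = await treq.get("https://example.com")
    body = await response.text()  # read and decode the body
    print(response.code, body[:80])


react(lambda reactor: ensureDeferred(main(reactor)))
```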


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=treq&package-manager=pip&previous-version=23.11.0&new-version=24.9.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

**Dependabot commands and options**

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Quentin Gliech <quenting@element.io>
2024-09-25 11:19:03 +02:00
V02460
2fc43e4219
Remove the deprecated cgi module (#17741)
Removes all uses of the `cgi` module from Synapse. It was deprecated in
Python version 3.11 and removed in version 3.13 ([“dead
battery”](https://docs.python.org/3.13/whatsnew/3.13.html#pep-594-remove-dead-batteries-from-the-standard-library)).
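
A common replacement for the removed `cgi.parse_header()` is the stdlib email parser; a minimal sketch of the equivalence (illustrative, not necessarily the exact helper this PR added):

```python
from email.message import Message


def parse_header(line: str) -> tuple[str, dict[str, str]]:
    """Rough equivalent of the removed cgi.parse_header()."""
    msg = Message()
    msg["Content-Type"] = line
    # get_params() yields the bare value first, then (key, value) pairs.
    params = msg.get_params()
    return params[0][0], dict(params[1:])


assert parse_header("text/html; charset=utf-8") == ("text/html", {"charset": "utf-8"})
```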

---------

Co-authored-by: Quentin Gliech <quenting@element.io>
2024-09-25 11:15:34 +02:00
dependabot[bot]
b0d2aca164
Bump phonenumbers from 8.13.44 to 8.13.45 (#17762) 2024-09-25 06:38:37 +00:00
dependabot[bot]
f68e8d0021
Bump ruff from 0.6.5 to 0.6.7 (#17760) 2024-09-24 22:48:43 +00:00
dependabot[bot]
89e7609f5c
Bump msgpack from 1.0.8 to 1.1.0 (#17759) 2024-09-24 22:34:37 +00:00
dependabot[bot]
b89a66f831
Bump idna from 3.8 to 3.10 (#17758) 2024-09-25 00:20:24 +02:00
dependabot[bot]
b066b3aa04
Bump types-setuptools from 74.1.0.20240907 to 75.1.0.20240917 (#17757)
Bumps [types-setuptools](https://github.com/python/typeshed) from 74.1.0.20240907 to 75.1.0.20240917.

**Commits**: see the full diff in the [compare view](https://github.com/python/typeshed/commits).


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=types-setuptools&package-manager=pip&previous-version=74.1.0.20240907&new-version=75.1.0.20240917)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 17:30:24 +00:00
dependabot[bot]
e4b0cd87cc
Bump pydantic from 2.8.2 to 2.9.2 (#17756)
Bumps [pydantic](https://github.com/pydantic/pydantic) from 2.8.2 to 2.9.2.

**Release notes / changelog** (from [pydantic's releases](https://github.com/pydantic/pydantic/releases) and [HISTORY.md](https://github.com/pydantic/pydantic/blob/main/HISTORY.md)):

**v2.9.2 (2024-09-17)**

Fixes:
- Do not error when trying to evaluate annotations of private attributes by @Viicos in [#10358](https://github.com/pydantic/pydantic/pull/10358)
- Adding notes on designing sound `Callable` discriminators by @sydney-runkle in [#10400](https://github.com/pydantic/pydantic/pull/10400)
- Fix serialization schema generation when using `PlainValidator` by @Viicos in [#10427](https://github.com/pydantic/pydantic/pull/10427)
- Fix `Union` serialization warnings by @sydney-runkle in [pydantic/pydantic-core#1449](https://github.com/pydantic/pydantic-core/pull/1449)
- Fix variance issue in `_IncEx` type alias, only allow `True` by @Viicos in [#10414](https://github.com/pydantic/pydantic/pull/10414)
- Fix `ZoneInfo` validation with various invalid types by @sydney-runkle in [#10408](https://github.com/pydantic/pydantic/pull/10408)

**Full Changelog**: https://github.com/pydantic/pydantic/compare/v2.9.1...v2.9.2

**v2.9.1 (2024-09-09)**

Fixes:
- Fix Predicate issue in v2.9.0 by @sydney-runkle in [#10321](https://github.com/pydantic/pydantic/pull/10321)
- Fixing `annotated-types` bound to `>=0.6.0` by @sydney-runkle in [#10327](https://github.com/pydantic/pydantic/pull/10327)
- Turn `tzdata` install requirement into optional `timezone` dependency by @jakob-keller in [#10331](https://github.com/pydantic/pydantic/pull/10331)
- Fix `IncExc` type alias definition by @Viicos in [#10339](https://github.com/pydantic/pydantic/pull/10339)
- Use correct types namespace when building namedtuple core schemas by @Viicos in [#10337](https://github.com/pydantic/pydantic/pull/10337)
- Fix evaluation of stringified annotations during namespace inspection by @Viicos in [#10347](https://github.com/pydantic/pydantic/pull/10347)
- Fix tagged union serialization with alias generators by @sydney-runkle in [pydantic/pydantic-core#1442](https://github.com/pydantic/pydantic-core/pull/1442)

**Full Changelog**: https://github.com/pydantic/pydantic/compare/v2.9.0...v2.9.1

**v2.9.0 (2024-09-05)**

The code released in v2.9.0 is practically identical to that of v2.9.0b2. Check out the [blog post](https://pydantic.dev/articles/pydantic-v2-9-release) to learn more about the release highlights!

Packaging:
- Bump `ruff` to `v0.5.0` and `pyright` to `v1.1.369` by @sydney-runkle in [#9801](https://github.com/pydantic/pydantic/pull/9801)
- Bump `pydantic-extra-types` to `v2.9.0` by @sydney-runkle in [#9832](https://github.com/pydantic/pydantic/pull/9832)
- Support compatibility with `pdm v2.18.1` by @Viicos in [#10138](https://github.com/pydantic/pydantic/pull/10138)
- Bump `v1` version stub to `v1.10.18` by @sydney-runkle in [#10214](https://github.com/pydantic/pydantic/pull/10214)
- Bump `pydantic-core` to `v2.23.2` by @sydney-runkle in [#10311](https://github.com/pydantic/pydantic/pull/10311)

New Features:
- Add support for `ZoneInfo` by @Youssefares in [#9896](https://github.com/pydantic/pydantic/pull/9896)
- Add `Config.val_json_bytes` by @josh-newman in [#9770](https://github.com/pydantic/pydantic/pull/9770)
- Add DSN for Snowflake by @aditkumar72 in [#10128](https://github.com/pydantic/pydantic/pull/10128)
- Support `complex` number by @changhc in [#9654](https://github.com/pydantic/pydantic/pull/9654)
- Add support for `annotated_types.Not` by @aditkumar72 in [#10210](https://github.com/pydantic/pydantic/pull/10210)

... (truncated)

**Commits**
- `7cedbfb` history updates
- `7eab2b8` v bump
- `c0a288f` Fix `ZoneInfo` with various invalid types ([#10408](https://github.com/pydantic/pydantic/issues/10408))
- `ea6115d` Fix variance issue in `_IncEx` type alias, only allow `True` ([#10414](https://github.com/pydantic/pydantic/issues/10414))
- `fbfe25a` Fix serialization schema generation when using `PlainValidator` ([#10427](https://github.com/pydantic/pydantic/issues/10427))
- `26cff3c` Adding notes on designing callable discriminators ([#10400](https://github.com/pydantic/pydantic/issues/10400))
- `8a0e7ad` Do not error when trying to evaluate annotations of private attributes ([#10358](https://github.com/pydantic/pydantic/issues/10358))
- `ecc5275` bump
- `2c61bfd` Fix evaluation of stringified annotations during namespace inspection ([#10347](https://github.com/pydantic/pydantic/issues/10347))
- `3d364cb` Use correct types namespace when building namedtuple core schemas ([#10337](https://github.com/pydantic/pydantic/issues/10337))
- Additional commits viewable in the [compare view](https://github.com/pydantic/pydantic/compare/v2.8.2...v2.9.2)
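
A quick sketch of one of the 2.9 additions, `ZoneInfo` field support (illustrative model; assumes the new validator accepts IANA key strings, as the #9896 feature description suggests):

```python
from zoneinfo import ZoneInfo

from pydantic import BaseModel


class Meeting(BaseModel):
    # ZoneInfo fields are supported as of pydantic 2.9 (#9896);
    # strings like "Europe/Paris" validate into ZoneInfo objects.
    tz: ZoneInfo


print(Meeting(tz="Europe/Paris").tz)  # ZoneInfo(key='Europe/Paris')
```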


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pydantic&package-manager=pip&previous-version=2.8.2&new-version=2.9.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 17:28:08 +00:00
dependabot[bot]
985b3ab58d
Bump types-pyyaml from 6.0.12.20240808 to 6.0.12.20240917 (#17755)
Bumps [types-pyyaml](https://github.com/python/typeshed) from 6.0.12.20240808 to 6.0.12.20240917.

**Commits**: see the full diff in the [compare view](https://github.com/python/typeshed/commits).


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=types-pyyaml&package-manager=pip&previous-version=6.0.12.20240808&new-version=6.0.12.20240917)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 17:21:38 +00:00
dependabot[bot]
afc3af7763
Bump prometheus-client from 0.20.0 to 0.21.0 (#17746)
Bumps [prometheus-client](https://github.com/prometheus/client_python) from 0.20.0 to 0.21.0.

**Release notes** (from [prometheus-client's releases](https://github.com/prometheus/client_python/releases)):

**0.21.0 / 2024-09-20**
- [CHANGE] Reject invalid (not GET or OPTIONS) HTTP methods. [#1019](https://github.com/prometheus/client_python/issues/1019)
- [ENHANCEMENT] Allow writing metrics when holding a lock for the metric in the same thread. [#1014](https://github.com/prometheus/client_python/issues/1014)
- [BUGFIX] Check for and error on None label values. [#1012](https://github.com/prometheus/client_python/issues/1012)
- [BUGFIX] Fix timestamp comparison. [#1038](https://github.com/prometheus/client_python/issues/1038)

**Commits**
- `3b183b4` Release 0.21.0
- `0014e97` Use re-entrant lock. ([#1014](https://github.com/prometheus/client_python/issues/1014))
- `7c45f84` Reject invalid HTTP methods and resources ([#1019](https://github.com/prometheus/client_python/issues/1019))
- `09a5ae3` Fix timestamp comparison ([#1038](https://github.com/prometheus/client_python/issues/1038))
- `e364a96` Fix a typo in ASGI docs ([#1036](https://github.com/prometheus/client_python/issues/1036))
- `eeec421` Pin python 3.8 and 3.9 at patch level ([#1024](https://github.com/prometheus/client_python/issues/1024))
- `7bc8cdd` docs: correct link to multiprocessing docs ([#1023](https://github.com/prometheus/client_python/issues/1023))
- `4535ce0` Add sanity check for label value ([#1012](https://github.com/prometheus/client_python/issues/1012))
- See full diff in the [compare view](https://github.com/prometheus/client_python/compare/v0.20.0...v0.21.0)
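
A minimal prometheus-client usage sketch touching the changed behavior (metric names here are illustrative; `Counter` and `start_http_server` are the library's standard API):

```python
from prometheus_client import Counter, start_http_server

# A counter with one label; 0.21.0 now errors on None label values.
REQUESTS = Counter("myapp_requests_total", "Requests handled.", ["method"])

start_http_server(8000)  # /metrics; 0.21.0 rejects non-GET/OPTIONS methods
REQUESTS.labels(method="GET").inc()
```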


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=prometheus-client&package-manager=pip&previous-version=0.20.0&new-version=0.21.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 18:51:24 +02:00
dependabot[bot]
af2da0e47a
Bump pyasn1-modules from 0.4.0 to 0.4.1 (#17747)
Bumps [pyasn1-modules](https://github.com/pyasn1/pyasn1-modules) from 0.4.0 to 0.4.1.

**Release notes / changelog** (from [pyasn1-modules's releases](https://github.com/pyasn1/pyasn1-modules/releases) and [CHANGES.txt](https://github.com/pyasn1/pyasn1-modules/blob/main/CHANGES.txt)):

**Revision 0.4.1, released 10-09-2024** (a minor release)
- Added support for Python 3.13.

**Commits**
- `36b0363` Prepare release 0.4.1
- `b0d8497` Add support for Python 3.13 ([#17](https://github.com/pyasn1/pyasn1-modules/issues/17))
- See full diff in the [compare view](https://github.com/pyasn1/pyasn1-modules/compare/v0.4.0...v0.4.1)


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pyasn1-modules&package-manager=pip&previous-version=0.4.0&new-version=0.4.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 18:51:07 +02:00
dependabot[bot]
ac8c9ac50d
Bump python-multipart from 0.0.9 to 0.0.10 (#17745)
Bumps [python-multipart](https://github.com/Kludex/python-multipart) from 0.0.9 to 0.0.10.

**Release notes / changelog** (from [python-multipart's releases](https://github.com/Kludex/python-multipart/releases) and [CHANGELOG.md](https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md)):

**0.0.10 (2024-09-21)**
- Support `on_header_begin` by @Kludex in [#103](https://github.com/Kludex/python-multipart/pull/103)
- Improve type hints on `FormParser` by @Kludex in [#104](https://github.com/Kludex/python-multipart/pull/104)
- Fix `OnFileCallback` type by @Kludex in [#106](https://github.com/Kludex/python-multipart/pull/106)
- Improve type hints by @Kludex in [#110](https://github.com/Kludex/python-multipart/pull/110)
- Improve type hints on `File` by @Kludex in [#111](https://github.com/Kludex/python-multipart/pull/111)
- Add type hint to helper functions by @Kludex in [#112](https://github.com/Kludex/python-multipart/pull/112)
- Minor fix for `Field.__repr__` by @eltbus in [#114](https://github.com/Kludex/python-multipart/pull/114)
- Fix use of chunk_size parameter by @jhnstrk in [#136](https://github.com/Kludex/python-multipart/pull/136)
- Allow digits and valid token chars in headers by @jhnstrk in [#134](https://github.com/Kludex/python-multipart/pull/134)
- Fix headers being carried between parts (fixes [#63](https://github.com/Kludex/python-multipart/issues/63)) by @jhnstrk in [#135](https://github.com/Kludex/python-multipart/pull/135)

New contributors: @onuralpszr ([#108](https://github.com/Kludex/python-multipart/pull/108)), @janusheide ([#119](https://github.com/Kludex/python-multipart/pull/119)), @yecril23pl ([#121](https://github.com/Kludex/python-multipart/pull/121)), @manunio ([#117](https://github.com/Kludex/python-multipart/pull/117)), @jhnstrk ([#136](https://github.com/Kludex/python-multipart/pull/136))

**Full Changelog**: https://github.com/Kludex/python-multipart/compare/0.0.9...0.0.10

**Commits**
- `851a026` Add entry to changelog ([#157](https://github.com/Kludex/python-multipart/issues/157))
- `265d6a4` Upgrade documentation packages ([#156](https://github.com/Kludex/python-multipart/issues/156))
- `21825fc` Version 0.0.10 ([#155](https://github.com/Kludex/python-multipart/issues/155))
- `0defda6` Update pipelines ([#154](https://github.com/Kludex/python-multipart/issues/154))
- `c664cef` Use uv ([#153](https://github.com/Kludex/python-multipart/issues/153))
- `8b85d35` Fix headers being carried between parts, fixes [#63](https://github.com/Kludex/python-multipart/issues/63) ([#135](https://github.com/Kludex/python-multipart/issues/135))
- `3ea51c7` Allow digits and valid token chars in headers ([#134](https://github.com/Kludex/python-multipart/issues/134))
- `3a722ed` Fix use of chunk_size parameter ([#136](https://github.com/Kludex/python-multipart/issues/136))
- `b5a5c19` Bump the python-packages group with 7 updates ([#138](https://github.com/Kludex/python-multipart/issues/138))
- `eb7b1fc` Bump the github-actions group with 1 update ([#139](https://github.com/Kludex/python-multipart/issues/139))
- Additional commits viewable in the [compare view](https://github.com/Kludex/python-multipart/compare/0.0.9...0.0.10)


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=python-multipart&package-manager=pip&previous-version=0.0.9&new-version=0.0.10)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 18:50:57 +02:00
dependabot[bot]
443a9eb335
Bump bytes from 1.7.1 to 1.7.2 (#17743)
Bumps [bytes](https://github.com/tokio-rs/bytes) from 1.7.1 to 1.7.2.
**Release notes / changelog** (from [bytes's releases](https://github.com/tokio-rs/bytes/releases) and [CHANGELOG.md](https://github.com/tokio-rs/bytes/blob/master/CHANGELOG.md)):

**1.7.2 (September 17, 2024)**

Fixed:
- Fix default impl of `Buf::{get_int, get_int_le}` ([#732](https://github.com/tokio-rs/bytes/issues/732))

Documented:
- Fix double spaces in comments and doc comments ([#731](https://github.com/tokio-rs/bytes/issues/731))

Internal changes:
- Ensure BytesMut::advance reduces capacity ([#728](https://github.com/tokio-rs/bytes/issues/728))

**Commits**
- `d7c1d65` chore: prepare bytes v1.7.2 ([#736](https://github.com/tokio-rs/bytes/issues/736))
- `ac46ebd` ci: update nightly to nightly-2024-09-15 ([#734](https://github.com/tokio-rs/bytes/issues/734))
- `79fb853` fix: apply sign extension when decoding int ([#732](https://github.com/tokio-rs/bytes/issues/732))
- `291df5a` Fix double spaces in comments and doc comments ([#731](https://github.com/tokio-rs/bytes/issues/731))
- `ed7d5ff` test: ensure BytesMut::advance reduces capacity ([#728](https://github.com/tokio-rs/bytes/issues/728))
- See full diff in the [compare view](https://github.com/tokio-rs/bytes/compare/v1.7.1...v1.7.2)
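
The `get_int` fix is about applying sign extension when decoding variable-width integers; a Python illustration of the same idea (not the Rust code itself):

```python
def get_int_be(buf: bytes, nbytes: int) -> int:
    # Decode an nbytes-wide big-endian *signed* integer. Skipping the
    # signed=True (i.e. omitting sign extension) is the class of bug
    # fixed for Buf::get_int in bytes #732.
    return int.from_bytes(buf[:nbytes], "big", signed=True)


assert get_int_be(b"\xff\xfe", 2) == -2                 # with sign extension
assert int.from_bytes(b"\xff\xfe", "big") == 65534      # without
```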


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bytes&package-manager=cargo&previous-version=1.7.1&new-version=1.7.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 18:33:57 +02:00
Erik Johnston
aad26cb93f
Never return negative bump stamp (#17748)
Fixes #17737
2024-09-24 10:07:23 +00:00
Andrew Ferrazzutti
5173741c71
Support MSC4140: Delayed events (Futures) (#17326) 2024-09-23 13:33:48 +01:00
Erik Johnston
75e2c17d2a
Speed up sorting of sliding sync rooms in initial request (#17734)
We do this by using the event stream cache.
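
A rough sketch of the idea (hypothetical names; the real change lives in Synapse's storage layer): sort by a cached last-activity stream position instead of querying each room's latest event.

```python
def sort_rooms_by_recency(room_ids, recency_cache, fallback_fetch):
    """Sort rooms most-recently-active first.

    recency_cache: hypothetical in-memory map of room_id -> latest stream
    position, fed from the event stream cache; fallback_fetch is only
    called for rooms the cache knows nothing about (it hits the database).
    """
    positions = {}
    for room_id in room_ids:
        pos = recency_cache.get(room_id)
        if pos is None:
            pos = fallback_fetch(room_id)
        positions[room_id] = pos
    return sorted(room_ids, key=lambda r: positions[r], reverse=True)
```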

---------

Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
2024-09-20 08:12:56 +01:00
Erik Johnston
a851f6b237
Sliding sync: Add connection tracking to the account_data extension (#17695)
This is essentially the same logic as for receipts: we just need to track which room account data we have and haven't sent down to clients, and use that when we pull stuff out.

I think this just needs a couple of extra tests written
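
A minimal sketch of the tracking idea (hypothetical structures, mirroring the receipts extension; not the actual Synapse classes):

```python
class AccountDataConnectionState:
    """Per sliding-sync connection: what we've already sent down."""

    def __init__(self) -> None:
        # room_id -> account-data stream id as of our last response
        self.sent_up_to: dict[str, int] = {}

    def rooms_needing_data(self, current: dict[str, int]) -> set[str]:
        # Only send a room's account data if it changed since we last
        # sent it down this connection.
        return {
            room_id
            for room_id, stream_id in current.items()
            if stream_id > self.sent_up_to.get(room_id, 0)
        }
```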

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-09-19 19:51:51 +01:00
Eric Eastwood
c2e5e9e67c
Sliding Sync: Avoid fetching left rooms and add back newly_left rooms (#17725)
Performance optimization: we can avoid fetching rooms that the user has left themselves (which could be a significant number), then only add back rooms that the user has `newly_left` (left in the token range of an incremental sync). It's a lot faster to fetch fewer rooms than to fetch them all and throw most of them away, and since a user only leaves a room (or is state reset out) once in a blue moon, this avoids a lot of work.

Based on @erikjohnston's branch, erikj/ss_perf
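
Sketch of the shape of the optimization (the `store` helpers here are hypothetical stand-ins, not Synapse's actual method names):

```python
async def rooms_for_sync(user_id, from_token, to_token, store):
    # Fetch only rooms the user is currently in or invited to, skipping
    # the potentially huge set of rooms they left long ago...
    rooms = set(await store.get_rooms_excluding_left(user_id))
    # ...then add back just the rooms newly left inside this sync window,
    # which an incremental sync must still report once.
    rooms |= set(await store.get_newly_left_rooms(user_id, from_token, to_token))
    return rooms
```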


---------

Co-authored-by: Erik Johnston <erik@matrix.org>
2024-09-19 10:07:18 -05:00
Erik Johnston
07a51d2a56
Fix sliding sync for rooms with unknown room version (#17733)
Follow on from #17727
2024-09-19 14:01:11 +01:00
Eric Eastwood
83fc225030
Sliding Sync: Add cache to get_tags_for_room(...) (#17730)
Add cache to `get_tags_for_room(...)`

This helps Sliding Sync because `get_tags_for_room(...)` is going to be
used in https://github.com/element-hq/synapse/pull/17695

Essentially, we're just trying to match `get_account_data_for_room(...)`
which already has a tree cache.
2024-09-19 12:43:26 +01:00
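The caching pattern at play, reduced to a toy sketch (Synapse uses its own cache descriptors; this is illustrative only):

```python
from typing import Dict, Tuple

class RoomTagStore:
    """Toy read-through cache for per-room account data tags."""

    def __init__(self) -> None:
        self._db: Dict[Tuple[str, str], Dict[str, dict]] = {}
        self._cache: Dict[Tuple[str, str], Dict[str, dict]] = {}

    def get_tags_for_room(self, user_id: str, room_id: str) -> Dict[str, dict]:
        key = (user_id, room_id)
        if key not in self._cache:  # miss: fall through to the "database"
            self._cache[key] = dict(self._db.get(key, {}))
        return self._cache[key]

    def set_tag(self, user_id: str, room_id: str, tag: str, content: dict) -> None:
        self._db.setdefault((user_id, room_id), {})[tag] = content
        # Writes must invalidate, otherwise readers see stale tags.
        self._cache.pop((user_id, room_id), None)
```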
Eric Eastwood
a9c0e27eb7
Sliding Sync: No need to sort if the range is large enough to cover all of the rooms (#17731)
No need to sort if the range is large enough to cover all of the rooms
in the list. Previously, we would only do this optimization if the range
was exactly large enough.

Follow-up to https://github.com/element-hq/synapse/pull/17672
2024-09-19 09:33:34 +01:00
Eric Eastwood
faf5b40520
Sliding Sync: Fix _bulk_get_max_event_pos(...) being inefficient (#17728)
Fix `_bulk_get_max_event_pos(...)` being inefficient. It kept adding all
of the `batch_results` to the `results` over and over every time we
checked a single room in the batch.

I think we still ended up with the right answer before because we
accumulate `recheck_rooms` and actually recheck them to overwrite the
bad data we wrote to the `results` before.

Introduced in
https://github.com/element-hq/synapse/pull/17606/files#diff-cbd54e4b5a2a1646299d659a2d5884d6cb14e608efd2e1658e72b465bb66e31bR1481
2024-09-19 09:32:16 +01:00
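The inefficiency is easy to reproduce in miniature; this hypothetical reduction shows the buggy shape next to the fixed one:

```python
from typing import Dict, List

def max_event_pos_buggy(batches: List[Dict[str, int]]) -> Dict[str, int]:
    results: Dict[str, int] = {}
    for batch_results in batches:
        for _room_id in batch_results:
            # O(len(batch)) work for every room in the batch: quadratic.
            results.update(batch_results)
    return results

def max_event_pos_fixed(batches: List[Dict[str, int]]) -> Dict[str, int]:
    results: Dict[str, int] = {}
    for batch_results in batches:
        results.update(batch_results)  # once per batch: linear
    return results
```

Both return the same mapping, which matches the observation above that the answer still ended up correct before the fix.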
Eric Eastwood
af998e6c66
Sliding sync: Ignore invites from ignored users (#17729)
`m.ignored_user_list` in account data
2024-09-18 18:09:23 -05:00
Eric Eastwood
61b7c31772
Sliding Sync: Shortcut for checking if certain background updates have completed (#17724)
Shortcut for checking if certain background updates have completed

Pulling this change out from one of @erikjohnston's branches
(https://github.com/element-hq/synapse/compare/develop...erikj/ss_perf)

---------

Co-authored-by: Erik Johnston <erikj@element.io>
2024-09-18 13:12:14 -05:00
Kegan Dougal
3c8a116e1a
Sliding Sync: bugfix: ensure we can sync with SSS even with missing rooms (#17727)
Fixes https://github.com/element-hq/element-x-ios/issues/3300

Some rooms are missing from `sliding_sync_joined_rooms`. When this
happens, the first call will succeed, but any subsequent calls for this
room ID will cause the cache to return `None` for the room ID, rather
than not having the key at all. This then causes the `<=` check to
throw.

Root cause: https://github.com/element-hq/synapse/issues/17726

2024-09-18 16:25:50 +00:00
Shay
51dd4df0a3
Add an Admin API endpoint to redact all a user's events (#17506) 2024-09-18 10:08:01 +00:00
Eric Eastwood
8881ad6d4b
Sliding Sync: Short-circuit have_finished_sliding_sync_background_jobs (#17723)
We only need to check it if the returned bump stamp is `None`, which is rare.

Pulling this change out from one of @erikjohnston's branches
(https://github.com/element-hq/synapse/compare/develop...erikj/ss_perf)
2024-09-17 17:36:59 -05:00
Olivier 'reivilibre'
d40bc279ed Merge branch 'master' into develop 2024-09-17 15:47:32 +01:00
Eric Eastwood
03937a1cae
Sliding Sync: Return room tags in account data extension (#17707)
The account data extension was also updated to avoid copies when we pull
the data out of the cache.

Fix https://github.com/element-hq/synapse/issues/17694
2024-09-16 13:47:35 -05:00
dependabot[bot]
285de43e48
Bump anyhow from 1.0.87 to 1.0.89 (#17716)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-16 18:52:48 +01:00
dependabot[bot]
4900438712
Bump pyasn1 from 0.6.0 to 0.6.1 (#17714)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-16 18:52:10 +01:00
dependabot[bot]
cf982d2e32
Bump ruff from 0.6.4 to 0.6.5 (#17715)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-16 18:51:33 +01:00
dependabot[bot]
7589565edd
Bump types-requests from 2.32.0.20240712 to 2.32.0.20240914 (#17713)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-16 18:32:39 +01:00
dependabot[bot]
7ed23e072e
Bump sentry-sdk from 2.13.0 to 2.14.0 (#17712) 2024-09-16 18:32:01 +01:00
David Baker
4ac783549c
Sliding Sync: Support filtering by 'tags' / 'not_tags' in SSS (#17662)
This appears to be enough to make Element Web work (or at least move it
on to the next hurdle)

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-09-12 20:18:19 -05:00
Erik Johnston
1cb84aaab5
Sliding Sync: Increase concurrency of sliding sync a bit (#17696)
For initial requests a typical page size is 20 rooms, so we may as well
use 20 as the batch size.

This should speed up bigger syncs a little bit.
2024-09-12 16:36:16 -05:00
Eric Eastwood
9b83fb7c16
Sliding Sync: Move filters tests to rest layer (#17703)
Move filters tests to rest layer in order to test the new (with sliding
sync tables) and fallback paths that Sliding Sync can use.

Also found a bug in the new path that had gone unnoticed because it wasn't
being tested; it is fixed in this PR too. We now take `has_known_state` into
account when filtering.

Spawning from
https://github.com/element-hq/synapse/pull/17662#discussion_r1755574791.
This should have been done when we started using the new sliding sync
tables in https://github.com/element-hq/synapse/pull/17630
2024-09-12 15:27:03 -05:00
Andrew Morgan
c5b4be6d07 Merge branch 'release-v1.115' into develop 2024-09-12 13:05:43 +01:00
Éloi Rivard
ebad618bf0
import pydantic objects from the _pydantic_compat module (#17667)
This PR changes `from pydantic import BaseModel` to `from
synapse._pydantic_compat import BaseModel` (as well as `constr`,
`conbytes`, `conint`, `confloat`).

It allows `check_pydantic_models.py` to mock those pydantic objects only
in the synapse module, and not interfere with pydantic objects in
external dependencies.

This should solve the CI problems for #17144, which breaks because
`check_pydantic_models.py` patches pydantic models from
[scim2-models](https://scim2-models.readthedocs.io/).

/cc @DMRobertson @gotmax23
fixes #17659 


2024-09-11 21:01:43 +00:00
Eric Eastwood
16af80b8fb
Sliding Sync: Use Sliding Sync tables for sorting (#17693)
Use Sliding Sync tables for sorting
(`bulk_get_last_event_pos_in_room_before_stream_ordering(...)` ->
`_bulk_get_max_event_pos(...)`)
2024-09-11 12:16:24 -05:00
Eric Eastwood
e4a1f271b9
Sliding Sync: Make sure we get up-to-date information from get_sliding_sync_rooms_for_user(...) (#17692)
We need to bust the `get_sliding_sync_rooms_for_user`
cache when the room encryption is updated, and likewise for
any other field that is used in the query.

Follow-up to https://github.com/element-hq/synapse/pull/17630

- Bust cache for membership change (cross-reference
`get_rooms_for_user`)
- Bust cache for room `encryption` (cross-reference
`get_room_encryption`)
- Bust cache for `forgotten` (cross-reference
`did_forget`/`get_forgotten_rooms_for_user`)
2024-09-11 12:13:54 -05:00
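The three invalidation points map onto hooks like these (a sketch; the cache and method names follow the commit message, the wiring is hypothetical):

```python
class SlidingSyncRoomCache:
    """Toy model of busting a per-user cached room list."""

    def __init__(self) -> None:
        self._cache: dict = {}  # user_id -> cached sliding sync room list

    def _invalidate(self, user_id: str) -> None:
        self._cache.pop(user_id, None)

    def on_membership_change(self, user_id: str, room_id: str) -> None:
        self._invalidate(user_id)  # cf. get_rooms_for_user

    def on_encryption_change(self, room_id: str, member_ids: list) -> None:
        for user_id in member_ids:  # cf. get_room_encryption
            self._invalidate(user_id)

    def on_forgotten(self, user_id: str, room_id: str) -> None:
        self._invalidate(user_id)  # cf. did_forget / get_forgotten_rooms_for_user
```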
Erik Johnston
6b131a99fe Merge remote-tracking branch 'origin/release-v1.115' into develop 2024-09-11 16:43:07 +01:00
Erik Johnston
596b96411b
Sliding sync: various fixups to the background update (#17652) 2024-09-11 15:38:46 +01:00
Erik Johnston
f6c2b0ec2e
Sliding sync: don't fetch room summary for named rooms. (#17683)
For rooms with a name we can skip fetching a full room summary, as we
don't need to calculate heroes, and instead just fetch the room counts
directly.

This also changes things to not return counts and heroes for non-joined
rooms. For left/banned rooms we were returning zero values anyway, and
for invite/knock rooms we don't really want to leak such information
(even if some of it is included in the stripped state).
2024-09-11 13:16:57 +01:00
Travis Ralston
a7fcac5648
Enable guest access on new media endpoints, per MSC4189 (#17675) 2024-09-10 18:29:24 +01:00
V02460
e06e3c4004
Add config option turn_shared_secret_path (#17690)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2024-09-10 17:27:46 +00:00
dependabot[bot]
60441059a3
Bump anyhow from 1.0.86 to 1.0.87 (#17685)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-10 18:05:31 +01:00
Jeremy Wright
1b197752b6
Fix minor misspelling in README.rst. (#17664) 2024-09-10 17:33:25 +01:00
dependabot[bot]
598a83d005
Bump cryptography from 43.0.0 to 43.0.1 (#17689)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-10 17:32:17 +01:00
dependabot[bot]
be603de2cb
Bump serde_json from 1.0.127 to 1.0.128 (#17687)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-10 17:31:34 +01:00
dependabot[bot]
62523571ae
Bump serde from 1.0.209 to 1.0.210 (#17686) 2024-09-10 17:30:37 +01:00
101 changed files with 8985 additions and 2553 deletions

@@ -8,6 +8,7 @@
 !README.rst
 !pyproject.toml
 !poetry.lock
+!requirements.txt
 !Cargo.lock
 !Cargo.toml
 !build_rust.py

.gitlab-ci.yml (new file)

@@ -0,0 +1,19 @@
image: docker:stable

stages:
- build

build amd64:
  stage: build
  tags:
  - amd64
  only:
  - master
  before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
  - synversion=$(cat pyproject.toml | grep '^version =' | sed -E 's/^version = "(.+)"$/\1/')
  - docker build --tag $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$synversion .
  - docker push $CI_REGISTRY_IMAGE:latest
  - docker push $CI_REGISTRY_IMAGE:$synversion
  - docker rmi $CI_REGISTRY_IMAGE:latest $CI_REGISTRY_IMAGE:$synversion

@@ -1,3 +1,66 @@
# Synapse 1.116.0rc1 (2024-09-25)
### Features
- Add initial implementation of delayed events as proposed by [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140). ([\#17326](https://github.com/element-hq/synapse/issues/17326))
- Add an asynchronous Admin API endpoint [to redact all a user's events](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#redact-all-the-events-of-a-user),
and [an endpoint to check on the status of that redaction task](https://element-hq.github.io/synapse/v1.116/admin_api/user_admin_api.html#check-the-status-of-a-redaction-process). ([\#17506](https://github.com/element-hq/synapse/issues/17506))
- Add support for the `tags` and `not_tags` filters for [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17662](https://github.com/element-hq/synapse/issues/17662))
- Guests can use the new media endpoints to download media, as described by [MSC4189](https://github.com/matrix-org/matrix-spec-proposals/pull/4189). ([\#17675](https://github.com/element-hq/synapse/issues/17675))
- Add config option `turn_shared_secret_path`. ([\#17690](https://github.com/element-hq/synapse/issues/17690))
- Return room tags in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync account data extension. ([\#17707](https://github.com/element-hq/synapse/issues/17707))
### Bugfixes
- Make sure we get up-to-date state information when using the new [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync tables to derive room membership. ([\#17692](https://github.com/element-hq/synapse/issues/17692))
- Fix bug where room account data would not correctly be sent down [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync for old rooms. ([\#17695](https://github.com/element-hq/synapse/issues/17695))
- Fix a bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync which could prevent /sync from working for certain user accounts. ([\#17727](https://github.com/element-hq/synapse/issues/17727), [\#17733](https://github.com/element-hq/synapse/issues/17733))
- Ignore invites from ignored users in Sliding Sync. ([\#17729](https://github.com/element-hq/synapse/issues/17729))
- Fix bug in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync where the server would incorrectly return a negative bump stamp, which caused Element X apps to stop syncing. ([\#17748](https://github.com/element-hq/synapse/issues/17748))
### Internal Changes
- Import pydantic objects from the `_pydantic_compat` module.
This allows `check_pydantic_models.py` to mock those pydantic objects
only in the synapse module, and not interfere with pydantic objects in
external dependencies. ([\#17667](https://github.com/element-hq/synapse/issues/17667))
- Use [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync tables as a bulk shortcut for getting the max `event_stream_ordering` of rooms. ([\#17693](https://github.com/element-hq/synapse/issues/17693))
- Speed up [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync requests a bit where there are many room changes. ([\#17696](https://github.com/element-hq/synapse/issues/17696))
- Refactor [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync filter unit tests so the sliding sync API has better test coverage. ([\#17703](https://github.com/element-hq/synapse/issues/17703))
- Fetch `bump_stamp`s more efficiently in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17723](https://github.com/element-hq/synapse/issues/17723))
- Shortcut for checking if certain background updates have completed (utilized in [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync). ([\#17724](https://github.com/element-hq/synapse/issues/17724))
- More efficiently fetch rooms for [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17725](https://github.com/element-hq/synapse/issues/17725))
- Fix `_bulk_get_max_event_pos` being inefficient. ([\#17728](https://github.com/element-hq/synapse/issues/17728))
- Add cache to `get_tags_for_room(...)`. ([\#17730](https://github.com/element-hq/synapse/issues/17730))
- Small performance improvement in speeding up [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Sliding Sync. ([\#17731](https://github.com/element-hq/synapse/issues/17731))
- Minor speed up of initial [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) sliding sync requests. ([\#17734](https://github.com/element-hq/synapse/issues/17734))
- Remove usage of the `cgi` module, which was deprecated in Python 3.11 and removed in Python 3.13. ([\#17741](https://github.com/element-hq/synapse/issues/17741))
- Fix typing of a variable that is not `Unknown` anymore after updating `treq`. ([\#17744](https://github.com/element-hq/synapse/issues/17744))
### Updates to locked dependencies
* Bump anyhow from 1.0.86 to 1.0.89. ([\#17685](https://github.com/element-hq/synapse/issues/17685), [\#17716](https://github.com/element-hq/synapse/issues/17716))
* Bump bytes from 1.7.1 to 1.7.2. ([\#17743](https://github.com/element-hq/synapse/issues/17743))
* Bump cryptography from 43.0.0 to 43.0.1. ([\#17689](https://github.com/element-hq/synapse/issues/17689))
* Bump idna from 3.8 to 3.10. ([\#17758](https://github.com/element-hq/synapse/issues/17758))
* Bump msgpack from 1.0.8 to 1.1.0. ([\#17759](https://github.com/element-hq/synapse/issues/17759))
* Bump phonenumbers from 8.13.44 to 8.13.45. ([\#17762](https://github.com/element-hq/synapse/issues/17762))
* Bump prometheus-client from 0.20.0 to 0.21.0. ([\#17746](https://github.com/element-hq/synapse/issues/17746))
* Bump pyasn1 from 0.6.0 to 0.6.1. ([\#17714](https://github.com/element-hq/synapse/issues/17714))
* Bump pyasn1-modules from 0.4.0 to 0.4.1. ([\#17747](https://github.com/element-hq/synapse/issues/17747))
* Bump pydantic from 2.8.2 to 2.9.2. ([\#17756](https://github.com/element-hq/synapse/issues/17756))
* Bump python-multipart from 0.0.9 to 0.0.10. ([\#17745](https://github.com/element-hq/synapse/issues/17745))
* Bump ruff from 0.6.4 to 0.6.7. ([\#17715](https://github.com/element-hq/synapse/issues/17715), [\#17760](https://github.com/element-hq/synapse/issues/17760))
* Bump sentry-sdk from 2.13.0 to 2.14.0. ([\#17712](https://github.com/element-hq/synapse/issues/17712))
* Bump serde from 1.0.209 to 1.0.210. ([\#17686](https://github.com/element-hq/synapse/issues/17686))
* Bump serde_json from 1.0.127 to 1.0.128. ([\#17687](https://github.com/element-hq/synapse/issues/17687))
* Bump treq from 23.11.0 to 24.9.1. ([\#17744](https://github.com/element-hq/synapse/issues/17744))
* Bump types-pyyaml from 6.0.12.20240808 to 6.0.12.20240917. ([\#17755](https://github.com/element-hq/synapse/issues/17755))
* Bump types-requests from 2.32.0.20240712 to 2.32.0.20240914. ([\#17713](https://github.com/element-hq/synapse/issues/17713))
* Bump types-setuptools from 74.1.0.20240907 to 75.1.0.20240917. ([\#17757](https://github.com/element-hq/synapse/issues/17757))
# Synapse 1.115.0 (2024-09-17)

No significant changes since 1.115.0rc2.

Cargo.lock (generated)

@@ -13,9 +13,9 @@ dependencies = [
 [[package]]
 name = "anyhow"
-version = "1.0.86"
+version = "1.0.89"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b3d1d046238990b9cf5bcde22a3fb3584ee5cf65fb2765f454ed428c7a0063da"
+checksum = "86fdf8605db99b54d3cd748a44c6d04df638eb5dafb219b135d0149bd0db01f6"

 [[package]]
 name = "arc-swap"

@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
 [[package]]
 name = "bytes"
-version = "1.7.1"
+version = "1.7.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50"
+checksum = "428d9aa8fbc0670b7b8d6030a7fadd0f86151cae55e4dbbece15f3780a3dfaf3"

 [[package]]
 name = "cfg-if"

@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 [[package]]
 name = "serde"
-version = "1.0.209"
+version = "1.0.210"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "99fce0ffe7310761ca6bf9faf5115afbc19688edd00171d81b1bb1b116c63e09"
+checksum = "c8e3592472072e6e22e0a54d5904d9febf8508f65fb8552499a1abc7d1078c3a"
 dependencies = [
  "serde_derive",
 ]

 [[package]]
 name = "serde_derive"
-version = "1.0.209"
+version = "1.0.210"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a5831b979fd7b5439637af1752d535ff49f4860c0f341d1baeb6faf0f4242170"
+checksum = "243902eda00fad750862fc144cea25caca5e20d615af0a81bee94ca738f1df1f"
 dependencies = [
  "proc-macro2",
  "quote",

@@ -505,9 +505,9 @@ dependencies = [
 [[package]]
 name = "serde_json"
-version = "1.0.127"
+version = "1.0.128"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8043c06d9f82bd7271361ed64f415fe5e12a77fdb52e573e7f06a516dea329ad"
+checksum = "6ff5456707a1de34e7e37f2a6fd3d3f808c318259cbd01ab6377795054b483d8"
 dependencies = [
  "itoa",
  "memchr",

Dockerfile (new file)

@@ -0,0 +1,61 @@
ARG PYTHON_VERSION=3.11
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
RUN apt-get update && apt-get install -y \
build-essential \
libffi-dev \
libjpeg-dev \
libpq-dev \
libssl-dev \
libwebp-dev \
libxml++2.6-dev \
libxslt1-dev \
zlib1g-dev \
openssl \
git \
curl \
&& rm -rf /var/lib/apt/lists/*
ENV RUSTUP_HOME=/rust
ENV CARGO_HOME=/cargo
ENV PATH=/cargo/bin:/rust/bin:$PATH
RUN mkdir /rust /cargo
RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable
COPY synapse /synapse/synapse/
COPY rust /synapse/rust/
COPY README.rst pyproject.toml requirements.txt build_rust.py /synapse/
RUN pip install --prefix="/install" --no-warn-script-location --ignore-installed \
--no-deps -r /synapse/requirements.txt \
&& pip install --prefix="/install" --no-warn-script-location \
--no-deps \
'git+https://github.com/maunium/synapse-simple-antispam#egg=synapse-simple-antispam' \
'git+https://github.com/devture/matrix-synapse-shared-secret-auth@2.0.3#egg=shared_secret_authenticator' \
&& pip install --prefix="/install" --no-warn-script-location \
--no-deps /synapse
FROM docker.io/python:${PYTHON_VERSION}-slim
RUN apt-get update && apt-get install -y \
curl \
libjpeg62-turbo \
libpq5 \
libwebp7 \
xmlsec1 \
libjemalloc2 \
openssl \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /install /usr/local
VOLUME ["/data"]
ENV LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libjemalloc.so.2"
ENTRYPOINT ["python3", "-m", "synapse.app.homeserver"]
CMD ["--keys-directory", "/data", "-c", "/data/homeserver.yaml"]
HEALTHCHECK --start-period=5s --interval=1m --timeout=5s \
CMD curl -fSs http://localhost:8008/health || exit 1

README.md (new file)

@@ -0,0 +1,63 @@
# Maunium Synapse
This is a fork of [Synapse] to remove dumb limits and fix bugs that the
upstream devs don't want to fix.
The only official distribution is the docker image in the [GitLab container
registry], but you can also install from source ([upstream instructions]).
The master branch and `:latest` docker tag are upgraded to each upstream
release candidate very soon after release (usually within 10 minutes†). There
are also docker tags for each release, e.g. `:1.75.0`. If you don't want RCs,
use the specific release tags.
†If there are merge conflicts, the update may be delayed for up to a few days
after the full release.
[Synapse]: https://github.com/matrix-org/synapse
[GitLab container registry]: https://mau.dev/maunium/synapse/container_registry
[upstream instructions]: https://github.com/matrix-org/synapse/blob/develop/INSTALL.md#installing-from-source
## List of changes
* Default power level for room creator is 9001 instead of 100.
* Room creator can specify a custom room ID with the `room_id` param in the
request body. If the room ID is already in use, it will return `M_CONFLICT`.
* ~~URL previewer user agent includes `Bot` so Twitter previews work properly.~~
Upstreamed after over 2 years 🎉
* ~~Local event creation concurrency is disabled to avoid unnecessary state
resolution.~~ Upstreamed after over 3 years 🎉
* Register admin API can register invalid user IDs.
* Docker image with jemalloc enabled by default.
* Config option to allow specific users to send events without unnecessary
validation.
* Config option to allow specific users to receive events that are usually
filtered away (e.g. `org.matrix.dummy_event` and `m.room.aliases`).
* Config option to allow specific users to use timestamp massaging without
being appservice users.
* Removed bad pusher URL validation.
* webp images are thumbnailed to webp instead of jpeg to avoid losing
transparency.
* Media repo `Cache-Control` header says `immutable` and 1 year for all media
that exists, as media IDs in Matrix are immutable.
* Allowed sending custom data with read receipts.
You can view the full list of changes on the [meow-patchset] branch.
Additionally, historical patch sets are saved as `meow-patchset-vX` [tags].
[meow-patchset]: https://mau.dev/maunium/synapse/-/compare/patchset-base...meow-patchset
[tags]: https://mau.dev/maunium/synapse/-/tags?search=meow-patchset&sort=updated_desc
## Configuration reference
```yaml
meow:
# List of users who aren't subject to unnecessary validation in the C-S API.
validation_override:
- "@you:example.com"
# List of users who will get org.matrix.dummy_event and m.room.aliases events down /sync
filter_override:
- "@you:example.com"
# Whether or not the admin API should be able to register invalid user IDs.
admin_api_register_invalid: true
# List of users who can use timestamp massaging without being appservices
timestamp_override:
- "@you:example.com"
```
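For illustration, creating a room with a chosen ID could look like the following (the endpoint and auth are standard Matrix; the `room_id` body parameter is the fork extension described above, and the server name and token are placeholders):

```python
import requests

resp = requests.post(
    "https://example.com/_matrix/client/v3/createRoom",
    headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},
    json={
        "name": "My room",
        # Fork-specific: request a specific room ID.
        # Expect an M_CONFLICT error if it is already in use.
        "room_id": "!mycustomid:example.com",
    },
)
resp.raise_for_status()
print(resp.json()["room_id"])
```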

@@ -158,7 +158,7 @@ it:
 We **strongly** recommend using a CAPTCHA, particularly if your homeserver is exposed to
 the public internet. Without it, anyone can freely register accounts on your homeserver.
-This can be exploited by attackers to create spambots targetting the rest of the Matrix
+This can be exploited by attackers to create spambots targeting the rest of the Matrix
 federation.

 Your new user name will be formed partly from the ``server_name``, and partly

@@ -20,8 +20,8 @@
 #
 import argparse
-import cgi
 import datetime
+import html
 import json
 import urllib.request
 from typing import List

@@ -85,7 +85,7 @@ def make_graph(pdus: List[dict], filename_prefix: str) -> None:
         "name": name,
         "type": pdu.get("pdu_type"),
         "state_key": pdu.get("state_key"),
-        "content": cgi.escape(json.dumps(pdu.get("content")), quote=True),
+        "content": html.escape(json.dumps(pdu.get("content")), quote=True),
         "time": t,
         "depth": pdu.get("depth"),
     }

debian/changelog (vendored)

@@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.116.0~rc1) stable; urgency=medium
+
+  * New synapse release 1.116.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 25 Sep 2024 09:34:07 +0000
+
 matrix-synapse-py3 (1.115.0) stable; urgency=medium

   * New Synapse release 1.115.0.

@@ -111,6 +111,9 @@ server_notices:
   system_mxid_avatar_url: ""
   room_name: "Server Alert"

+# Enable delayed events (msc4140)
+max_event_delay_duration: 24h
+
 # Disable sync cache so that initial `/sync` requests are up-to-date.
 caches:

@@ -1361,3 +1361,83 @@ Returns a `404` HTTP status code if no user was found, with a response body like
```

_Added in Synapse 1.72.0._
## Redact all the events of a user
The API is
```
POST /_synapse/admin/v1/user/$user_id/redact
{
"rooms": ["!roomid1", "!roomid2"]
}
```
If an empty list is provided as the value for `rooms`, all events in all the rooms the user is a member of will be redacted;
otherwise, all the events in the rooms provided in the request will be redacted.
The API starts the redaction process running and returns immediately with a JSON body containing
a redact id, which can be used to query the status of the redaction process:
```json
{
"redact_id": "<opaque id>"
}
```
**Parameters**
The following parameters should be set in the URL:
- `user_id` - The fully qualified MXID of the user: for example, `@user:server.com`.
The following JSON body parameter must be provided:
- `rooms` - A list of rooms to redact the user's events in. If an empty list is provided, all events in all rooms
the user is a member of will be redacted.

The following JSON body parameters are optional:
- `reason` - The reason the redaction is being requested, e.g. "spam", "abuse", etc. This will be included in each redaction event, and be visible to users.
- `limit` - A limit on the number of the user's events to search (per room, newest to oldest) for ones that can be redacted. Defaults to 1000 if not provided.

_Added in Synapse 1.116.0._
## Check the status of a redaction process
It is possible to query the status of the background task for redacting a user's events.
The status can be queried up to 24 hours after completion of the task,
or until Synapse is restarted (whichever happens first).
The API is:
```
GET /_synapse/admin/v1/user/redact_status/$redact_id
```
A response body like the following is returned:
```json
{
  "status": "active",
  "failed_redactions": []
}
```
**Parameters**
The following parameters should be set in the URL:
* `redact_id` - string - The ID for this redaction process, provided when the redaction was requested.
**Response**
The following fields are returned in the JSON response body:
- `status` - string - one of `scheduled`/`active`/`completed`/`failed`, indicating the status of the redaction job.
- `failed_redactions` - dictionary - the keys are the event IDs the process was unable to redact, if any, and the values are
the corresponding errors that caused those redactions to fail.
_Added in Synapse 1.116.0._
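Putting the two endpoints together, a minimal admin script might look like this (homeserver URL, user ID, and token are placeholders):

```python
import time
import requests

BASE = "https://example.com"
HEADERS = {"Authorization": "Bearer ADMIN_ACCESS_TOKEN"}

# Kick off a redaction of all the user's events in every room they are in.
resp = requests.post(
    f"{BASE}/_synapse/admin/v1/user/@spammer:example.com/redact",
    headers=HEADERS,
    json={"rooms": [], "reason": "spam"},
)
redact_id = resp.json()["redact_id"]

# Poll the status endpoint until the background task finishes.
while True:
    status = requests.get(
        f"{BASE}/_synapse/admin/v1/user/redact_status/{redact_id}",
        headers=HEADERS,
    ).json()
    if status["status"] in ("completed", "failed"):
        break
    time.sleep(5)

print(status["status"], status.get("failed_redactions"))
```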

@@ -761,6 +761,19 @@ email:
   password_reset: "[%(server_name)s] Password reset"
   email_validation: "[%(server_name)s] Validate your email"
 ```
---
### `max_event_delay_duration`
The maximum allowed duration by which sent events can be delayed, as per
[MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140).
Must be a positive value if set.
Defaults to no duration (`null`), which disallows sending delayed events.
Example configuration:
```yaml
max_event_delay_duration: 24h
```
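For context, MSC4140 proposes sending a delayed event by adding a delay (in milliseconds) to the normal send endpoint. A sketch, assuming the unstable `org.matrix.msc4140.delay` query parameter from the MSC (not yet a stable API):

```python
import uuid
import requests

txn_id = uuid.uuid4()
resp = requests.put(
    "https://example.com/_matrix/client/v3/rooms/!room:example.com"
    f"/send/m.room.message/{txn_id}",
    headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},
    params={"org.matrix.msc4140.delay": 15000},  # deliver ~15s from now
    json={"msgtype": "m.text", "body": "posted from the future"},
)
# Per the MSC, the response identifies the scheduled (not yet sent) event.
print(resp.json())
```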
## Homeserver blocking
Useful options for Synapse admins.

@@ -2315,6 +2328,22 @@ Example configuration:
```yaml
turn_shared_secret: "YOUR_SHARED_SECRET"
```
---
### `turn_shared_secret_path`
An alternative to [`turn_shared_secret`](#turn_shared_secret):
allows the shared secret to be specified in an external file.
The file should be a plain text file, containing only the shared secret.
Synapse reads the shared secret from the given file once at startup.
Example configuration:
```yaml
turn_shared_secret_path: /path/to/secrets/file
```
_Added in Synapse 1.116.0._
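For background, the shared secret (whether inline or read from this file) is typically used to mint short-lived TURN credentials via the TURN REST API scheme: HMAC-SHA1 over a timestamped username. A sketch of that scheme, not Synapse's exact code:

```python
import base64
import hashlib
import hmac
import time

def turn_credentials(shared_secret: str, user_id: str, ttl: int = 86400):
    """Derive an ephemeral TURN username/password pair from the shared secret."""
    expiry = int(time.time()) + ttl
    username = f"{expiry}:{user_id}"
    digest = hmac.new(
        shared_secret.encode(), username.encode(), hashlib.sha1
    ).digest()
    return username, base64.b64encode(digest).decode()

# Read the secret the same way Synapse would per the option above.
with open("/path/to/secrets/file") as f:
    secret = f.read().strip()
print(turn_credentials(secret, "@you:example.com"))
```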
---
### `turn_username` and `turn_password`

@@ -290,6 +290,7 @@ information.
 Additionally, the following REST endpoints can be handled for GET requests:

     ^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
+    ^/_matrix/client/unstable/org.matrix.msc4140/delayed_events

 Pagination requests can also be handled, but all requests for a given
 room must be routed to the same instance. Additionally, care must be taken to

poetry.lock (generated)

@@ -2,13 +2,13 @@
 [[package]]
 name = "annotated-types"
-version = "0.5.0"
+version = "0.7.0"
 description = "Reusable constraint types to use with typing.Annotated"
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 files = [
-    {file = "annotated_types-0.5.0-py3-none-any.whl", hash = "sha256:58da39888f92c276ad970249761ebea80ba544b77acddaa1a4d6cf78287d45fd"},
-    {file = "annotated_types-0.5.0.tar.gz", hash = "sha256:47cdc3490d9ac1506ce92c7aaa76c579dc3509ff11e098fc867e5130ab7be802"},
+    {file = "annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53"},
+    {file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"},
 ]

 [package.dependencies]
@@ -357,38 +357,38 @@ files = [
 [[package]]
 name = "cryptography"
-version = "43.0.0"
+version = "43.0.1"
 description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
 optional = false
 python-versions = ">=3.7"
 files = [
     [27 wheel/sdist entries: every "cryptography-43.0.0-*" file hash replaced by the corresponding "cryptography-43.0.1-*" one]
 ]

 [package.dependencies]

@@ -401,7 +401,7 @@ nox = ["nox"]
 pep8test = ["check-sdist", "click", "mypy", "ruff"]
 sdist = ["build"]
 ssh = ["bcrypt (>=3.1.5)"]
-test = ["certifi", "cryptography-vectors (==43.0.0)", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
+test = ["certifi", "cryptography-vectors (==43.0.1)", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
 test-randomorder = ["pytest-randomly"]

 [[package]]
@@ -608,15 +608,18 @@ idna = ">=2.5"
 [[package]]
 name = "idna"
-version = "3.8"
+version = "3.10"
 description = "Internationalized Domain Names in Applications (IDNA)"
 optional = false
 python-versions = ">=3.6"
 files = [
-    {file = "idna-3.8-py3-none-any.whl", hash = "sha256:050b4e5baadcd44d760cedbd2b8e639f2ff89bbc7a5730fcc662954303377aac"},
-    {file = "idna-3.8.tar.gz", hash = "sha256:d838c2c0ed6fced7693d5e8ab8e734d5f8fda53a039c0164afb0b82e771e3603"},
+    {file = "idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3"},
+    {file = "idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9"},
 ]
+
+[package.extras]
+all = ["flake8 (>=7.1.1)", "mypy (>=1.11.2)", "pytest (>=8.3.2)", "ruff (>=0.6.2)"]

 [[package]]
 name = "ijson"
 version = "3.3.0"
@@ -1243,67 +1246,75 @@ files = [
 [[package]]
 name = "msgpack"
-version = "1.0.8"
+version = "1.1.0"
 description = "MessagePack serializer"
 optional = false
 python-versions = ">=3.8"
 files = [
     [wheel/sdist entries: every "msgpack-1.0.8-*" file hash replaced by the corresponding "msgpack-1.1.0-*" one; the list is truncated at this point in the view]
{file = "msgpack-1.0.8-cp39-cp39-win32.whl", hash = "sha256:f9af38a89b6a5c04b7d18c492c8ccf2aee7048aff1ce8437c4683bb5a1df893d"}, {file = "msgpack-1.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:7e7b853bbc44fb03fbdba34feb4bd414322180135e2cb5164f20ce1c9795ee48"},
{file = "msgpack-1.0.8-cp39-cp39-win_amd64.whl", hash = "sha256:ed59dd52075f8fc91da6053b12e8c89e37aa043f8986efd89e61fae69dc1b011"}, {file = "msgpack-1.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f3e9b4936df53b970513eac1758f3882c88658a220b58dcc1e39606dccaaf01c"},
{file = "msgpack-1.0.8.tar.gz", hash = "sha256:95c02b0e27e706e48d0e5426d1710ca78e0f0628d6e89d5b5a5b91a5f12274f3"}, {file = "msgpack-1.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:46c34e99110762a76e3911fc923222472c9d681f1094096ac4102c18319e6468"},
{file = "msgpack-1.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8a706d1e74dd3dea05cb54580d9bd8b2880e9264856ce5068027eed09680aa74"},
{file = "msgpack-1.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:534480ee5690ab3cbed89d4c8971a5c631b69a8c0883ecfea96c19118510c846"},
{file = "msgpack-1.1.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:8cf9e8c3a2153934a23ac160cc4cba0ec035f6867c8013cc6077a79823370346"},
{file = "msgpack-1.1.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:3180065ec2abbe13a4ad37688b61b99d7f9e012a535b930e0e683ad6bc30155b"},
{file = "msgpack-1.1.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:c5a91481a3cc573ac8c0d9aace09345d989dc4a0202b7fcb312c88c26d4e71a8"},
{file = "msgpack-1.1.0-cp39-cp39-win32.whl", hash = "sha256:f80bc7d47f76089633763f952e67f8214cb7b3ee6bfa489b3cb6a84cfac114cd"},
{file = "msgpack-1.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:4d1b7ff2d6146e16e8bd665ac726a89c74163ef8cd39fa8c1087d4e52d3a2325"},
{file = "msgpack-1.1.0.tar.gz", hash = "sha256:dd432ccc2c72b914e4cb77afce64aab761c1137cc698be3984eee260bcb2896e"},
] ]
[[package]] [[package]]
@@ -1436,13 +1447,13 @@ dev = ["jinja2"]
[[package]] [[package]]
name = "phonenumbers" name = "phonenumbers"
version = "8.13.44" version = "8.13.45"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers." description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false optional = false
python-versions = "*" python-versions = "*"
files = [ files = [
{file = "phonenumbers-8.13.44-py2.py3-none-any.whl", hash = "sha256:52cd02865dab1428ca9e89d442629b61d407c7dc687cfb80a3e8d068a584513c"}, {file = "phonenumbers-8.13.45-py2.py3-none-any.whl", hash = "sha256:bf05ec20fcd13f0d53e43a34ed7bd1c8be26a72b88fce4b8c64fca5b4641987a"},
{file = "phonenumbers-8.13.44.tar.gz", hash = "sha256:2175021e84ee4e41b43c890f2d0af51f18c6ca9ad525886d6d6e4ea882e46fac"}, {file = "phonenumbers-8.13.45.tar.gz", hash = "sha256:53679a95b6060fd5e15467759252c87933d8566d6a5be00995a579eb0e02435b"},
] ]
[[package]] [[package]]
@@ -1569,13 +1580,13 @@ files = [
[[package]] [[package]]
name = "prometheus-client" name = "prometheus-client"
version = "0.20.0" version = "0.21.0"
description = "Python client for the Prometheus monitoring system." description = "Python client for the Prometheus monitoring system."
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "prometheus_client-0.20.0-py3-none-any.whl", hash = "sha256:cde524a85bce83ca359cc837f28b8c0db5cac7aa653a588fd7e84ba061c329e7"}, {file = "prometheus_client-0.21.0-py3-none-any.whl", hash = "sha256:4fa6b4dd0ac16d58bb587c04b1caae65b8c5043e85f778f42f5f632f6af2e166"},
{file = "prometheus_client-0.20.0.tar.gz", hash = "sha256:287629d00b147a32dcb2be0b9df905da599b2d82f80377083ec8463309a4bb89"}, {file = "prometheus_client-0.21.0.tar.gz", hash = "sha256:96c83c606b71ff2b0a433c98889d275f51ffec6c5e267de37c7a2b5c9aa9233e"},
] ]
[package.extras] [package.extras]
@@ -1632,24 +1643,24 @@ psycopg2 = "*"
[[package]] [[package]]
name = "pyasn1" name = "pyasn1"
version = "0.6.0" version = "0.6.1"
description = "Pure-Python implementation of ASN.1 types and DER/BER/CER codecs (X.208)" description = "Pure-Python implementation of ASN.1 types and DER/BER/CER codecs (X.208)"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "pyasn1-0.6.0-py2.py3-none-any.whl", hash = "sha256:cca4bb0f2df5504f02f6f8a775b6e416ff9b0b3b16f7ee80b5a3153d9b804473"}, {file = "pyasn1-0.6.1-py3-none-any.whl", hash = "sha256:0d632f46f2ba09143da3a8afe9e33fb6f92fa2320ab7e886e2d0f7672af84629"},
{file = "pyasn1-0.6.0.tar.gz", hash = "sha256:3a35ab2c4b5ef98e17dfdec8ab074046fbda76e281c5a706ccd82328cfc8f64c"}, {file = "pyasn1-0.6.1.tar.gz", hash = "sha256:6f580d2bdd84365380830acf45550f2511469f673cb4a5ae3857a3170128b034"},
] ]
[[package]] [[package]]
name = "pyasn1-modules" name = "pyasn1-modules"
version = "0.4.0" version = "0.4.1"
description = "A collection of ASN.1-based protocols modules" description = "A collection of ASN.1-based protocols modules"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "pyasn1_modules-0.4.0-py3-none-any.whl", hash = "sha256:be04f15b66c206eed667e0bb5ab27e2b1855ea54a842e5037738099e8ca4ae0b"}, {file = "pyasn1_modules-0.4.1-py3-none-any.whl", hash = "sha256:49bfa96b45a292b711e986f222502c1c9a5e1f4e568fc30e2574a6c7d07838fd"},
{file = "pyasn1_modules-0.4.0.tar.gz", hash = "sha256:831dbcea1b177b28c9baddf4c6d1013c24c3accd14a1873fffaa6a2e905f17b6"}, {file = "pyasn1_modules-0.4.1.tar.gz", hash = "sha256:c28e2dbf9c06ad61c71a075c7e0f9fd0f1b0bb2d2ad4377f240d33ac2ab60a7c"},
] ]
[package.dependencies] [package.dependencies]
@@ -1668,18 +1679,18 @@ files = [
[[package]] [[package]]
name = "pydantic" name = "pydantic"
version = "2.8.2" version = "2.9.2"
description = "Data validation using Python type hints" description = "Data validation using Python type hints"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "pydantic-2.8.2-py3-none-any.whl", hash = "sha256:73ee9fddd406dc318b885c7a2eab8a6472b68b8fb5ba8150949fc3db939f23c8"}, {file = "pydantic-2.9.2-py3-none-any.whl", hash = "sha256:f048cec7b26778210e28a0459867920654d48e5e62db0958433636cde4254f12"},
{file = "pydantic-2.8.2.tar.gz", hash = "sha256:6f62c13d067b0755ad1c21a34bdd06c0c12625a22b0fc09c6b149816604f7c2a"}, {file = "pydantic-2.9.2.tar.gz", hash = "sha256:d155cef71265d1e9807ed1c32b4c8deec042a44a50a4188b25ac67ecd81a9c0f"},
] ]
[package.dependencies] [package.dependencies]
annotated-types = ">=0.4.0" annotated-types = ">=0.6.0"
pydantic-core = "2.20.1" pydantic-core = "2.23.4"
typing-extensions = [ typing-extensions = [
{version = ">=4.12.2", markers = "python_version >= \"3.13\""}, {version = ">=4.12.2", markers = "python_version >= \"3.13\""},
{version = ">=4.6.1", markers = "python_version < \"3.13\""}, {version = ">=4.6.1", markers = "python_version < \"3.13\""},
@@ -1687,103 +1698,104 @@ typing-extensions = [
[package.extras] [package.extras]
email = ["email-validator (>=2.0.0)"] email = ["email-validator (>=2.0.0)"]
timezone = ["tzdata"]
[[package]] [[package]]
name = "pydantic-core" name = "pydantic-core"
version = "2.20.1" version = "2.23.4"
description = "Core functionality for Pydantic validation and serialization" description = "Core functionality for Pydantic validation and serialization"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "pydantic_core-2.20.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:3acae97ffd19bf091c72df4d726d552c473f3576409b2a7ca36b2f535ffff4a3"}, {file = "pydantic_core-2.23.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:b10bd51f823d891193d4717448fab065733958bdb6a6b351967bd349d48d5c9b"},
{file = "pydantic_core-2.20.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:41f4c96227a67a013e7de5ff8f20fb496ce573893b7f4f2707d065907bffdbd6"}, {file = "pydantic_core-2.23.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4fc714bdbfb534f94034efaa6eadd74e5b93c8fa6315565a222f7b6f42ca1166"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5f239eb799a2081495ea659d8d4a43a8f42cd1fe9ff2e7e436295c38a10c286a"}, {file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:63e46b3169866bd62849936de036f901a9356e36376079b05efa83caeaa02ceb"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:53e431da3fc53360db73eedf6f7124d1076e1b4ee4276b36fb25514544ceb4a3"}, {file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ed1a53de42fbe34853ba90513cea21673481cd81ed1be739f7f2efb931b24916"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f1f62b2413c3a0e846c3b838b2ecd6c7a19ec6793b2a522745b0869e37ab5bc1"}, {file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cfdd16ab5e59fc31b5e906d1a3f666571abc367598e3e02c83403acabc092e07"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5d41e6daee2813ecceea8eda38062d69e280b39df793f5a942fa515b8ed67953"}, {file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:255a8ef062cbf6674450e668482456abac99a5583bbafb73f9ad469540a3a232"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d482efec8b7dc6bfaedc0f166b2ce349df0011f5d2f1f25537ced4cfc34fd98"}, {file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4a7cd62e831afe623fbb7aabbb4fe583212115b3ef38a9f6b71869ba644624a2"},
{file = "pydantic_core-2.20.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e93e1a4b4b33daed65d781a57a522ff153dcf748dee70b40c7258c5861e1768a"}, {file = "pydantic_core-2.23.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f09e2ff1f17c2b51f2bc76d1cc33da96298f0a036a137f5440ab3ec5360b624f"},
{file = "pydantic_core-2.20.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e7c4ea22b6739b162c9ecaaa41d718dfad48a244909fe7ef4b54c0b530effc5a"}, {file = "pydantic_core-2.23.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e38e63e6f3d1cec5a27e0afe90a085af8b6806ee208b33030e65b6516353f1a3"},
{file = "pydantic_core-2.20.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:4f2790949cf385d985a31984907fecb3896999329103df4e4983a4a41e13e840"}, {file = "pydantic_core-2.23.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:0dbd8dbed2085ed23b5c04afa29d8fd2771674223135dc9bc937f3c09284d071"},
{file = "pydantic_core-2.20.1-cp310-none-win32.whl", hash = "sha256:5e999ba8dd90e93d57410c5e67ebb67ffcaadcea0ad973240fdfd3a135506250"}, {file = "pydantic_core-2.23.4-cp310-none-win32.whl", hash = "sha256:6531b7ca5f951d663c339002e91aaebda765ec7d61b7d1e3991051906ddde119"},
{file = "pydantic_core-2.20.1-cp310-none-win_amd64.whl", hash = "sha256:512ecfbefef6dac7bc5eaaf46177b2de58cdf7acac8793fe033b24ece0b9566c"}, {file = "pydantic_core-2.23.4-cp310-none-win_amd64.whl", hash = "sha256:7c9129eb40958b3d4500fa2467e6a83356b3b61bfff1b414c7361d9220f9ae8f"},
{file = "pydantic_core-2.20.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:d2a8fa9d6d6f891f3deec72f5cc668e6f66b188ab14bb1ab52422fe8e644f312"}, {file = "pydantic_core-2.23.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:77733e3892bb0a7fa797826361ce8a9184d25c8dffaec60b7ffe928153680ba8"},
{file = "pydantic_core-2.20.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:175873691124f3d0da55aeea1d90660a6ea7a3cfea137c38afa0a5ffabe37b88"}, {file = "pydantic_core-2.23.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1b84d168f6c48fabd1f2027a3d1bdfe62f92cade1fb273a5d68e621da0e44e6d"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:37eee5b638f0e0dcd18d21f59b679686bbd18917b87db0193ae36f9c23c355fc"}, {file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:df49e7a0861a8c36d089c1ed57d308623d60416dab2647a4a17fe050ba85de0e"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:25e9185e2d06c16ee438ed39bf62935ec436474a6ac4f9358524220f1b236e43"}, {file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ff02b6d461a6de369f07ec15e465a88895f3223eb75073ffea56b84d9331f607"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:150906b40ff188a3260cbee25380e7494ee85048584998c1e66df0c7a11c17a6"}, {file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:996a38a83508c54c78a5f41456b0103c30508fed9abcad0a59b876d7398f25fd"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8ad4aeb3e9a97286573c03df758fc7627aecdd02f1da04516a86dc159bf70121"}, {file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d97683ddee4723ae8c95d1eddac7c192e8c552da0c73a925a89fa8649bf13eea"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3f3ed29cd9f978c604708511a1f9c2fdcb6c38b9aae36a51905b8811ee5cbf1"}, {file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:216f9b2d7713eb98cb83c80b9c794de1f6b7e3145eef40400c62e86cee5f4e1e"},
{file = "pydantic_core-2.20.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0dae11d8f5ded51699c74d9548dcc5938e0804cc8298ec0aa0da95c21fff57b"}, {file = "pydantic_core-2.23.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6f783e0ec4803c787bcea93e13e9932edab72068f68ecffdf86a99fd5918878b"},
{file = "pydantic_core-2.20.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:faa6b09ee09433b87992fb5a2859efd1c264ddc37280d2dd5db502126d0e7f27"}, {file = "pydantic_core-2.23.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:d0776dea117cf5272382634bd2a5c1b6eb16767c223c6a5317cd3e2a757c61a0"},
{file = "pydantic_core-2.20.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9dc1b507c12eb0481d071f3c1808f0529ad41dc415d0ca11f7ebfc666e66a18b"}, {file = "pydantic_core-2.23.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d5f7a395a8cf1621939692dba2a6b6a830efa6b3cee787d82c7de1ad2930de64"},
{file = "pydantic_core-2.20.1-cp311-none-win32.whl", hash = "sha256:fa2fddcb7107e0d1808086ca306dcade7df60a13a6c347a7acf1ec139aa6789a"}, {file = "pydantic_core-2.23.4-cp311-none-win32.whl", hash = "sha256:74b9127ffea03643e998e0c5ad9bd3811d3dac8c676e47db17b0ee7c3c3bf35f"},
{file = "pydantic_core-2.20.1-cp311-none-win_amd64.whl", hash = "sha256:40a783fb7ee353c50bd3853e626f15677ea527ae556429453685ae32280c19c2"}, {file = "pydantic_core-2.23.4-cp311-none-win_amd64.whl", hash = "sha256:98d134c954828488b153d88ba1f34e14259284f256180ce659e8d83e9c05eaa3"},
{file = "pydantic_core-2.20.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:595ba5be69b35777474fa07f80fc260ea71255656191adb22a8c53aba4479231"}, {file = "pydantic_core-2.23.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:f3e0da4ebaef65158d4dfd7d3678aad692f7666877df0002b8a522cdf088f231"},
{file = "pydantic_core-2.20.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a4f55095ad087474999ee28d3398bae183a66be4823f753cd7d67dd0153427c9"}, {file = "pydantic_core-2.23.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f69a8e0b033b747bb3e36a44e7732f0c99f7edd5cea723d45bc0d6e95377ffee"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9aa05d09ecf4c75157197f27cdc9cfaeb7c5f15021c6373932bf3e124af029f"}, {file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:723314c1d51722ab28bfcd5240d858512ffd3116449c557a1336cbe3919beb87"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e97fdf088d4b31ff4ba35db26d9cc472ac7ef4a2ff2badeabf8d727b3377fc52"}, {file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bb2802e667b7051a1bebbfe93684841cc9351004e2badbd6411bf357ab8d5ac8"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bc633a9fe1eb87e250b5c57d389cf28998e4292336926b0b6cdaee353f89a237"}, {file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d18ca8148bebe1b0a382a27a8ee60350091a6ddaf475fa05ef50dc35b5df6327"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d573faf8eb7e6b1cbbcb4f5b247c60ca8be39fe2c674495df0eb4318303137fe"}, {file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33e3d65a85a2a4a0dc3b092b938a4062b1a05f3a9abde65ea93b233bca0e03f2"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:26dc97754b57d2fd00ac2b24dfa341abffc380b823211994c4efac7f13b9e90e"}, {file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:128585782e5bfa515c590ccee4b727fb76925dd04a98864182b22e89a4e6ed36"},
{file = "pydantic_core-2.20.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:33499e85e739a4b60c9dac710c20a08dc73cb3240c9a0e22325e671b27b70d24"}, {file = "pydantic_core-2.23.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:68665f4c17edcceecc112dfed5dbe6f92261fb9d6054b47d01bf6371a6196126"},
{file = "pydantic_core-2.20.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:bebb4d6715c814597f85297c332297c6ce81e29436125ca59d1159b07f423eb1"}, {file = "pydantic_core-2.23.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:20152074317d9bed6b7a95ade3b7d6054845d70584216160860425f4fbd5ee9e"},
{file = "pydantic_core-2.20.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:516d9227919612425c8ef1c9b869bbbee249bc91912c8aaffb66116c0b447ebd"}, {file = "pydantic_core-2.23.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:9261d3ce84fa1d38ed649c3638feefeae23d32ba9182963e465d58d62203bd24"},
{file = "pydantic_core-2.20.1-cp312-none-win32.whl", hash = "sha256:469f29f9093c9d834432034d33f5fe45699e664f12a13bf38c04967ce233d688"}, {file = "pydantic_core-2.23.4-cp312-none-win32.whl", hash = "sha256:4ba762ed58e8d68657fc1281e9bb72e1c3e79cc5d464be146e260c541ec12d84"},
{file = "pydantic_core-2.20.1-cp312-none-win_amd64.whl", hash = "sha256:035ede2e16da7281041f0e626459bcae33ed998cca6a0a007a5ebb73414ac72d"}, {file = "pydantic_core-2.23.4-cp312-none-win_amd64.whl", hash = "sha256:97df63000f4fea395b2824da80e169731088656d1818a11b95f3b173747b6cd9"},
{file = "pydantic_core-2.20.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:0827505a5c87e8aa285dc31e9ec7f4a17c81a813d45f70b1d9164e03a813a686"}, {file = "pydantic_core-2.23.4-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:7530e201d10d7d14abce4fb54cfe5b94a0aefc87da539d0346a484ead376c3cc"},
{file = "pydantic_core-2.20.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:19c0fa39fa154e7e0b7f82f88ef85faa2a4c23cc65aae2f5aea625e3c13c735a"}, {file = "pydantic_core-2.23.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:df933278128ea1cd77772673c73954e53a1c95a4fdf41eef97c2b779271bd0bd"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa223cd1e36b642092c326d694d8bf59b71ddddc94cdb752bbbb1c5c91d833b"}, {file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0cb3da3fd1b6a5d0279a01877713dbda118a2a4fc6f0d821a57da2e464793f05"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c336a6d235522a62fef872c6295a42ecb0c4e1d0f1a3e500fe949415761b8a19"}, {file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:42c6dcb030aefb668a2b7009c85b27f90e51e6a3b4d5c9bc4c57631292015b0d"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7eb6a0587eded33aeefea9f916899d42b1799b7b14b8f8ff2753c0ac1741edac"}, {file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:696dd8d674d6ce621ab9d45b205df149399e4bb9aa34102c970b721554828510"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:70c8daf4faca8da5a6d655f9af86faf6ec2e1768f4b8b9d0226c02f3d6209703"}, {file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2971bb5ffe72cc0f555c13e19b23c85b654dd2a8f7ab493c262071377bfce9f6"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e9fa4c9bf273ca41f940bceb86922a7667cd5bf90e95dbb157cbb8441008482c"}, {file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8394d940e5d400d04cad4f75c0598665cbb81aecefaca82ca85bd28264af7f9b"},
{file = "pydantic_core-2.20.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:11b71d67b4725e7e2a9f6e9c0ac1239bbc0c48cce3dc59f98635efc57d6dac83"}, {file = "pydantic_core-2.23.4-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0dff76e0602ca7d4cdaacc1ac4c005e0ce0dcfe095d5b5259163a80d3a10d327"},
{file = "pydantic_core-2.20.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:270755f15174fb983890c49881e93f8f1b80f0b5e3a3cc1394a255706cabd203"}, {file = "pydantic_core-2.23.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7d32706badfe136888bdea71c0def994644e09fff0bfe47441deaed8e96fdbc6"},
{file = "pydantic_core-2.20.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:c81131869240e3e568916ef4c307f8b99583efaa60a8112ef27a366eefba8ef0"}, {file = "pydantic_core-2.23.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ed541d70698978a20eb63d8c5d72f2cc6d7079d9d90f6b50bad07826f1320f5f"},
{file = "pydantic_core-2.20.1-cp313-none-win32.whl", hash = "sha256:b91ced227c41aa29c672814f50dbb05ec93536abf8f43cd14ec9521ea09afe4e"}, {file = "pydantic_core-2.23.4-cp313-none-win32.whl", hash = "sha256:3d5639516376dce1940ea36edf408c554475369f5da2abd45d44621cb616f769"},
{file = "pydantic_core-2.20.1-cp313-none-win_amd64.whl", hash = "sha256:65db0f2eefcaad1a3950f498aabb4875c8890438bc80b19362cf633b87a8ab20"}, {file = "pydantic_core-2.23.4-cp313-none-win_amd64.whl", hash = "sha256:5a1504ad17ba4210df3a045132a7baeeba5a200e930f57512ee02909fc5c4cb5"},
{file = "pydantic_core-2.20.1-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:4745f4ac52cc6686390c40eaa01d48b18997cb130833154801a442323cc78f91"}, {file = "pydantic_core-2.23.4-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:d4488a93b071c04dc20f5cecc3631fc78b9789dd72483ba15d423b5b3689b555"},
{file = "pydantic_core-2.20.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a8ad4c766d3f33ba8fd692f9aa297c9058970530a32c728a2c4bfd2616d3358b"}, {file = "pydantic_core-2.23.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:81965a16b675b35e1d09dd14df53f190f9129c0202356ed44ab2728b1c905658"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41e81317dd6a0127cabce83c0c9c3fbecceae981c8391e6f1dec88a77c8a569a"}, {file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ffa2ebd4c8530079140dd2d7f794a9d9a73cbb8e9d59ffe24c63436efa8f271"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:04024d270cf63f586ad41fff13fde4311c4fc13ea74676962c876d9577bcc78f"}, {file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:61817945f2fe7d166e75fbfb28004034b48e44878177fc54d81688e7b85a3665"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:eaad4ff2de1c3823fddf82f41121bdf453d922e9a238642b1dedb33c4e4f98ad"}, {file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:29d2c342c4bc01b88402d60189f3df065fb0dda3654744d5a165a5288a657368"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:26ab812fa0c845df815e506be30337e2df27e88399b985d0bb4e3ecfe72df31c"}, {file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5e11661ce0fd30a6790e8bcdf263b9ec5988e95e63cf901972107efc49218b13"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3c5ebac750d9d5f2706654c638c041635c385596caf68f81342011ddfa1e5598"}, {file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d18368b137c6295db49ce7218b1a9ba15c5bc254c96d7c9f9e924a9bc7825ad"},
{file = "pydantic_core-2.20.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2aafc5a503855ea5885559eae883978c9b6d8c8993d67766ee73d82e841300dd"}, {file = "pydantic_core-2.23.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ec4e55f79b1c4ffb2eecd8a0cfba9955a2588497d96851f4c8f99aa4a1d39b12"},
{file = "pydantic_core-2.20.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:4868f6bd7c9d98904b748a2653031fc9c2f85b6237009d475b1008bfaeb0a5aa"}, {file = "pydantic_core-2.23.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:374a5e5049eda9e0a44c696c7ade3ff355f06b1fe0bb945ea3cac2bc336478a2"},
{file = "pydantic_core-2.20.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:aa2f457b4af386254372dfa78a2eda2563680d982422641a85f271c859df1987"}, {file = "pydantic_core-2.23.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:5c364564d17da23db1106787675fc7af45f2f7b58b4173bfdd105564e132e6fb"},
{file = "pydantic_core-2.20.1-cp38-none-win32.whl", hash = "sha256:225b67a1f6d602de0ce7f6c1c3ae89a4aa25d3de9be857999e9124f15dab486a"}, {file = "pydantic_core-2.23.4-cp38-none-win32.whl", hash = "sha256:d7a80d21d613eec45e3d41eb22f8f94ddc758a6c4720842dc74c0581f54993d6"},
{file = "pydantic_core-2.20.1-cp38-none-win_amd64.whl", hash = "sha256:6b507132dcfc0dea440cce23ee2182c0ce7aba7054576efc65634f080dbe9434"}, {file = "pydantic_core-2.23.4-cp38-none-win_amd64.whl", hash = "sha256:5f5ff8d839f4566a474a969508fe1c5e59c31c80d9e140566f9a37bba7b8d556"},
{file = "pydantic_core-2.20.1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:b03f7941783b4c4a26051846dea594628b38f6940a2fdc0df00b221aed39314c"}, {file = "pydantic_core-2.23.4-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:a4fa4fc04dff799089689f4fd502ce7d59de529fc2f40a2c8836886c03e0175a"},
{file = "pydantic_core-2.20.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1eedfeb6089ed3fad42e81a67755846ad4dcc14d73698c120a82e4ccf0f1f9f6"}, {file = "pydantic_core-2.23.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0a7df63886be5e270da67e0966cf4afbae86069501d35c8c1b3b6c168f42cb36"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:635fee4e041ab9c479e31edda27fcf966ea9614fff1317e280d99eb3e5ab6fe2"}, {file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dcedcd19a557e182628afa1d553c3895a9f825b936415d0dbd3cd0bbcfd29b4b"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:77bf3ac639c1ff567ae3b47f8d4cc3dc20f9966a2a6dd2311dcc055d3d04fb8a"}, {file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5f54b118ce5de9ac21c363d9b3caa6c800341e8c47a508787e5868c6b79c9323"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7ed1b0132f24beeec5a78b67d9388656d03e6a7c837394f99257e2d55b461611"}, {file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:86d2f57d3e1379a9525c5ab067b27dbb8a0642fb5d454e17a9ac434f9ce523e3"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c6514f963b023aeee506678a1cf821fe31159b925c4b76fe2afa94cc70b3222b"}, {file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:de6d1d1b9e5101508cb37ab0d972357cac5235f5c6533d1071964c47139257df"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10d4204d8ca33146e761c79f83cc861df20e7ae9f6487ca290a97702daf56006"}, {file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1278e0d324f6908e872730c9102b0112477a7f7cf88b308e4fc36ce1bdb6d58c"},
{file = "pydantic_core-2.20.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2d036c7187b9422ae5b262badb87a20a49eb6c5238b2004e96d4da1231badef1"}, {file = "pydantic_core-2.23.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9a6b5099eeec78827553827f4c6b8615978bb4b6a88e5d9b93eddf8bb6790f55"},
{file = "pydantic_core-2.20.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9ebfef07dbe1d93efb94b4700f2d278494e9162565a54f124c404a5656d7ff09"}, {file = "pydantic_core-2.23.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:e55541f756f9b3ee346b840103f32779c695a19826a4c442b7954550a0972040"},
{file = "pydantic_core-2.20.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:6b9d9bb600328a1ce523ab4f454859e9d439150abb0906c5a1983c146580ebab"}, {file = "pydantic_core-2.23.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a5c7ba8ffb6d6f8f2ab08743be203654bb1aaa8c9dcb09f82ddd34eadb695605"},
{file = "pydantic_core-2.20.1-cp39-none-win32.whl", hash = "sha256:784c1214cb6dd1e3b15dd8b91b9a53852aed16671cc3fbe4786f4f1db07089e2"}, {file = "pydantic_core-2.23.4-cp39-none-win32.whl", hash = "sha256:37b0fe330e4a58d3c58b24d91d1eb102aeec675a3db4c292ec3928ecd892a9a6"},
{file = "pydantic_core-2.20.1-cp39-none-win_amd64.whl", hash = "sha256:d2fe69c5434391727efa54b47a1e7986bb0186e72a41b203df8f5b0a19a4f669"}, {file = "pydantic_core-2.23.4-cp39-none-win_amd64.whl", hash = "sha256:1498bec4c05c9c787bde9125cfdcc63a41004ff167f495063191b863399b1a29"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:a45f84b09ac9c3d35dfcf6a27fd0634d30d183205230a0ebe8373a0e8cfa0906"}, {file = "pydantic_core-2.23.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:f455ee30a9d61d3e1a15abd5068827773d6e4dc513e795f380cdd59932c782d5"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d02a72df14dfdbaf228424573a07af10637bd490f0901cee872c4f434a735b94"}, {file = "pydantic_core-2.23.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1e90d2e3bd2c3863d48525d297cd143fe541be8bbf6f579504b9712cb6b643ec"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d2b27e6af28f07e2f195552b37d7d66b150adbaa39a6d327766ffd695799780f"}, {file = "pydantic_core-2.23.4-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e203fdf807ac7e12ab59ca2bfcabb38c7cf0b33c41efeb00f8e5da1d86af480"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:084659fac3c83fd674596612aeff6041a18402f1e1bc19ca39e417d554468482"}, {file = "pydantic_core-2.23.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e08277a400de01bc72436a0ccd02bdf596631411f592ad985dcee21445bd0068"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:242b8feb3c493ab78be289c034a1f659e8826e2233786e36f2893a950a719bb6"}, {file = "pydantic_core-2.23.4-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f220b0eea5965dec25480b6333c788fb72ce5f9129e8759ef876a1d805d00801"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:38cf1c40a921d05c5edc61a785c0ddb4bed67827069f535d794ce6bcded919fc"}, {file = "pydantic_core-2.23.4-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:d06b0c8da4f16d1d1e352134427cb194a0a6e19ad5db9161bf32b2113409e728"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:e0bbdd76ce9aa5d4209d65f2b27fc6e5ef1312ae6c5333c26db3f5ade53a1e99"}, {file = "pydantic_core-2.23.4-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:ba1a0996f6c2773bd83e63f18914c1de3c9dd26d55f4ac302a7efe93fb8e7433"},
{file = "pydantic_core-2.20.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:254ec27fdb5b1ee60684f91683be95e5133c994cc54e86a0b0963afa25c8f8a6"}, {file = "pydantic_core-2.23.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:9a5bce9d23aac8f0cf0836ecfc033896aa8443b501c58d0602dbfd5bd5b37753"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:407653af5617f0757261ae249d3fba09504d7a71ab36ac057c938572d1bc9331"}, {file = "pydantic_core-2.23.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:78ddaaa81421a29574a682b3179d4cf9e6d405a09b99d93ddcf7e5239c742e21"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:c693e916709c2465b02ca0ad7b387c4f8423d1db7b4649c551f27a529181c5ad"}, {file = "pydantic_core-2.23.4-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:883a91b5dd7d26492ff2f04f40fbb652de40fcc0afe07e8129e8ae779c2110eb"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5b5ff4911aea936a47d9376fd3ab17e970cc543d1b68921886e7f64bd28308d1"}, {file = "pydantic_core-2.23.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:88ad334a15b32a791ea935af224b9de1bf99bcd62fabf745d5f3442199d86d59"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:177f55a886d74f1808763976ac4efd29b7ed15c69f4d838bbd74d9d09cf6fa86"}, {file = "pydantic_core-2.23.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:233710f069d251feb12a56da21e14cca67994eab08362207785cf8c598e74577"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:964faa8a861d2664f0c7ab0c181af0bea66098b1919439815ca8803ef136fc4e"}, {file = "pydantic_core-2.23.4-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:19442362866a753485ba5e4be408964644dd6a09123d9416c54cd49171f50744"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:4dd484681c15e6b9a977c785a345d3e378d72678fd5f1f3c0509608da24f2ac0"}, {file = "pydantic_core-2.23.4-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:624e278a7d29b6445e4e813af92af37820fafb6dcc55c012c834f9e26f9aaaef"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f6d6cff3538391e8486a431569b77921adfcdef14eb18fbf19b7c0a5294d4e6a"}, {file = "pydantic_core-2.23.4-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f5ef8f42bec47f21d07668a043f077d507e5bf4e668d5c6dfe6aaba89de1a5b8"},
{file = "pydantic_core-2.20.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:a6d511cc297ff0883bc3708b465ff82d7560193169a8b93260f74ecb0a5e08a7"}, {file = "pydantic_core-2.23.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:aea443fffa9fbe3af1a9ba721a87f926fe548d32cab71d188a6ede77d0ff244e"},
{file = "pydantic_core-2.20.1.tar.gz", hash = "sha256:26ca695eeee5f9f1aeeb211ffc12f10bcb6f71e2989988fda61dabd65db878d4"}, {file = "pydantic_core-2.23.4.tar.gz", hash = "sha256:2584f7cf844ac4d970fba483a717dbe10c1c1c96a969bf65d61ffe94df1b2863"},
] ]
[package.dependencies] [package.dependencies]
@@ -1962,18 +1974,15 @@ six = ">=1.5"
[[package]] [[package]]
name = "python-multipart" name = "python-multipart"
version = "0.0.9" version = "0.0.10"
description = "A streaming multipart parser for Python" description = "A streaming multipart parser for Python"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "python_multipart-0.0.9-py3-none-any.whl", hash = "sha256:97ca7b8ea7b05f977dc3849c3ba99d51689822fab725c3703af7c866a0c2b215"}, {file = "python_multipart-0.0.10-py3-none-any.whl", hash = "sha256:2b06ad9e8d50c7a8db80e3b56dab590137b323410605af2be20d62a5f1ba1dc8"},
{file = "python_multipart-0.0.9.tar.gz", hash = "sha256:03f54688c663f1b7977105f021043b0793151e4cb1c1a9d4a11fc13d622c4026"}, {file = "python_multipart-0.0.10.tar.gz", hash = "sha256:46eb3c6ce6fdda5fb1a03c7e11d490e407c6930a2703fe7aef4da71c374688fa"},
] ]
[package.extras]
dev = ["atomicwrites (==1.4.1)", "attrs (==23.2.0)", "coverage (==7.4.1)", "hatch", "invoke (==2.2.0)", "more-itertools (==10.2.0)", "pbr (==6.0.0)", "pluggy (==1.4.0)", "py (==1.11.0)", "pytest (==8.0.0)", "pytest-cov (==4.1.0)", "pytest-timeout (==2.2.0)", "pyyaml (==6.0.1)", "ruff (==0.2.1)"]
[[package]] [[package]]
name = "pytz" name = "pytz"
version = "2022.7.1" version = "2022.7.1"
@@ -2268,29 +2277,29 @@ files = [
[[package]] [[package]]
name = "ruff" name = "ruff"
version = "0.6.4" version = "0.6.7"
description = "An extremely fast Python linter and code formatter, written in Rust." description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false optional = false
python-versions = ">=3.7" python-versions = ">=3.7"
files = [ files = [
{file = "ruff-0.6.4-py3-none-linux_armv6l.whl", hash = "sha256:c4b153fc152af51855458e79e835fb6b933032921756cec9af7d0ba2aa01a258"}, {file = "ruff-0.6.7-py3-none-linux_armv6l.whl", hash = "sha256:08277b217534bfdcc2e1377f7f933e1c7957453e8a79764d004e44c40db923f2"},
{file = "ruff-0.6.4-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:bedff9e4f004dad5f7f76a9d39c4ca98af526c9b1695068198b3bda8c085ef60"}, {file = "ruff-0.6.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:c6707a32e03b791f4448dc0dce24b636cbcdee4dd5607adc24e5ee73fd86c00a"},
{file = "ruff-0.6.4-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d02a4127a86de23002e694d7ff19f905c51e338c72d8e09b56bfb60e1681724f"}, {file = "ruff-0.6.7-py3-none-macosx_11_0_arm64.whl", hash = "sha256:533d66b7774ef224e7cf91506a7dafcc9e8ec7c059263ec46629e54e7b1f90ab"},
{file = "ruff-0.6.4-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7862f42fc1a4aca1ea3ffe8a11f67819d183a5693b228f0bb3a531f5e40336fc"}, {file = "ruff-0.6.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17a86aac6f915932d259f7bec79173e356165518859f94649d8c50b81ff087e9"},
{file = "ruff-0.6.4-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eebe4ff1967c838a1a9618a5a59a3b0a00406f8d7eefee97c70411fefc353617"}, {file = "ruff-0.6.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b3f8822defd260ae2460ea3832b24d37d203c3577f48b055590a426a722d50ef"},
{file = "ruff-0.6.4-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:932063a03bac394866683e15710c25b8690ccdca1cf192b9a98260332ca93408"}, {file = "ruff-0.6.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9ba4efe5c6dbbb58be58dd83feedb83b5e95c00091bf09987b4baf510fee5c99"},
{file = "ruff-0.6.4-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:50e30b437cebef547bd5c3edf9ce81343e5dd7c737cb36ccb4fe83573f3d392e"}, {file = "ruff-0.6.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:525201b77f94d2b54868f0cbe5edc018e64c22563da6c5c2e5c107a4e85c1c0d"},
{file = "ruff-0.6.4-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c44536df7b93a587de690e124b89bd47306fddd59398a0fb12afd6133c7b3818"}, {file = "ruff-0.6.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8854450839f339e1049fdbe15d875384242b8e85d5c6947bb2faad33c651020b"},
{file = "ruff-0.6.4-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ea086601b22dc5e7693a78f3fcfc460cceabfdf3bdc36dc898792aba48fbad6"}, {file = "ruff-0.6.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2f0b62056246234d59cbf2ea66e84812dc9ec4540518e37553513392c171cb18"},
{file = "ruff-0.6.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0b52387d3289ccd227b62102c24714ed75fbba0b16ecc69a923a37e3b5e0aaaa"}, {file = "ruff-0.6.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b1462fa56c832dc0cea5b4041cfc9c97813505d11cce74ebc6d1aae068de36b"},
{file = "ruff-0.6.4-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:0308610470fcc82969082fc83c76c0d362f562e2f0cdab0586516f03a4e06ec6"}, {file = "ruff-0.6.7-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:02b083770e4cdb1495ed313f5694c62808e71764ec6ee5db84eedd82fd32d8f5"},
{file = "ruff-0.6.4-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:803b96dea21795a6c9d5bfa9e96127cc9c31a1987802ca68f35e5c95aed3fc0d"}, {file = "ruff-0.6.7-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:0c05fd37013de36dfa883a3854fae57b3113aaa8abf5dea79202675991d48624"},
{file = "ruff-0.6.4-py3-none-musllinux_1_2_i686.whl", hash = "sha256:66dbfea86b663baab8fcae56c59f190caba9398df1488164e2df53e216248baa"}, {file = "ruff-0.6.7-py3-none-musllinux_1_2_i686.whl", hash = "sha256:f49c9caa28d9bbfac4a637ae10327b3db00f47d038f3fbb2195c4d682e925b14"},
{file = "ruff-0.6.4-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:34d5efad480193c046c86608dbba2bccdc1c5fd11950fb271f8086e0c763a5d1"}, {file = "ruff-0.6.7-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a0e1655868164e114ba43a908fd2d64a271a23660195017c17691fb6355d59bb"},
{file = "ruff-0.6.4-py3-none-win32.whl", hash = "sha256:f0f8968feea5ce3777c0d8365653d5e91c40c31a81d95824ba61d871a11b8523"}, {file = "ruff-0.6.7-py3-none-win32.whl", hash = "sha256:a939ca435b49f6966a7dd64b765c9df16f1faed0ca3b6f16acdf7731969deb35"},
{file = "ruff-0.6.4-py3-none-win_amd64.whl", hash = "sha256:549daccee5227282289390b0222d0fbee0275d1db6d514550d65420053021a58"}, {file = "ruff-0.6.7-py3-none-win_amd64.whl", hash = "sha256:590445eec5653f36248584579c06252ad2e110a5d1f32db5420de35fb0e1c977"},
{file = "ruff-0.6.4-py3-none-win_arm64.whl", hash = "sha256:ac4b75e898ed189b3708c9ab3fc70b79a433219e1e87193b4f2b77251d058d14"}, {file = "ruff-0.6.7-py3-none-win_arm64.whl", hash = "sha256:b28f0d5e2f771c1fe3c7a45d3f53916fc74a480698c4b5731f0bea61e52137c8"},
{file = "ruff-0.6.4.tar.gz", hash = "sha256:ac3b5bfbee99973f80aa1b7cbd1c9cbce200883bdd067300c22a6cc1c7fba212"}, {file = "ruff-0.6.7.tar.gz", hash = "sha256:44e52129d82266fa59b587e2cd74def5637b730a69c4542525dfdecfaae38bd5"},
] ]
[[package]] [[package]]
@@ -2325,13 +2334,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
[[package]] [[package]]
name = "sentry-sdk" name = "sentry-sdk"
version = "2.13.0" version = "2.14.0"
description = "Python client for Sentry (https://sentry.io)" description = "Python client for Sentry (https://sentry.io)"
optional = true optional = true
python-versions = ">=3.6" python-versions = ">=3.6"
files = [ files = [
{file = "sentry_sdk-2.13.0-py2.py3-none-any.whl", hash = "sha256:6beede8fc2ab4043da7f69d95534e320944690680dd9a963178a49de71d726c6"}, {file = "sentry_sdk-2.14.0-py2.py3-none-any.whl", hash = "sha256:b8bc3dc51d06590df1291b7519b85c75e2ced4f28d9ea655b6d54033503b5bf4"},
{file = "sentry_sdk-2.13.0.tar.gz", hash = "sha256:8d4a576f7a98eb2fdb40e13106e41f330e5c79d72a68be1316e7852cf4995260"}, {file = "sentry_sdk-2.14.0.tar.gz", hash = "sha256:1e0e2eaf6dad918c7d1e0edac868a7bf20017b177f242cefe2a6bcd47955961d"},
] ]
[package.dependencies] [package.dependencies]
@@ -2578,13 +2587,13 @@ dev = ["furo (>=2024.05.06)", "nox", "packaging", "sphinx (>=5)", "twisted"]
[[package]] [[package]]
name = "treq" name = "treq"
version = "23.11.0" version = "24.9.1"
description = "High-level Twisted HTTP Client API" description = "High-level Twisted HTTP Client API"
optional = false optional = false
python-versions = ">=3.6" python-versions = ">=3.7"
files = [ files = [
{file = "treq-23.11.0-py3-none-any.whl", hash = "sha256:f494c2218d61cab2cabbee37cd6606d3eea9d16cf14190323095c95d22c467e9"}, {file = "treq-24.9.1-py3-none-any.whl", hash = "sha256:eee4756fd9a857c77f180fd5202b52c518f2d3e2826dce28b89066c03bfc45d0"},
{file = "treq-23.11.0.tar.gz", hash = "sha256:0914ff929fd1632ce16797235260f8bc19d20ff7c459c1deabd65b8c68cbeac5"}, {file = "treq-24.9.1.tar.gz", hash = "sha256:15da7fc404f3e4ed59d0abe5f8eef4966fabbe618039a2a23bc7c15305cefea8"},
] ]
[package.dependencies] [package.dependencies]
@@ -2593,6 +2602,7 @@ hyperlink = ">=21.0.0"
incremental = "*" incremental = "*"
requests = ">=2.1.0" requests = ">=2.1.0"
Twisted = {version = ">=22.10.0", extras = ["tls"]} Twisted = {version = ">=22.10.0", extras = ["tls"]}
typing-extensions = ">=3.10.0"
[package.extras] [package.extras]
dev = ["httpbin (==0.7.0)", "pep8", "pyflakes", "werkzeug (==2.0.3)"] dev = ["httpbin (==0.7.0)", "pep8", "pyflakes", "werkzeug (==2.0.3)"]
@@ -2798,24 +2808,24 @@ types-cffi = "*"
[[package]] [[package]]
name = "types-pyyaml" name = "types-pyyaml"
version = "6.0.12.20240808" version = "6.0.12.20240917"
description = "Typing stubs for PyYAML" description = "Typing stubs for PyYAML"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "types-PyYAML-6.0.12.20240808.tar.gz", hash = "sha256:b8f76ddbd7f65440a8bda5526a9607e4c7a322dc2f8e1a8c405644f9a6f4b9af"}, {file = "types-PyYAML-6.0.12.20240917.tar.gz", hash = "sha256:d1405a86f9576682234ef83bcb4e6fff7c9305c8b1fbad5e0bcd4f7dbdc9c587"},
{file = "types_PyYAML-6.0.12.20240808-py3-none-any.whl", hash = "sha256:deda34c5c655265fc517b546c902aa6eed2ef8d3e921e4765fe606fe2afe8d35"}, {file = "types_PyYAML-6.0.12.20240917-py3-none-any.whl", hash = "sha256:392b267f1c0fe6022952462bf5d6523f31e37f6cea49b14cee7ad634b6301570"},
] ]
[[package]] [[package]]
name = "types-requests" name = "types-requests"
version = "2.32.0.20240712" version = "2.32.0.20240914"
description = "Typing stubs for requests" description = "Typing stubs for requests"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "types-requests-2.32.0.20240712.tar.gz", hash = "sha256:90c079ff05e549f6bf50e02e910210b98b8ff1ebdd18e19c873cd237737c1358"}, {file = "types-requests-2.32.0.20240914.tar.gz", hash = "sha256:2850e178db3919d9bf809e434eef65ba49d0e7e33ac92d588f4a5e295fffd405"},
{file = "types_requests-2.32.0.20240712-py3-none-any.whl", hash = "sha256:f754283e152c752e46e70942fa2a146b5bc70393522257bb85bd1ef7e019dcc3"}, {file = "types_requests-2.32.0.20240914-py3-none-any.whl", hash = "sha256:59c2f673eb55f32a99b2894faf6020e1a9f4a402ad0f192bfee0b64469054310"},
] ]
[package.dependencies] [package.dependencies]
@@ -2823,13 +2833,13 @@ urllib3 = ">=2"
[[package]] [[package]]
name = "types-setuptools" name = "types-setuptools"
version = "74.1.0.20240907" version = "75.1.0.20240917"
description = "Typing stubs for setuptools" description = "Typing stubs for setuptools"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "types-setuptools-74.1.0.20240907.tar.gz", hash = "sha256:0abdb082552ca966c1e5fc244e4853adc62971f6cd724fb1d8a3713b580e5a65"}, {file = "types-setuptools-75.1.0.20240917.tar.gz", hash = "sha256:12f12a165e7ed383f31def705e5c0fa1c26215dd466b0af34bd042f7d5331f55"},
{file = "types_setuptools-74.1.0.20240907-py3-none-any.whl", hash = "sha256:15b38c8e63ca34f42f6063ff4b1dd662ea20086166d5ad6a102e670a52574120"}, {file = "types_setuptools-75.1.0.20240917-py3-none-any.whl", hash = "sha256:06f78307e68d1bbde6938072c57b81cf8a99bc84bd6dc7e4c5014730b097dc0c"},
] ]
[[package]] [[package]]
@@ -3104,4 +3114,4 @@ user-search = ["pyicu"]
[metadata] [metadata]
lock-version = "2.0" lock-version = "2.0"
python-versions = "^3.8.0" python-versions = "^3.8.0"
content-hash = "26ff23a6cafd8593141cb3d54d7b1e94328a02b863d347578d2b6e666ee2bc93" content-hash = "93c267fac3428b764f954e6faa17937b9c97b1ed2bdafc41dd8f6cb5d2ce085b"

@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry] [tool.poetry]
name = "matrix-synapse" name = "matrix-synapse"
version = "1.115.0" version = "1.116.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol" description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"] authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later" license = "AGPL-3.0-or-later"
@@ -320,7 +320,7 @@ all = [
# failing on new releases. Keeping lower bounds loose here means that dependabot # failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile. # can bump versions without having to update the content-hash in the lockfile.
# This helps prevent merge conflicts when running a batch of dependabot updates. # This helps prevent merge conflicts when running a batch of dependabot updates.
ruff = "0.6.4" ruff = "0.6.7"
# Type checking only works with the pydantic.v1 compat module from pydantic v2 # Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2" pydantic = "^2"

requirements.txt (new file, 1172 lines): diff suppressed because it is too large.

@@ -45,7 +45,6 @@ import traceback
import unittest.mock import unittest.mock
from contextlib import contextmanager from contextlib import contextmanager
from typing import ( from typing import (
TYPE_CHECKING,
Any, Any,
Callable, Callable,
Dict, Dict,
@@ -57,30 +56,17 @@ from typing import (
) )
from parameterized import parameterized from parameterized import parameterized
from synapse._pydantic_compat import HAS_PYDANTIC_V2
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import (
BaseModel as PydanticBaseModel,
conbytes,
confloat,
conint,
constr,
)
from pydantic.v1.typing import get_args
else:
from pydantic import (
BaseModel as PydanticBaseModel,
conbytes,
confloat,
conint,
constr,
)
from pydantic.typing import get_args
from typing_extensions import ParamSpec from typing_extensions import ParamSpec
from synapse._pydantic_compat import (
BaseModel as PydanticBaseModel,
conbytes,
confloat,
conint,
constr,
get_args,
)
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG: List[Callable] = [ CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG: List[Callable] = [
@@ -183,22 +169,16 @@ def monkeypatch_pydantic() -> Generator[None, None, None]:
# Most Synapse code ought to import the patched objects directly from # Most Synapse code ought to import the patched objects directly from
# `pydantic`. But we also patch their containing modules `pydantic.main` and # `pydantic`. But we also patch their containing modules `pydantic.main` and
# `pydantic.types` for completeness. # `pydantic.types` for completeness.
patch_basemodel1 = unittest.mock.patch( patch_basemodel = unittest.mock.patch(
"pydantic.BaseModel", new=PatchedBaseModel "synapse._pydantic_compat.BaseModel", new=PatchedBaseModel
) )
patch_basemodel2 = unittest.mock.patch( patches.enter_context(patch_basemodel)
"pydantic.main.BaseModel", new=PatchedBaseModel
)
patches.enter_context(patch_basemodel1)
patches.enter_context(patch_basemodel2)
for factory in CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG: for factory in CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG:
wrapper: Callable = make_wrapper(factory) wrapper: Callable = make_wrapper(factory)
patch1 = unittest.mock.patch(f"pydantic.{factory.__name__}", new=wrapper) patch = unittest.mock.patch(
patch2 = unittest.mock.patch( f"synapse._pydantic_compat.{factory.__name__}", new=wrapper
f"pydantic.types.{factory.__name__}", new=wrapper
) )
patches.enter_context(patch1) patches.enter_context(patch)
patches.enter_context(patch2)
yield yield
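The consolidation above works because, after this patchset, Synapse code imports BaseModel and the constrained-type factories from synapse._pydantic_compat instead of from pydantic directly, so the test harness needs only one patch target per symbol. A minimal sketch of the single-target pattern, assuming the synapse package is importable (PatchedBaseModel is a bare stand-in here, not the real patched class):

    import unittest.mock
    from contextlib import ExitStack

    class PatchedBaseModel:
        """Stand-in replacement class, for this sketch only."""

    with ExitStack() as patches:
        # One canonical target replaces the former pair of patches against
        # pydantic.BaseModel and pydantic.main.BaseModel.
        patches.enter_context(
            unittest.mock.patch(
                "synapse._pydantic_compat.BaseModel", new=PatchedBaseModel
            )
        )
        # ... exercise the code under test while the patch is active ...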

@@ -223,6 +223,7 @@ test_packages=(
./tests/msc3930 ./tests/msc3930
./tests/msc3902 ./tests/msc3902
./tests/msc3967 ./tests/msc3967
./tests/msc4140
) )
# Enable dirty runs, so tests will reuse the same container where possible. # Enable dirty runs, so tests will reuse the same container where possible.

@@ -19,6 +19,8 @@
# #
# #
from typing import TYPE_CHECKING
from packaging.version import Version from packaging.version import Version
try: try:
@@ -30,4 +32,64 @@ except ImportError:
HAS_PYDANTIC_V2: bool = Version(pydantic_version).major == 2 HAS_PYDANTIC_V2: bool = Version(pydantic_version).major == 2
__all__ = ("HAS_PYDANTIC_V2",) if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import (
BaseModel,
Extra,
Field,
MissingError,
PydanticValueError,
StrictBool,
StrictInt,
StrictStr,
ValidationError,
conbytes,
confloat,
conint,
constr,
parse_obj_as,
validator,
)
from pydantic.v1.error_wrappers import ErrorWrapper
from pydantic.v1.typing import get_args
else:
from pydantic import (
BaseModel,
Extra,
Field,
MissingError,
PydanticValueError,
StrictBool,
StrictInt,
StrictStr,
ValidationError,
conbytes,
confloat,
conint,
constr,
parse_obj_as,
validator,
)
from pydantic.error_wrappers import ErrorWrapper
from pydantic.typing import get_args
__all__ = (
"HAS_PYDANTIC_V2",
"BaseModel",
"constr",
"conbytes",
"conint",
"confloat",
"ErrorWrapper",
"Extra",
"Field",
"get_args",
"MissingError",
"parse_obj_as",
"PydanticValueError",
"StrictBool",
"StrictInt",
"StrictStr",
"ValidationError",
"validator",
)
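
Since every name above is re-exported through `__all__`, call sites can drop their own version checks. A minimal sketch of the new import pattern (the model and field names here are illustrative, not taken from the source):

    from synapse._pydantic_compat import BaseModel, StrictStr, parse_obj_as

    class EchoBody(BaseModel):
        message: StrictStr

    body = parse_obj_as(EchoBody, {"message": "hi"})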


@@ -65,6 +65,7 @@ from synapse.storage.databases.main.appservice import (
 )
 from synapse.storage.databases.main.censor_events import CensorEventsStore
 from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
+from synapse.storage.databases.main.delayed_events import DelayedEventsStore
 from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
 from synapse.storage.databases.main.devices import DeviceWorkerStore
 from synapse.storage.databases.main.directory import DirectoryWorkerStore
@@ -161,6 +162,7 @@ class GenericWorkerStore(
     TaskSchedulerWorkerStore,
     ExperimentalFeaturesStore,
     SlidingSyncStore,
+    DelayedEventsStore,
 ):
     # Properties that multiple storage classes define. Tell mypy what the
     # expected type is.


@@ -36,6 +36,7 @@ from synapse.config import (  # noqa: F401
     jwt,
     key,
     logger,
+    meow,
     metrics,
     modules,
     oembed,
@@ -92,6 +93,7 @@ class RootConfig:
     voip: voip.VoipConfig
     registration: registration.RegistrationConfig
     account_validity: account_validity.AccountValidityConfig
+    meow: meow.MeowConfig
     metrics: metrics.MetricsConfig
     api: api.ApiConfig
     appservice: appservice.AppServiceConfig


@@ -18,17 +18,11 @@
 # [This file includes modifications made by New Vector Limited]
 #
 #
-from typing import TYPE_CHECKING, Any, Dict, Type, TypeVar
+from typing import Any, Dict, Type, TypeVar

 import jsonschema

-from synapse._pydantic_compat import HAS_PYDANTIC_V2
-
-if TYPE_CHECKING or HAS_PYDANTIC_V2:
-    from pydantic.v1 import BaseModel, ValidationError, parse_obj_as
-else:
-    from pydantic import BaseModel, ValidationError, parse_obj_as
+from synapse._pydantic_compat import BaseModel, ValidationError, parse_obj_as

 from synapse.config._base import ConfigError
 from synapse.types import JsonDict, StrSequence


@@ -19,6 +19,7 @@
 #
 #
 from ._base import RootConfig
+from .meow import MeowConfig
 from .account_validity import AccountValidityConfig
 from .api import ApiConfig
 from .appservice import AppServiceConfig
@@ -65,6 +66,7 @@ from .workers import WorkerConfig
 class HomeServerConfig(RootConfig):
     config_classes = [
+        MeowConfig,
         ModulesConfig,
         ServerConfig,
         RetentionConfig,

synapse/config/meow.py (new file, 33 lines)

@@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
# Copyright 2020 Maunium
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config


class MeowConfig(Config):
    """Meow Configuration

    Configuration for disabling dumb limits in Synapse
    """

    section = "meow"

    def read_config(self, config, **kwargs):
        meow_config = config.get("meow", {})
        self.validation_override = set(meow_config.get("validation_override", []))
        self.filter_override = set(meow_config.get("filter_override", []))
        self.timestamp_override = set(meow_config.get("timestamp_override", []))
        self.admin_api_register_invalid = meow_config.get(
            "admin_api_register_invalid", True
        )
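
For reference, a hypothetical `meow` section of the parsed homeserver config, as `read_config` consumes it (all user IDs are placeholders):

    config = {
        "meow": {
            "validation_override": ["@bot:example.com"],
            "timestamp_override": ["@bridge:example.com"],
            "admin_api_register_invalid": True,
        }
    }
    # read_config would then set:
    #   validation_override == {"@bot:example.com"}
    #   filter_override == set()   (absent keys default to an empty set)
    #   timestamp_override == {"@bridge:example.com"}
    #   admin_api_register_invalid == True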


@@ -54,10 +54,8 @@ THUMBNAIL_SIZE_YAML = """\
 THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP = {
     "image/jpeg": "jpeg",
     "image/jpg": "jpeg",
-    "image/webp": "jpeg",
-    # Thumbnails can only be jpeg or png. We choose png thumbnails for gif
-    # because it can have transparency.
-    "image/gif": "png",
+    "image/webp": "webp",
+    "image/gif": "webp",
     "image/png": "png",
 }
@@ -109,6 +107,10 @@ def parse_thumbnail_requirements(
             requirement.append(
                 ThumbnailRequirement(width, height, method, "image/png")
             )
+        elif thumbnail_format == "webp":
+            requirement.append(
+                ThumbnailRequirement(width, height, method, "image/webp")
+            )
         else:
             raise Exception(
                 "Unknown thumbnail mapping from %s to %s. This is a Synapse problem, please report!"


@@ -2,7 +2,7 @@
 # This file is licensed under the Affero General Public License (AGPL) version 3.
 #
 # Copyright 2014-2021 The Matrix.org Foundation C.I.C.
-# Copyright (C) 2023 New Vector, Ltd
+# Copyright (C) 2023-2024 New Vector, Ltd
 #
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as
@@ -780,6 +780,17 @@ class ServerConfig(Config):
         else:
             self.delete_stale_devices_after = None

+        # The maximum allowed delay duration for delayed events (MSC4140).
+        max_event_delay_duration = config.get("max_event_delay_duration")
+        if max_event_delay_duration is not None:
+            self.max_event_delay_ms: Optional[int] = self.parse_duration(
+                max_event_delay_duration
+            )
+            if self.max_event_delay_ms <= 0:
+                raise ConfigError("max_event_delay_duration must be a positive value")
+        else:
+            self.max_event_delay_ms = None
+
     def has_tls_listener(self) -> bool:
         return any(listener.is_tls() for listener in self.listeners)

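In homeserver.yaml this is a single option, e.g. `max_event_delay_duration: 24h` (the value here is hypothetical; `parse_duration` accepts suffixed strings such as `1d` or `24h` as well as raw milliseconds). Leaving the option unset leaves `max_event_delay_ms` as None.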

@@ -23,7 +23,12 @@ from typing import Any

 from synapse.types import JsonDict

-from ._base import Config
+from ._base import Config, ConfigError, read_file
+
+CONFLICTING_SHARED_SECRET_OPTS_ERROR = """\
+You have configured both `turn_shared_secret` and `turn_shared_secret_path`.
+These are mutually incompatible.
+"""

 class VoipConfig(Config):
@@ -32,6 +37,13 @@ class VoipConfig(Config):
     def read_config(self, config: JsonDict, **kwargs: Any) -> None:
         self.turn_uris = config.get("turn_uris", [])
         self.turn_shared_secret = config.get("turn_shared_secret")
+        turn_shared_secret_path = config.get("turn_shared_secret_path")
+        if turn_shared_secret_path:
+            if self.turn_shared_secret:
+                raise ConfigError(CONFLICTING_SHARED_SECRET_OPTS_ERROR)
+            self.turn_shared_secret = read_file(
+                turn_shared_secret_path, ("turn_shared_secret_path",)
+            ).strip()
         self.turn_username = config.get("turn_username")
         self.turn_password = config.get("turn_password")
         self.turn_user_lifetime = self.parse_duration(

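A deployment would then point the option at a secrets file, e.g. `turn_shared_secret_path: /run/secrets/turn_shared_secret` (path hypothetical); the `.strip()` above ensures a trailing newline in that file does not end up in the secret.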

@@ -22,17 +22,17 @@
 import argparse
 import logging
-from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
+from typing import Any, Dict, List, Optional, Union

 import attr

-from synapse._pydantic_compat import HAS_PYDANTIC_V2
-
-if TYPE_CHECKING or HAS_PYDANTIC_V2:
-    from pydantic.v1 import BaseModel, Extra, StrictBool, StrictInt, StrictStr
-else:
-    from pydantic import BaseModel, Extra, StrictBool, StrictInt, StrictStr
+from synapse._pydantic_compat import (
+    BaseModel,
+    Extra,
+    StrictBool,
+    StrictInt,
+    StrictStr,
+)

 from synapse.config._base import (
     Config,
     ConfigError,


@@ -19,17 +19,11 @@
 #
 #
 import collections.abc
-from typing import TYPE_CHECKING, List, Type, Union, cast
+from typing import List, Type, Union, cast

 import jsonschema

-from synapse._pydantic_compat import HAS_PYDANTIC_V2
-
-if TYPE_CHECKING or HAS_PYDANTIC_V2:
-    from pydantic.v1 import Field, StrictBool, StrictStr
-else:
-    from pydantic import Field, StrictBool, StrictStr
+from synapse._pydantic_compat import Field, StrictBool, StrictStr

 from synapse.api.constants import (
     MAX_ALIAS_LENGTH,
     EventContentFields,
@@ -64,7 +58,7 @@ class EventValidator:
             event: The event to validate.
             config: The homeserver's configuration.
         """
-        self.validate_builder(event)
+        self.validate_builder(event, config)

         if event.format_version == EventFormatVersions.ROOM_V1_V2:
             EventID.from_string(event.event_id)
@@ -95,6 +89,12 @@ class EventValidator:
         # Note that only the client controlled portion of the event is
         # checked, since we trust the portions of the event we created.
         validate_canonicaljson(event.content)

+        if not 0 < event.origin_server_ts < 2**53:
+            raise SynapseError(400, "Event timestamp is out of range")
+
+        # meow: allow specific users to send potentially dangerous events.
+        if event.sender in config.meow.validation_override:
+            return
+
         if event.type == EventTypes.Aliases:
             if "aliases" in event.content:
@@ -193,7 +193,9 @@ class EventValidator:
                     errcode=Codes.BAD_JSON,
                 )

-    def validate_builder(self, event: Union[EventBase, EventBuilder]) -> None:
+    def validate_builder(
+        self, event: Union[EventBase, EventBuilder], config: HomeServerConfig
+    ) -> None:
         """Validates that the builder/event has roughly the right format. Only
         checks values that we expect a proto event to have, rather than all the
         fields an event would have
@@ -211,6 +213,10 @@ class EventValidator:
         RoomID.from_string(event.room_id)
         UserID.from_string(event.sender)

+        # meow: allow specific users to send so-called invalid events
+        if event.sender in config.meow.validation_override:
+            return
+
         if event.type == EventTypes.Message:
             strings = ["body", "msgtype"]


@@ -33,7 +33,7 @@ from synapse.replication.http.account_data import (
     ReplicationRemoveUserAccountDataRestServlet,
 )
 from synapse.streams import EventSource
-from synapse.types import JsonDict, StrCollection, StreamKeyType, UserID
+from synapse.types import JsonDict, JsonMapping, StrCollection, StreamKeyType, UserID

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -253,7 +253,7 @@ class AccountDataHandler:
         return response["max_stream_id"]

     async def add_tag_to_room(
-        self, user_id: str, room_id: str, tag: str, content: JsonDict
+        self, user_id: str, room_id: str, tag: str, content: JsonMapping
     ) -> int:
         """Add a tag to a room for a user.
View File

@ -21,13 +21,34 @@
@@ -21,13 +21,34 @@
 import abc
 import logging
-from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Sequence, Set
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Dict,
+    List,
+    Mapping,
+    Optional,
+    Sequence,
+    Set,
+    Tuple,
+)

 import attr

-from synapse.api.constants import Direction, Membership
+from synapse.api.constants import Direction, EventTypes, Membership
+from synapse.api.errors import SynapseError
 from synapse.events import EventBase
-from synapse.types import JsonMapping, RoomStreamToken, StateMap, UserID, UserInfo
+from synapse.types import (
+    JsonMapping,
+    Requester,
+    RoomStreamToken,
+    ScheduledTask,
+    StateMap,
+    TaskStatus,
+    UserID,
+    UserInfo,
+    create_requester,
+)
 from synapse.visibility import filter_events_for_client

 if TYPE_CHECKING:
@@ -35,6 +56,8 @@ if TYPE_CHECKING:

 logger = logging.getLogger(__name__)

+REDACT_ALL_EVENTS_ACTION_NAME = "redact_all_events"
+
 class AdminHandler:
     def __init__(self, hs: "HomeServer"):
@@ -43,6 +66,20 @@ class AdminHandler:
         self._storage_controllers = hs.get_storage_controllers()
         self._state_storage_controller = self._storage_controllers.state
         self._msc3866_enabled = hs.config.experimental.msc3866.enabled
+        self.event_creation_handler = hs.get_event_creation_handler()
+        self._task_scheduler = hs.get_task_scheduler()
+
+        self._task_scheduler.register_action(
+            self._redact_all_events, REDACT_ALL_EVENTS_ACTION_NAME
+        )
+
+    async def get_redact_task(self, redact_id: str) -> Optional[ScheduledTask]:
+        """Get the current status of an active redaction process
+
+        Args:
+            redact_id: redact_id returned by start_redact_events.
+        """
+        return await self._task_scheduler.get_task(redact_id)

     async def get_whois(self, user: UserID) -> JsonMapping:
         connections = []
@@ -313,6 +350,153 @@ class AdminHandler:

         return writer.finished()

+    async def start_redact_events(
+        self,
+        user_id: str,
+        rooms: list,
+        requester: JsonMapping,
+        reason: Optional[str],
+        limit: Optional[int],
+    ) -> str:
+        """
+        Start a task redacting the events of the given user in the given rooms
+
+        Args:
+            user_id: the user ID of the user whose events should be redacted
+            rooms: the rooms in which to redact the user's events
+            requester: the user requesting the events
+            reason: reason for requesting the redaction, ie spam, etc
+            limit: limit on the number of events in each room to redact
+
+        Returns:
+            a unique ID which can be used to query the status of the task
+        """
+        active_tasks = await self._task_scheduler.get_tasks(
+            actions=[REDACT_ALL_EVENTS_ACTION_NAME],
+            resource_id=user_id,
+            statuses=[TaskStatus.ACTIVE],
+        )
+
+        if len(active_tasks) > 0:
+            raise SynapseError(
+                400, "Redact already in progress for user %s" % (user_id,)
+            )
+
+        if not limit:
+            limit = 1000
+
+        redact_id = await self._task_scheduler.schedule_task(
+            REDACT_ALL_EVENTS_ACTION_NAME,
+            resource_id=user_id,
+            params={
+                "rooms": rooms,
+                "requester": requester,
+                "user_id": user_id,
+                "reason": reason,
+                "limit": limit,
+            },
+        )
+
+        logger.info(
+            "starting redact events with redact_id %s",
+            redact_id,
+        )
+
+        return redact_id
+
+    async def _redact_all_events(
+        self, task: ScheduledTask
+    ) -> Tuple[TaskStatus, Optional[Mapping[str, Any]], Optional[str]]:
+        """
+        Task to redact all of a user's events in the given rooms, tracking which,
+        if any, events failed to be redacted
+        """
+        assert task.params is not None
+        rooms = task.params.get("rooms")
+        assert rooms is not None
+
+        r = task.params.get("requester")
+        assert r is not None
+        admin = Requester.deserialize(self._store, r)
+
+        user_id = task.params.get("user_id")
+        assert user_id is not None
+
+        requester = create_requester(
+            user_id, authenticated_entity=admin.user.to_string()
+        )
+
+        reason = task.params.get("reason")
+        limit = task.params.get("limit")
+        assert limit is not None
+
+        result: Mapping[str, Any] = (
+            task.result if task.result else {"failed_redactions": {}}
+        )
+
+        for room in rooms:
+            room_version = await self._store.get_room_version(room)
+
+            event_ids = await self._store.get_events_sent_by_user_in_room(
+                user_id,
+                room,
+                limit,
+                ["m.room.member", "m.room.message"],
+            )
+            if not event_ids:
+                # there's nothing to redact
+                return TaskStatus.COMPLETE, result, None
+
+            events = await self._store.get_events_as_list(event_ids)
+            for event in events:
+                # we care about join events but not other membership events
+                if event.type == "m.room.member":
+                    content = event.content
+                    if content:
+                        if content.get("membership") == Membership.JOIN:
+                            pass
+                        else:
+                            continue
+                relations = await self._store.get_relations_for_event(
+                    room, event.event_id, event, event_type=EventTypes.Redaction
+                )
+
+                # if we've already successfully redacted this event then skip processing it
+                if relations[0]:
+                    continue
+
+                event_dict = {
+                    "type": EventTypes.Redaction,
+                    "content": {"reason": reason} if reason else {},
+                    "room_id": room,
+                    "sender": user_id,
+                }
+                if room_version.updated_redaction_rules:
+                    event_dict["content"]["redacts"] = event.event_id
+                else:
+                    event_dict["redacts"] = event.event_id
+
+                try:
+                    # set the prev event to the offending message to allow for redactions
+                    # to be processed in the case where the user has been kicked/banned before
+                    # redactions are requested
+                    (
+                        redaction,
+                        _,
+                    ) = await self.event_creation_handler.create_and_send_nonmember_event(
+                        requester,
+                        event_dict,
+                        prev_event_ids=[event.event_id],
+                        ratelimit=False,
+                    )
+                except Exception as ex:
+                    logger.info(
+                        f"Redaction of event {event.event_id} failed due to: {ex}"
+                    )
+                    result["failed_redactions"][event.event_id] = str(ex)
+                    await self._task_scheduler.update_task(task.id, result=result)
+
+        return TaskStatus.COMPLETE, result, None

 class ExfiltrationWriter(metaclass=abc.ABCMeta):
     """Interface used to specify how to write exported data."""

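A sketch of how an admin API servlet might drive this task pair (the handler instance, requester, and IDs are placeholders; `requester.serialize()` is assumed to produce the mapping that `Requester.deserialize` reads back above):

    redact_id = await admin_handler.start_redact_events(
        user_id="@spammer:example.com",
        rooms=["!room:example.com"],
        requester=requester.serialize(),
        reason="spam",
        limit=None,  # falls back to 1000 events per room
    )
    task = await admin_handler.get_redact_task(redact_id)
    if task is not None:
        print(task.status, task.result)  # result carries any failed_redactions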

@@ -0,0 +1,484 @@
import logging
from typing import TYPE_CHECKING, List, Optional, Set, Tuple

from twisted.internet.interfaces import IDelayedCall

from synapse.api.constants import EventTypes
from synapse.api.errors import ShadowBanError
from synapse.config.workers import MAIN_PROCESS_INSTANCE_NAME
from synapse.logging.opentracing import set_tag
from synapse.metrics import event_processing_positions
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.delayed_events import (
    ReplicationAddedDelayedEventRestServlet,
)
from synapse.storage.databases.main.delayed_events import (
    DelayedEventDetails,
    DelayID,
    EventType,
    StateKey,
    Timestamp,
    UserLocalpart,
)
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import (
    JsonDict,
    Requester,
    RoomID,
    UserID,
    create_requester,
)
from synapse.util.events import generate_fake_event_id
from synapse.util.metrics import Measure

if TYPE_CHECKING:
    from synapse.server import HomeServer

logger = logging.getLogger(__name__)


class DelayedEventsHandler:
    def __init__(self, hs: "HomeServer"):
        self._store = hs.get_datastores().main
        self._storage_controllers = hs.get_storage_controllers()
        self._config = hs.config
        self._clock = hs.get_clock()
        self._request_ratelimiter = hs.get_request_ratelimiter()
        self._event_creation_handler = hs.get_event_creation_handler()
        self._room_member_handler = hs.get_room_member_handler()

        self._next_delayed_event_call: Optional[IDelayedCall] = None

        # The current position in the current_state_delta stream
        self._event_pos: Optional[int] = None

        # Guard to ensure we only process event deltas one at a time
        self._event_processing = False

        if hs.config.worker.worker_app is None:
            self._repl_client = None

            async def _schedule_db_events() -> None:
                # We kick this off to pick up outstanding work from before the last restart.
                # Block until we're up to date.
                await self._unsafe_process_new_event()
                hs.get_notifier().add_replication_callback(self.notify_new_event)
                # Kick off again (without blocking) to catch any missed notifications
                # that may have fired before the callback was added.
                self._clock.call_later(0, self.notify_new_event)

                # Delayed events that are already marked as processed on startup might not have been
                # sent properly on the last run of the server, so unmark them to send them again.
                # Caveat: this will double-send delayed events that successfully persisted, but failed
                # to be removed from the DB table of delayed events.
                # TODO: To avoid double-sending, scan the timeline to find which of these events were
                # already sent. To do so, must store delay_ids in sent events to retrieve them later.
                await self._store.unprocess_delayed_events()

                events, next_send_ts = await self._store.process_timeout_delayed_events(
                    self._get_current_ts()
                )

                if next_send_ts:
                    self._schedule_next_at(next_send_ts)

                # Can send the events in background after having awaited on marking them as processed
                run_as_background_process(
                    "_send_events",
                    self._send_events,
                    events,
                )

            self._initialized_from_db = run_as_background_process(
                "_schedule_db_events", _schedule_db_events
            )
        else:
            self._repl_client = ReplicationAddedDelayedEventRestServlet.make_client(hs)

    @property
    def _is_master(self) -> bool:
        return self._repl_client is None

    def notify_new_event(self) -> None:
        """
        Called when there may be more state event deltas to process,
        which should cancel pending delayed events for the same state.
        """
        if self._event_processing:
            return

        self._event_processing = True

        async def process() -> None:
            try:
                await self._unsafe_process_new_event()
            finally:
                self._event_processing = False

        run_as_background_process("delayed_events.notify_new_event", process)

    async def _unsafe_process_new_event(self) -> None:
        # If self._event_pos is None, it means we haven't fetched it from the DB yet
        if self._event_pos is None:
            self._event_pos = await self._store.get_delayed_events_stream_pos()
            room_max_stream_ordering = self._store.get_room_max_stream_ordering()
            if self._event_pos > room_max_stream_ordering:
                # apparently, we've processed more events than exist in the database!
                # this can happen if events are removed with history purge or similar.
                logger.warning(
                    "Event stream ordering appears to have gone backwards (%i -> %i): "
                    "rewinding delayed events processor",
                    self._event_pos,
                    room_max_stream_ordering,
                )
                self._event_pos = room_max_stream_ordering

        # Loop round handling deltas until we're up to date
        while True:
            with Measure(self._clock, "delayed_events_delta"):
                room_max_stream_ordering = self._store.get_room_max_stream_ordering()
                if self._event_pos == room_max_stream_ordering:
                    return

                logger.debug(
                    "Processing delayed events %s->%s",
                    self._event_pos,
                    room_max_stream_ordering,
                )
                (
                    max_pos,
                    deltas,
                ) = await self._storage_controllers.state.get_current_state_deltas(
                    self._event_pos, room_max_stream_ordering
                )

                logger.debug(
                    "Handling %d state deltas for delayed events processing",
                    len(deltas),
                )
                await self._handle_state_deltas(deltas)

                self._event_pos = max_pos

                # Expose current event processing position to prometheus
                event_processing_positions.labels("delayed_events").set(max_pos)

                await self._store.update_delayed_events_stream_pos(max_pos)

    async def _handle_state_deltas(self, deltas: List[StateDelta]) -> None:
        """
        Process current state deltas to cancel pending delayed events
        that target the same state.
        """
        for delta in deltas:
            logger.debug(
                "Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
            )

            next_send_ts = await self._store.cancel_delayed_state_events(
                room_id=delta.room_id,
                event_type=delta.event_type,
                state_key=delta.state_key,
            )

            if self._next_send_ts_changed(next_send_ts):
                self._schedule_next_at_or_none(next_send_ts)

    async def add(
        self,
        requester: Requester,
        *,
        room_id: str,
        event_type: str,
        state_key: Optional[str],
        origin_server_ts: Optional[int],
        content: JsonDict,
        delay: int,
    ) -> str:
        """
        Creates a new delayed event and schedules its delivery.

        Args:
            requester: The requester of the delayed event, who will be its owner.
            room_id: The ID of the room where the event should be sent to.
            event_type: The type of event to be sent.
            state_key: The state key of the event to be sent, or None if it is not a state event.
            origin_server_ts: The custom timestamp to send the event with.
                If None, the timestamp will be the actual time when the event is sent.
            content: The content of the event to be sent.
            delay: How long (in milliseconds) to wait before automatically sending the event.

        Returns: The ID of the added delayed event.

        Raises:
            SynapseError: if the delayed event fails validation checks.
        """
        await self._request_ratelimiter.ratelimit(requester)

        self._event_creation_handler.validator.validate_builder(
            self._event_creation_handler.event_builder_factory.for_room_version(
                await self._store.get_room_version(room_id),
                {
                    "type": event_type,
                    "content": content,
                    "room_id": room_id,
                    "sender": str(requester.user),
                    **({"state_key": state_key} if state_key is not None else {}),
                },
            )
        )

        creation_ts = self._get_current_ts()

        delay_id, next_send_ts = await self._store.add_delayed_event(
            user_localpart=requester.user.localpart,
            device_id=requester.device_id,
            creation_ts=creation_ts,
            room_id=room_id,
            event_type=event_type,
            state_key=state_key,
            origin_server_ts=origin_server_ts,
            content=content,
            delay=delay,
        )

        if self._repl_client is not None:
            # NOTE: If this throws, the delayed event will remain in the DB and
            # will be picked up once the main worker gets another delayed event.
            await self._repl_client(
                instance_name=MAIN_PROCESS_INSTANCE_NAME,
                next_send_ts=next_send_ts,
            )
        elif self._next_send_ts_changed(next_send_ts):
            self._schedule_next_at(next_send_ts)

        return delay_id

    def on_added(self, next_send_ts: int) -> None:
        next_send_ts = Timestamp(next_send_ts)
        if self._next_send_ts_changed(next_send_ts):
            self._schedule_next_at(next_send_ts)

    async def cancel(self, requester: Requester, delay_id: str) -> None:
        """
        Cancels the scheduled delivery of the matching delayed event.

        Args:
            requester: The owner of the delayed event to act on.
            delay_id: The ID of the delayed event to act on.

        Raises:
            NotFoundError: if no matching delayed event could be found.
        """
        assert self._is_master
        await self._request_ratelimiter.ratelimit(requester)
        await self._initialized_from_db

        next_send_ts = await self._store.cancel_delayed_event(
            delay_id=delay_id,
            user_localpart=requester.user.localpart,
        )

        if self._next_send_ts_changed(next_send_ts):
            self._schedule_next_at_or_none(next_send_ts)

    async def restart(self, requester: Requester, delay_id: str) -> None:
        """
        Restarts the scheduled delivery of the matching delayed event.

        Args:
            requester: The owner of the delayed event to act on.
            delay_id: The ID of the delayed event to act on.

        Raises:
            NotFoundError: if no matching delayed event could be found.
        """
        assert self._is_master
        await self._request_ratelimiter.ratelimit(requester)
        await self._initialized_from_db

        next_send_ts = await self._store.restart_delayed_event(
            delay_id=delay_id,
            user_localpart=requester.user.localpart,
            current_ts=self._get_current_ts(),
        )

        if self._next_send_ts_changed(next_send_ts):
            self._schedule_next_at(next_send_ts)

    async def send(self, requester: Requester, delay_id: str) -> None:
        """
        Immediately sends the matching delayed event, instead of waiting for its scheduled delivery.

        Args:
            requester: The owner of the delayed event to act on.
            delay_id: The ID of the delayed event to act on.

        Raises:
            NotFoundError: if no matching delayed event could be found.
        """
        assert self._is_master
        await self._request_ratelimiter.ratelimit(requester)
        await self._initialized_from_db

        event, next_send_ts = await self._store.process_target_delayed_event(
            delay_id=delay_id,
            user_localpart=requester.user.localpart,
        )

        if self._next_send_ts_changed(next_send_ts):
            self._schedule_next_at_or_none(next_send_ts)

        await self._send_event(
            DelayedEventDetails(
                delay_id=DelayID(delay_id),
                user_localpart=UserLocalpart(requester.user.localpart),
                room_id=event.room_id,
                type=event.type,
                state_key=event.state_key,
                origin_server_ts=event.origin_server_ts,
                content=event.content,
                device_id=event.device_id,
            )
        )

    async def _send_on_timeout(self) -> None:
        self._next_delayed_event_call = None

        events, next_send_ts = await self._store.process_timeout_delayed_events(
            self._get_current_ts()
        )

        if next_send_ts:
            self._schedule_next_at(next_send_ts)

        await self._send_events(events)

    async def _send_events(self, events: List[DelayedEventDetails]) -> None:
        sent_state: Set[Tuple[RoomID, EventType, StateKey]] = set()
        for event in events:
            if event.state_key is not None:
                state_info = (event.room_id, event.type, event.state_key)
                if state_info in sent_state:
                    continue
            else:
                state_info = None
            try:
                # TODO: send in background if message event or non-conflicting state event
                await self._send_event(event)
                if state_info is not None:
                    sent_state.add(state_info)
            except Exception:
                logger.exception("Failed to send delayed event")

        for room_id, event_type, state_key in sent_state:
            await self._store.delete_processed_delayed_state_events(
                room_id=str(room_id),
                event_type=event_type,
                state_key=state_key,
            )

    def _schedule_next_at_or_none(self, next_send_ts: Optional[Timestamp]) -> None:
        if next_send_ts is not None:
            self._schedule_next_at(next_send_ts)
        elif self._next_delayed_event_call is not None:
            self._next_delayed_event_call.cancel()
            self._next_delayed_event_call = None

    def _schedule_next_at(self, next_send_ts: Timestamp) -> None:
        delay = next_send_ts - self._get_current_ts()
        delay_sec = delay / 1000 if delay > 0 else 0

        if self._next_delayed_event_call is None:
            self._next_delayed_event_call = self._clock.call_later(
                delay_sec,
                run_as_background_process,
                "_send_on_timeout",
                self._send_on_timeout,
            )
        else:
            self._next_delayed_event_call.reset(delay_sec)

    async def get_all_for_user(self, requester: Requester) -> List[JsonDict]:
        """Return all pending delayed events requested by the given user."""
        await self._request_ratelimiter.ratelimit(requester)
        return await self._store.get_all_delayed_events_for_user(
            requester.user.localpart
        )

    async def _send_event(
        self,
        event: DelayedEventDetails,
        txn_id: Optional[str] = None,
    ) -> None:
        user_id = UserID(event.user_localpart, self._config.server.server_name)
        user_id_str = user_id.to_string()
        # Create a new requester from what data is currently available
        requester = create_requester(
            user_id,
            is_guest=await self._store.is_guest(user_id_str),
            device_id=event.device_id,
        )

        try:
            if event.state_key is not None and event.type == EventTypes.Member:
                membership = event.content.get("membership")
                assert membership is not None
                event_id, _ = await self._room_member_handler.update_membership(
                    requester,
                    target=UserID.from_string(event.state_key),
                    room_id=event.room_id.to_string(),
                    action=membership,
                    content=event.content,
                    origin_server_ts=event.origin_server_ts,
                )
            else:
                event_dict: JsonDict = {
                    "type": event.type,
                    "content": event.content,
                    "room_id": event.room_id.to_string(),
                    "sender": user_id_str,
                }

                if event.origin_server_ts is not None:
                    event_dict["origin_server_ts"] = event.origin_server_ts

                if event.state_key is not None:
                    event_dict["state_key"] = event.state_key

                (
                    sent_event,
                    _,
                ) = await self._event_creation_handler.create_and_send_nonmember_event(
                    requester,
                    event_dict,
                    txn_id=txn_id,
                )
                event_id = sent_event.event_id
        except ShadowBanError:
            event_id = generate_fake_event_id()
        finally:
            # TODO: If this is a temporary error, retry. Otherwise, consider notifying clients of the failure
            try:
                await self._store.delete_processed_delayed_event(
                    event.delay_id, event.user_localpart
                )
            except Exception:
                logger.exception("Failed to delete processed delayed event")

        set_tag("event_id", event_id)

    def _get_current_ts(self) -> Timestamp:
        return Timestamp(self._clock.time_msec())
    def _next_send_ts_changed(self, next_send_ts: Optional[Timestamp]) -> bool:
        # The DB alone knows if the next send time changed after adding/modifying
        # a delayed event, but if we were to ever miss updating our delayed call's
        # firing time, we may miss other updates. So, keep track of changes to
        # the next send time here instead of in the DB.
        cached_next_send_ts = (
            int(self._next_delayed_event_call.getTime() * 1000)
            if self._next_delayed_event_call is not None
            else None
        )
        return next_send_ts != cached_next_send_ts

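A sketch of scheduling a delayed state event through this handler (the handler instance, requester, and IDs are placeholders):

    delay_id = await delayed_events_handler.add(
        requester,
        room_id="!room:example.com",
        event_type="m.room.topic",
        state_key="",
        origin_server_ts=None,  # None: stamp with the actual send time
        content={"topic": "set ten minutes from now"},
        delay=10 * 60 * 1000,   # milliseconds
    )
    # delay_id can later be passed to send(), restart() or cancel(), and the
    # event is cancelled automatically if matching room state changes first.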

@@ -80,9 +80,11 @@ class DirectoryHandler:
     ) -> None:
         # general association creation for both human users and app services

-        for wchar in string.whitespace:
-            if wchar in room_alias.localpart:
-                raise SynapseError(400, "Invalid characters in room alias")
+        # meow: allow specific users to include anything in room aliases
+        if creator not in self.config.meow.validation_override:
+            for wchar in string.whitespace:
+                if wchar in room_alias.localpart:
+                    raise SynapseError(400, "Invalid characters in room alias")

         if ":" in room_alias.localpart:
             raise SynapseError(400, "Invalid character in room alias localpart: ':'.")
@@ -127,7 +129,10 @@ class DirectoryHandler:
         user_id = requester.user.to_string()
         room_alias_str = room_alias.to_string()

-        if len(room_alias_str) > MAX_ALIAS_LENGTH:
+        if (
+            user_id not in self.hs.config.meow.validation_override
+            and len(room_alias_str) > MAX_ALIAS_LENGTH
+        ):
             raise SynapseError(
                 400,
                 "Can't create aliases longer than %s characters" % MAX_ALIAS_LENGTH,
@@ -169,7 +174,7 @@ class DirectoryHandler:
         if not self.config.roomdirectory.is_alias_creation_allowed(
             user_id, room_id, room_alias_str
-        ):
+        ) and not is_admin:
             # Let's just return a generic message, as there may be all sorts of
             # reasons why we said no. TODO: Allow configurable error messages
             # per alias creation rule?
@@ -505,7 +510,7 @@ class DirectoryHandler:
         if not self.config.roomdirectory.is_publishing_room_allowed(
             user_id, room_id, room_aliases
-        ):
+        ) and not await self.auth.is_server_admin(requester):
             # Let's just return a generic message, as there may be all sorts of
             # reasons why we said no. TODO: Allow configurable error messages
             # per alias creation rule?


@@ -1452,7 +1452,7 @@ class FederationHandler:
             room_version_obj, event_dict
         )

-        EventValidator().validate_builder(builder)
+        EventValidator().validate_builder(builder, self.hs.config)

         # Try several times, it could fail with PartialStateConflictError
         # in send_membership_event, cf comment in except block.
@@ -1617,7 +1617,7 @@ class FederationHandler:
         builder = self.event_builder_factory.for_room_version(
             room_version_obj, event_dict
         )
-        EventValidator().validate_builder(builder)
+        EventValidator().validate_builder(builder, self.hs.config)

         (
             event,


@@ -672,7 +672,7 @@ class EventCreationHandler:
             room_version_obj, event_dict
         )

-        self.validator.validate_builder(builder)
+        self.validator.validate_builder(builder, self.config)

         is_exempt = await self._is_exempt_from_privacy_policy(builder, requester)
         if require_consent and not is_exempt:
@@ -1330,6 +1330,8 @@ class EventCreationHandler:
         Raises:
             SynapseError if the event is invalid.
         """
+        if event.sender in self.config.meow.validation_override:
+            return

         relation = relation_from_event(event)
         if not relation:
@@ -1749,7 +1751,8 @@ class EventCreationHandler:
         await self._maybe_kick_guest_users(event, context)

-        if event.type == EventTypes.CanonicalAlias:
+        validation_override = event.sender in self.config.meow.validation_override
+        if event.type == EventTypes.CanonicalAlias and not validation_override:
             # Validate a newly added alias or newly added alt_aliases.

             original_alias = None
@@ -2104,7 +2107,7 @@ class EventCreationHandler:
                 builder = self.event_builder_factory.for_room_version(
                     original_event.room_version, third_party_result
                 )
-                self.validator.validate_builder(builder)
+                self.validator.validate_builder(builder, self.config)
             except SynapseError as e:
                 raise Exception(
                     "Third party rules module created an invalid event: " + e.msg,


@@ -20,11 +20,12 @@
 #
 import logging
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Optional

 from synapse.api.constants import ReceiptTypes
 from synapse.api.errors import SynapseError
 from synapse.util.async_helpers import Linearizer
+from synapse.types import JsonDict

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -39,7 +40,11 @@ class ReadMarkerHandler:
         self.read_marker_linearizer = Linearizer(name="read_marker")

     async def received_client_read_marker(
-        self, room_id: str, user_id: str, event_id: str
+        self,
+        room_id: str,
+        user_id: str,
+        event_id: str,
+        extra_content: Optional[JsonDict] = None,
     ) -> None:
         """Updates the read marker for a given user in a given room if the event ID given
         is ahead in the stream relative to the current read marker.
@@ -71,7 +76,7 @@ class ReadMarkerHandler:
         should_update = event_ordering > old_event_ordering

         if should_update:
-            content = {"event_id": event_id}
+            content = {"event_id": event_id, **(extra_content or {})}
             await self.account_data_handler.add_account_data_to_room(
                 user_id, room_id, ReceiptTypes.FULLY_READ, content
             )


@@ -181,6 +181,7 @@ class ReceiptsHandler:
         user_id: UserID,
         event_id: str,
         thread_id: Optional[str],
+        extra_content: Optional[JsonDict] = None,
     ) -> None:
         """Called when a client tells us a local user has read up to the given
         event_id in the room.
@@ -197,7 +198,7 @@ class ReceiptsHandler:
             user_id=user_id.to_string(),
             event_ids=[event_id],
             thread_id=thread_id,
-            data={"ts": int(self.clock.time_msec())},
+            data={"ts": int(self.clock.time_msec()), **(extra_content or {})},
         )

         is_new = await self._handle_new_receipts([receipt])

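The effect on the receipt payload, sketched with placeholder values (the vendor-prefixed key is hypothetical):

    extra_content = {"com.example.device": "phone"}
    data = {"ts": 1700000000000, **(extra_content or {})}
    # -> {"ts": 1700000000000, "com.example.device": "phone"}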

@@ -148,22 +148,25 @@ class RegistrationHandler:
         localpart: str,
         guest_access_token: Optional[str] = None,
         assigned_user_id: Optional[str] = None,
+        allow_invalid: bool = False,
         inhibit_user_in_use_error: bool = False,
     ) -> None:
-        if types.contains_invalid_mxid_characters(localpart):
-            raise SynapseError(
-                400,
-                "User ID can only contain characters a-z, 0-9, or '=_-./+'",
-                Codes.INVALID_USERNAME,
-            )
+        # meow: allow admins to register invalid user ids
+        if not allow_invalid:
+            if types.contains_invalid_mxid_characters(localpart):
+                raise SynapseError(
+                    400,
+                    "User ID can only contain characters a-z, 0-9, or '=_-./+'",
+                    Codes.INVALID_USERNAME,
+                )

         if not localpart:
             raise SynapseError(400, "User ID cannot be empty", Codes.INVALID_USERNAME)

         if localpart[0] == "_":
             raise SynapseError(
                 400, "User ID may not begin with _", Codes.INVALID_USERNAME
             )

         user = UserID(localpart, self.hs.hostname)
         user_id = user.to_string()
@@ -177,14 +180,16 @@ class RegistrationHandler:
                 "A different user ID has already been registered for this session",
             )

-        self.check_user_id_not_appservice_exclusive(user_id)
-
-        if len(user_id) > MAX_USERID_LENGTH:
-            raise SynapseError(
-                400,
-                "User ID may not be longer than %s characters" % (MAX_USERID_LENGTH,),
-                Codes.INVALID_USERNAME,
-            )
+        # meow: allow admins to register reserved user ids and long user ids
+        if not allow_invalid:
+            self.check_user_id_not_appservice_exclusive(user_id)
+
+            if len(user_id) > MAX_USERID_LENGTH:
+                raise SynapseError(
+                    400,
+                    "User ID may not be longer than %s characters" % (MAX_USERID_LENGTH,),
+                    Codes.INVALID_USERNAME,
+                )

         users = await self.store.get_users_by_id_case_insensitive(user_id)
         if users:
@@ -290,7 +295,12 @@ class RegistrationHandler:
             await self.auth_blocking.check_auth_blocking(threepid=threepid)

         if localpart is not None:
-            await self.check_username(localpart, guest_access_token=guest_access_token)
+            allow_invalid = by_admin and self.hs.config.meow.admin_api_register_invalid
+            await self.check_username(
+                localpart,
+                guest_access_token=guest_access_token,
+                allow_invalid=allow_invalid,
+            )

         was_guest = guest_access_token is not None

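A sketch of the loosened path (placeholder localpart): with `admin_api_register_invalid` left at its default of True, an admin-initiated registration may use localparts that normal registration rejects:

    await registration_handler.check_username(
        "UPPERCASE.bridge.user",  # contains characters normal registration rejects
        allow_invalid=True,       # skips the character, exclusivity and length checks
    )

Note that the empty-localpart and leading-underscore checks still apply either way.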

@@ -894,11 +894,23 @@ class RoomCreationHandler:

         self._validate_room_config(config, visibility)

-        room_id = await self._generate_and_create_room_id(
-            creator_id=user_id,
-            is_public=is_public,
-            room_version=room_version,
-        )
+        if "room_id" in config:
+            room_id = config["room_id"]
+            try:
+                await self.store.store_room(
+                    room_id=room_id,
+                    room_creator_user_id=user_id,
+                    is_public=is_public,
+                    room_version=room_version,
+                )
+            except StoreError:
+                raise SynapseError(409, "Room ID already in use", errcode="M_CONFLICT")
+        else:
+            room_id = await self._generate_and_create_room_id(
+                creator_id=user_id,
+                is_public=is_public,
+                room_version=room_version,
+            )

         # Check whether this visibility value is blocked by a third party module
         allowed_by_third_party_rules = await (
@@ -915,7 +927,7 @@ class RoomCreationHandler:
                 room_aliases.append(room_alias.to_string())

         if not self.config.roomdirectory.is_publishing_room_allowed(
             user_id, room_id, room_aliases
-        ):
+        ) and not is_requester_admin:
             # allow room creation to continue but do not publish room
             await self.store.set_room_is_public(room_id, False)
@@ -1190,7 +1202,7 @@ class RoomCreationHandler:
         # Please update the docs for `default_power_level_content_override` when
         # updating the `events` dict below
         power_level_content: JsonDict = {
-            "users": {creator_id: 100},
+            "users": {creator_id: 9001},
             "users_default": 0,
             "events": {
                 EventTypes.Name: 50,

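A sketch of the new escape hatch from the caller's side: the create-room config may now carry a pinned room ID (placeholder shown; presumably the full ID including sigil and server name, since it is stored verbatim):

    room_config = {
        "room_id": "!fixedroomid:example.com",
        "preset": "private_chat",
    }
    # If the ID is already taken, store_room raises and the request fails
    # with 409 M_CONFLICT.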

@@ -797,26 +797,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             content.pop("displayname", None)
             content.pop("avatar_url", None)

-        if len(content.get("displayname") or "") > MAX_DISPLAYNAME_LEN:
-            raise SynapseError(
-                400,
-                f"Displayname is too long (max {MAX_DISPLAYNAME_LEN})",
-                errcode=Codes.BAD_JSON,
-            )
-
-        if len(content.get("avatar_url") or "") > MAX_AVATAR_URL_LEN:
-            raise SynapseError(
-                400,
-                f"Avatar URL is too long (max {MAX_AVATAR_URL_LEN})",
-                errcode=Codes.BAD_JSON,
-            )
-
-        if "avatar_url" in content and content.get("avatar_url") is not None:
-            if not await self.profile_handler.check_avatar_size_and_mime_type(
-                content["avatar_url"],
-            ):
-                raise SynapseError(403, "This avatar is not allowed", Codes.FORBIDDEN)
-
         # The event content should *not* include the authorising user as
         # it won't be properly signed. Strip it out since it might come
         # back from a client updating a display name / avatar.


@@ -267,7 +267,7 @@ class SlidingSyncHandler:

         if relevant_rooms_to_send_map:
             with start_active_span("sliding_sync.generate_room_entries"):
-                await concurrently_execute(handle_room, relevant_rooms_to_send_map, 10)
+                await concurrently_execute(handle_room, relevant_rooms_to_send_map, 20)

         extensions = await self.extensions.get_extensions_response(
             sync_config=sync_config,
@@ -495,6 +495,24 @@ class SlidingSyncHandler:
             room_sync_config.timeline_limit,
         )

+        # Handle state resets. For example, if we see
+        # `room_membership_for_user_at_to_token.event_id=None and
+        # room_membership_for_user_at_to_token.membership is not None`, we should
+        # indicate to the client that a state reset happened. Perhaps we should indicate
+        # this by setting `initial: True` and empty `required_state: []`.
+        state_reset_out_of_room = False
+        if (
+            room_membership_for_user_at_to_token.event_id is None
+            and room_membership_for_user_at_to_token.membership is not None
+        ):
+            # We only expect the `event_id` to be `None` if you've been state reset out
+            # of the room (meaning you're no longer in the room). We could put this as
+            # part of the if-statement above but we want to handle every case where
+            # `event_id` is `None`.
+            assert room_membership_for_user_at_to_token.membership is Membership.LEAVE
+            state_reset_out_of_room = True
+
         # Determine whether we should limit the timeline to the token range.
         #
         # We should return historical messages (before token range) in the
@@ -527,7 +545,7 @@ class SlidingSyncHandler:
         from_bound = None
         initial = True
         ignore_timeline_bound = False
-        if from_token and not newly_joined:
+        if from_token and not newly_joined and not state_reset_out_of_room:
             room_status = previous_connection_state.rooms.have_sent_room(room_id)
             if room_status.status == HaveSentRoomFlag.LIVE:
                 from_bound = from_token.stream_token.room_key
@@ -732,12 +750,6 @@ class SlidingSyncHandler:
                 stripped_state.append(strip_event(invite_or_knock_event))

-        # TODO: Handle state resets. For example, if we see
-        # `room_membership_for_user_at_to_token.event_id=None and
-        # room_membership_for_user_at_to_token.membership is not None`, we should
-        # indicate to the client that a state reset happened. Perhaps we should indicate
-        # this by setting `initial: True` and empty `required_state`.
-
         # Get the changes to current state in the token range from the
         # `current_state_delta_stream` table.
         #
@@ -1051,6 +1063,22 @@ class SlidingSyncHandler:
             if new_bump_stamp is not None:
                 bump_stamp = new_bump_stamp

+            if bump_stamp < 0:
+                # We never want to send down negative stream orderings, as you can't
+                # sensibly compare positive and negative stream orderings (they have
+                # different meanings).
+                #
+                # A negative bump stamp here can only happen if the stream ordering
+                # of the membership event is negative (and there are no further bump
+                # stamps), which can happen if the server leaves and deletes a room,
+                # and then rejoins it.
+                #
+                # To deal with this, we just set the bump stamp to zero, which will
+                # shove this room to the bottom of the list. This is OK as the
+                # moment a new message happens in the room it will get put into a
+                # sensible order again.
+                bump_stamp = 0
+
         unstable_expanded_timeline = False
         prev_room_sync_config = previous_connection_state.room_configs.get(room_id)
         # Record the `room_sync_config` if we're `ignore_timeline_bound` (which means
@@ -1171,8 +1199,8 @@ class SlidingSyncHandler:
             # `SCHEMA_COMPAT_VERSION` and run the foreground update for
             # `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots`
             # (tracked by https://github.com/element-hq/synapse/issues/17623)
-            await self.store.have_finished_sliding_sync_background_jobs()
-            and latest_room_bump_stamp is None
+            latest_room_bump_stamp is None
+            and await self.store.have_finished_sliding_sync_background_jobs()
         ):
             return None


@ -14,7 +14,18 @@
import itertools import itertools
import logging import logging
from typing import TYPE_CHECKING, AbstractSet, Dict, Mapping, Optional, Sequence, Set from typing import (
TYPE_CHECKING,
AbstractSet,
ChainMap,
Dict,
Mapping,
MutableMapping,
Optional,
Sequence,
Set,
cast,
)
from typing_extensions import assert_never from typing_extensions import assert_never
@ -38,6 +49,7 @@ from synapse.types.handlers.sliding_sync import (
SlidingSyncConfig, SlidingSyncConfig,
SlidingSyncResult, SlidingSyncResult,
) )
from synapse.util.async_helpers import concurrently_execute
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
@ -106,6 +118,8 @@ class SlidingSyncExtensionHandler:
if sync_config.extensions.account_data is not None: if sync_config.extensions.account_data is not None:
account_data_response = await self.get_account_data_extension_response( account_data_response = await self.get_account_data_extension_response(
sync_config=sync_config, sync_config=sync_config,
previous_connection_state=previous_connection_state,
new_connection_state=new_connection_state,
actual_lists=actual_lists, actual_lists=actual_lists,
actual_room_ids=actual_room_ids, actual_room_ids=actual_room_ids,
account_data_request=sync_config.extensions.account_data, account_data_request=sync_config.extensions.account_data,
@ -348,6 +362,8 @@ class SlidingSyncExtensionHandler:
async def get_account_data_extension_response( async def get_account_data_extension_response(
self, self,
sync_config: SlidingSyncConfig, sync_config: SlidingSyncConfig,
previous_connection_state: "PerConnectionState",
new_connection_state: "MutablePerConnectionState",
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList], actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str], actual_room_ids: Set[str],
account_data_request: SlidingSyncConfig.Extensions.AccountDataExtension, account_data_request: SlidingSyncConfig.Extensions.AccountDataExtension,
@ -380,29 +396,39 @@ class SlidingSyncExtensionHandler:
) )
) )
# TODO: This should take into account the `from_token` and `to_token`
have_push_rules_changed = await self.store.have_push_rules_changed_for_user( have_push_rules_changed = await self.store.have_push_rules_changed_for_user(
user_id, from_token.stream_token.push_rules_key user_id, from_token.stream_token.push_rules_key
) )
if have_push_rules_changed: if have_push_rules_changed:
global_account_data_map = dict(global_account_data_map)
# TODO: This should take into account the `from_token` and `to_token` # TODO: This should take into account the `from_token` and `to_token`
global_account_data_map[ global_account_data_map[
AccountDataTypes.PUSH_RULES AccountDataTypes.PUSH_RULES
] = await self.push_rules_handler.push_rules_for_user(sync_config.user) ] = await self.push_rules_handler.push_rules_for_user(sync_config.user)
else: else:
# TODO: This should take into account the `to_token` # TODO: This should take into account the `to_token`
all_global_account_data = await self.store.get_global_account_data_for_user( immutable_global_account_data_map = (
user_id await self.store.get_global_account_data_for_user(user_id)
) )
global_account_data_map = dict(all_global_account_data) # Use a `ChainMap` to avoid copying the immutable data from the cache
# TODO: This should take into account the `to_token` global_account_data_map = ChainMap(
global_account_data_map[ {
AccountDataTypes.PUSH_RULES # TODO: This should take into account the `to_token`
] = await self.push_rules_handler.push_rules_for_user(sync_config.user) AccountDataTypes.PUSH_RULES: await self.push_rules_handler.push_rules_for_user(
sync_config.user
)
},
# Cast is safe because `ChainMap` only mutates the top-most map,
# see https://github.com/python/typeshed/issues/8430
cast(
MutableMapping[str, JsonMapping], immutable_global_account_data_map
),
)
# Fetch room account data # Fetch room account data
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]] = {} #
account_data_by_room_map: MutableMapping[str, Mapping[str, JsonMapping]] = {}
relevant_room_ids = self.find_relevant_room_ids_for_extension( relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=account_data_request.lists, requested_lists=account_data_request.lists,
requested_room_ids=account_data_request.rooms, requested_room_ids=account_data_request.rooms,
@ -410,25 +436,153 @@ class SlidingSyncExtensionHandler:
actual_room_ids=actual_room_ids, actual_room_ids=actual_room_ids,
) )
if len(relevant_room_ids) > 0: if len(relevant_room_ids) > 0:
# We need to handle the different cases depending on if we have sent
# down account data previously or not, so we split the relevant
# rooms up into different collections based on status.
live_rooms = set()
previously_rooms: Dict[str, int] = {}
initial_rooms = set()
for room_id in relevant_room_ids:
if not from_token:
initial_rooms.add(room_id)
continue
room_status = previous_connection_state.account_data.have_sent_room(
room_id
)
if room_status.status == HaveSentRoomFlag.LIVE:
live_rooms.add(room_id)
elif room_status.status == HaveSentRoomFlag.PREVIOUSLY:
assert room_status.last_token is not None
previously_rooms[room_id] = room_status.last_token
elif room_status.status == HaveSentRoomFlag.NEVER:
initial_rooms.add(room_id)
else:
assert_never(room_status.status)
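The `live` / `previously` / `initial` split above is the heart of the per-connection bookkeeping: it decides whether a room needs no catch-up, partial catch-up from a recorded token, or a full initial payload. A compact sketch of the same classification, with simplified stand-ins for the real status types:

```python
import enum
from dataclasses import dataclass
from typing import Dict, Optional, Set, Tuple


class HaveSentRoomFlag(enum.Enum):
    LIVE = 1        # sent down and fully up to date
    PREVIOUSLY = 2  # sent down, but only up to `last_token`
    NEVER = 3       # never sent down this connection


@dataclass
class RoomStatus:
    status: HaveSentRoomFlag
    last_token: Optional[int] = None


def split_rooms(
    relevant: Set[str],
    statuses: Dict[str, RoomStatus],
    have_from_token: bool,
) -> Tuple[Set[str], Dict[str, int], Set[str]]:
    """Split rooms into (live, previously, initial) buckets."""
    live: Set[str] = set()
    previously: Dict[str, int] = {}
    initial: Set[str] = set()
    for room_id in relevant:
        if not have_from_token:
            # Without a from_token everything is an initial sync.
            initial.add(room_id)
            continue
        status = statuses.get(room_id, RoomStatus(HaveSentRoomFlag.NEVER))
        if status.status == HaveSentRoomFlag.LIVE:
            live.add(room_id)
        elif status.status == HaveSentRoomFlag.PREVIOUSLY:
            assert status.last_token is not None
            previously[room_id] = status.last_token
        else:
            initial.add(room_id)
    return live, previously, initial
```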
# We fetch all room account data since the from_token. This is so
# that we can record which rooms have updates that haven't been sent
# down.
#
# Mapping from room_id to mapping of `type` to `content` of room account
# data events.
all_updates_since_the_from_token: Mapping[
str, Mapping[str, JsonMapping]
] = {}
if from_token is not None: if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token` # TODO: This should take into account the `from_token` and `to_token`
account_data_by_room_map = ( all_updates_since_the_from_token = (
await self.store.get_updated_room_account_data_for_user( await self.store.get_updated_room_account_data_for_user(
user_id, from_token.stream_token.account_data_key user_id, from_token.stream_token.account_data_key
) )
) )
else:
# TODO: This should take into account the `to_token` # Add room tags
account_data_by_room_map = ( #
await self.store.get_room_account_data_for_user(user_id) # TODO: This should take into account the `from_token` and `to_token`
tags_by_room = await self.store.get_updated_tags(
user_id, from_token.stream_token.account_data_key
)
for room_id, tags in tags_by_room.items():
all_updates_since_the_from_token.setdefault(room_id, {})[
AccountDataTypes.TAG
] = {"tags": tags}
# For live rooms we just get the updates from `all_updates_since_the_from_token`
if live_rooms:
for room_id in all_updates_since_the_from_token.keys() & live_rooms:
account_data_by_room_map[room_id] = (
all_updates_since_the_from_token[room_id]
)
# For previously and initial rooms we query each room individually.
if previously_rooms or initial_rooms:
async def handle_previously(room_id: str) -> None:
# Either get updates or all account data in the room
# depending on if the room state is PREVIOUSLY or NEVER.
previous_token = previously_rooms.get(room_id)
if previous_token is not None:
room_account_data = await (
self.store.get_updated_room_account_data_for_user_for_room(
user_id=user_id,
room_id=room_id,
from_stream_id=previous_token,
to_stream_id=to_token.account_data_key,
)
)
# Add room tags
changed = await self.store.has_tags_changed_for_room(
user_id=user_id,
room_id=room_id,
from_stream_id=previous_token,
to_stream_id=to_token.account_data_key,
)
if changed:
# XXX: Ideally, this should take into account the `to_token`
# and return the set of tags at that time but we don't track
# changes to tags so we just have to return all tags for the
# room.
immutable_tag_map = await self.store.get_tags_for_room(
user_id, room_id
)
room_account_data[AccountDataTypes.TAG] = {
"tags": immutable_tag_map
}
# Only add an entry if there were any updates.
if room_account_data:
account_data_by_room_map[room_id] = room_account_data
else:
# TODO: This should take into account the `to_token`
immutable_room_account_data = (
await self.store.get_account_data_for_room(user_id, room_id)
)
# Add room tags
#
# XXX: Ideally, this should take into account the `to_token`
# and return the set of tags at that time but we don't track
# changes to tags so we just have to return all tags for the
# room.
immutable_tag_map = await self.store.get_tags_for_room(
user_id, room_id
)
account_data_by_room_map[room_id] = ChainMap(
{AccountDataTypes.TAG: {"tags": immutable_tag_map}}
if immutable_tag_map
else {},
# Cast is safe because `ChainMap` only mutates the top-most map,
# see https://github.com/python/typeshed/issues/8430
cast(
MutableMapping[str, JsonMapping],
immutable_room_account_data,
),
)
# We handle these rooms concurrently to speed it up.
await concurrently_execute(
handle_previously,
previously_rooms.keys() | initial_rooms,
limit=20,
) )
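`concurrently_execute` runs a coroutine over a collection with a cap on how many invocations run at once. Synapse's helper is Twisted-based; an asyncio sketch that shows only the semantics (not the actual implementation) might look like:

```python
import asyncio
from typing import Awaitable, Callable, Iterable, TypeVar

T = TypeVar("T")


async def concurrently_execute(
    func: Callable[[T], Awaitable[None]], args: Iterable[T], limit: int
) -> None:
    # A semaphore caps how many invocations are in flight at any time.
    sem = asyncio.Semaphore(limit)

    async def run_one(arg: T) -> None:
        async with sem:
            await func(arg)

    await asyncio.gather(*(run_one(a) for a in args))
```

Usage then mirrors the handler above: `await concurrently_execute(handle_previously, previously_rooms.keys() | initial_rooms, limit=20)`.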
# Filter down to the relevant rooms # Now record which rooms are now up to date, and which rooms have
account_data_by_room_map = { # pending updates to send.
room_id: account_data_map new_connection_state.account_data.record_sent_rooms(relevant_room_ids)
for room_id, account_data_map in account_data_by_room_map.items() missing_updates = (
if room_id in relevant_room_ids all_updates_since_the_from_token.keys() - relevant_room_ids
} )
if missing_updates:
# If we have missing updates then we must have had a from_token.
assert from_token is not None
new_connection_state.account_data.record_unsent_rooms(
missing_updates, from_token.stream_token.account_data_key
)
return SlidingSyncResult.Extensions.AccountDataExtension( return SlidingSyncResult.Extensions.AccountDataExtension(
global_account_data_map=global_account_data_map, global_account_data_map=global_account_data_map,
@ -534,7 +688,10 @@ class SlidingSyncExtensionHandler:
# For rooms we've previously sent down, but aren't up to date, we # For rooms we've previously sent down, but aren't up to date, we
# need to use the from token from the room status. # need to use the from token from the room status.
if previously_rooms: if previously_rooms:
for room_id, receipt_token in previously_rooms.items(): # Fetch any missing rooms concurrently.
async def handle_previously_room(room_id: str) -> None:
receipt_token = previously_rooms[room_id]
# TODO: Limit the number of receipts we're about to send down # TODO: Limit the number of receipts we're about to send down
# for the room, if its too many we should TODO # for the room, if its too many we should TODO
previously_receipts = ( previously_receipts = (
@ -546,6 +703,10 @@ class SlidingSyncExtensionHandler:
) )
fetched_receipts.extend(previously_receipts) fetched_receipts.extend(previously_receipts)
await concurrently_execute(
handle_previously_room, previously_rooms.keys(), 20
)
if initial_rooms: if initial_rooms:
# We also always send down receipts for the current user. # We also always send down receipts for the current user.
user_receipts = ( user_receipts = (

View File

@ -27,6 +27,7 @@ from typing import (
Set, Set,
Tuple, Tuple,
Union, Union,
cast,
) )
import attr import attr
@ -39,6 +40,7 @@ from synapse.api.constants import (
EventTypes, EventTypes,
Membership, Membership,
) )
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import StrippedStateEvent from synapse.events import StrippedStateEvent
from synapse.events.utils import parse_stripped_state_event from synapse.events.utils import parse_stripped_state_event
from synapse.logging.opentracing import start_active_span, trace from synapse.logging.opentracing import start_active_span, trace
@ -54,7 +56,6 @@ from synapse.storage.roommember import (
) )
from synapse.types import ( from synapse.types import (
MutableStateMap, MutableStateMap,
PersistedEventPosition,
RoomStreamToken, RoomStreamToken,
StateMap, StateMap,
StrCollection, StrCollection,
@ -79,6 +80,12 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
# type of a dictionary lookup and subsequent type narrowing.
UNSET_SENTINEL = object()
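Defining the sentinel as an enum member, rather than a bare `object()`, matters because mypy can narrow on enum literals. A minimal example of the pattern (illustrative only):

```python
import enum
from typing import Dict, Optional, Union


class Sentinel(enum.Enum):
    UNSET_SENTINEL = object()


cache: Dict[str, Optional[str]] = {"a": None}

# `None` is a legitimate cached value here, so it can't double as the
# "missing" marker; the enum member can, and mypy narrows it away.
value: Union[Optional[str], Sentinel] = cache.get("a", Sentinel.UNSET_SENTINEL)
if value is Sentinel.UNSET_SENTINEL:
    value = "recomputed on cache miss"
# After the branch, mypy knows `value` is `Optional[str]`.
```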
# Helper definition for the types that we might return. We do this to avoid # Helper definition for the types that we might return. We do this to avoid
# copying data between types (which can be expensive for many rooms). # copying data between types (which can be expensive for many rooms).
RoomsForUserType = Union[RoomsForUserStateReset, RoomsForUser, RoomsForUserSlidingSync] RoomsForUserType = Union[RoomsForUserStateReset, RoomsForUser, RoomsForUserSlidingSync]
@ -117,12 +124,6 @@ class SlidingSyncInterestedRooms:
dm_room_ids: AbstractSet[str] dm_room_ids: AbstractSet[str]
class Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
# type of a dictionary lookup and subsequent type narrowing.
UNSET_SENTINEL = object()
def filter_membership_for_sync( def filter_membership_for_sync(
*, *,
user_id: str, user_id: str,
@ -219,19 +220,38 @@ class SlidingSyncRoomLists:
# include rooms that are outside the list ranges. # include rooms that are outside the list ranges.
all_rooms: Set[str] = set() all_rooms: Set[str] = set()
# Note: this won't include rooms the user has left themselves. We add back
# `newly_left` rooms below. This is more efficient than fetching all rooms and
# then filtering out the old left rooms.
room_membership_for_user_map = await self.store.get_sliding_sync_rooms_for_user( room_membership_for_user_map = await self.store.get_sliding_sync_rooms_for_user(
user_id user_id
) )
# Remove invites from ignored users
ignored_users = await self.store.ignored_users(user_id)
if ignored_users:
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map)
# Make a copy so we don't run into an error: `dictionary changed size during
# iteration`, when we remove items
for room_id in list(room_membership_for_user_map.keys()):
room_for_user_sliding_sync = room_membership_for_user_map[room_id]
if (
room_for_user_sliding_sync.membership == Membership.INVITE
and room_for_user_sliding_sync.sender in ignored_users
):
room_membership_for_user_map.pop(room_id, None)
changes = await self._get_rewind_changes_to_current_membership_to_token( changes = await self._get_rewind_changes_to_current_membership_to_token(
sync_config.user, room_membership_for_user_map, to_token=to_token sync_config.user, room_membership_for_user_map, to_token=to_token
) )
if changes: if changes:
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map) room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id, change in changes.items(): for room_id, change in changes.items():
if change is None: if change is None:
# Remove rooms that the user joined after the `to_token` # Remove rooms that the user joined after the `to_token`
room_membership_for_user_map.pop(room_id) room_membership_for_user_map.pop(room_id, None)
continue continue
existing_room = room_membership_for_user_map.get(room_id) existing_room = room_membership_for_user_map.get(room_id)
@ -244,34 +264,11 @@ class SlidingSyncRoomLists:
event_id=change.event_id, event_id=change.event_id,
event_pos=change.event_pos, event_pos=change.event_pos,
room_version_id=change.room_version_id, room_version_id=change.room_version_id,
# We keep the current state of the room though # We keep the state of the room though
has_known_state=existing_room.has_known_state,
room_type=existing_room.room_type, room_type=existing_room.room_type,
is_encrypted=existing_room.is_encrypted, is_encrypted=existing_room.is_encrypted,
) )
else:
# This can happen if we get "state reset" out of the room
# after the `to_token`. In other words, there is no membership
# for the room after the `to_token` but we see membership in
# the token range.
# Get the state at the time. Note that room type never changes,
# so we can just get current room type
room_type = await self.store.get_room_type(room_id)
is_encrypted = await self.get_is_encrypted_for_room_at_token(
room_id, to_token.room_key
)
# Add back rooms that the user was state-reset out of after `to_token`
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_id=room_id,
sender=change.sender,
membership=change.membership,
event_id=change.event_id,
event_pos=change.event_pos,
room_version_id=change.room_version_id,
room_type=room_type,
is_encrypted=is_encrypted,
)
( (
newly_joined_room_ids, newly_joined_room_ids,
@ -281,43 +278,88 @@ class SlidingSyncRoomLists:
) )
dm_room_ids = await self._get_dm_rooms_for_user(user_id) dm_room_ids = await self._get_dm_rooms_for_user(user_id)
# Handle state resets in the from -> to token range. # Add back `newly_left` rooms (rooms left in the from -> to token range).
state_reset_rooms = ( #
# We do this because `get_sliding_sync_rooms_for_user(...)` doesn't include
# rooms that the user left themselves as it's more efficient to add them back
# here than to fetch all rooms and then filter out the old left rooms. The user
# only leaves a room once in a blue moon so this barely needs to run.
#
missing_newly_left_rooms = (
newly_left_room_map.keys() - room_membership_for_user_map.keys() newly_left_room_map.keys() - room_membership_for_user_map.keys()
) )
if state_reset_rooms: if missing_newly_left_rooms:
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map) room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id in ( for room_id in missing_newly_left_rooms:
newly_left_room_map.keys() - room_membership_for_user_map.keys() newly_left_room_for_user = newly_left_room_map[room_id]
): # This should be a given
# Get the state at the time. Note that room type never changes, assert newly_left_room_for_user.membership == Membership.LEAVE
# so we can just get current room type
room_type = await self.store.get_room_type(room_id)
is_encrypted = await self.get_is_encrypted_for_room_at_token(
room_id, newly_left_room_map[room_id].to_room_stream_token()
)
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync( # Add back `newly_left` rooms
room_id=room_id, #
sender=None, # Check for membership and state in the Sliding Sync tables as it's just
membership=Membership.LEAVE, # another membership
event_id=None, newly_left_room_for_user_sliding_sync = (
event_pos=newly_left_room_map[room_id], await self.store.get_sliding_sync_room_for_user(user_id, room_id)
room_version_id=await self.store.get_room_version_id(room_id),
room_type=room_type,
is_encrypted=is_encrypted,
) )
# If the membership exists, it's just a normal user left the room on
# their own
if newly_left_room_for_user_sliding_sync is not None:
room_membership_for_user_map[room_id] = (
newly_left_room_for_user_sliding_sync
)
change = changes.get(room_id)
if change is not None:
# Update room membership events to the point in time of the `to_token`
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_id=room_id,
sender=change.sender,
membership=change.membership,
event_id=change.event_id,
event_pos=change.event_pos,
room_version_id=change.room_version_id,
# We keep the state of the room though
has_known_state=newly_left_room_for_user_sliding_sync.has_known_state,
room_type=newly_left_room_for_user_sliding_sync.room_type,
is_encrypted=newly_left_room_for_user_sliding_sync.is_encrypted,
)
# If we are `newly_left` from the room but can't find any membership,
# then we have been "state reset" out of the room
else:
# Get the state at the time. We can't read from the Sliding Sync
# tables because the user has no membership in the room according to
# the state (thanks to the state reset).
#
# Note: `room_type` never changes, so we can just get current room
# type
room_type = await self.store.get_room_type(room_id)
has_known_state = room_type is not ROOM_UNKNOWN_SENTINEL
if isinstance(room_type, StateSentinel):
room_type = None
# Get the encryption status at the time of the token
is_encrypted = await self.get_is_encrypted_for_room_at_token(
room_id,
newly_left_room_for_user.event_pos.to_room_stream_token(),
)
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_id=room_id,
sender=newly_left_room_for_user.sender,
membership=newly_left_room_for_user.membership,
event_id=newly_left_room_for_user.event_id,
event_pos=newly_left_room_for_user.event_pos,
room_version_id=newly_left_room_for_user.room_version_id,
has_known_state=has_known_state,
room_type=room_type,
is_encrypted=is_encrypted,
)
if sync_config.lists: if sync_config.lists:
sync_room_map = { sync_room_map = room_membership_for_user_map
room_id: room_membership_for_user
for room_id, room_membership_for_user in room_membership_for_user_map.items()
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=room_membership_for_user,
newly_left=room_id in newly_left_room_map,
)
}
with start_active_span("assemble_sliding_window_lists"): with start_active_span("assemble_sliding_window_lists"):
for list_key, list_config in sync_config.lists.items(): for list_key, list_config in sync_config.lists.items():
# Apply filters # Apply filters
@ -326,6 +368,7 @@ class SlidingSyncRoomLists:
filtered_sync_room_map = await self.filter_rooms_using_tables( filtered_sync_room_map = await self.filter_rooms_using_tables(
user_id, user_id,
sync_room_map, sync_room_map,
previous_connection_state,
list_config.filters, list_config.filters,
to_token, to_token,
dm_room_ids, dm_room_ids,
@ -353,13 +396,34 @@ class SlidingSyncRoomLists:
ops: List[SlidingSyncResult.SlidingWindowList.Operation] = [] ops: List[SlidingSyncResult.SlidingWindowList.Operation] = []
if list_config.ranges: if list_config.ranges:
if list_config.ranges == [(0, len(filtered_sync_room_map) - 1)]: # Optimization: If we are asking for the full range, we don't
# If we are asking for the full range, we don't need to sort the list. # need to sort the list.
sorted_room_info = list(filtered_sync_room_map.values()) if (
# We're looking for a single range that covers the entire list
len(list_config.ranges) == 1
# Range starts at 0
and list_config.ranges[0][0] == 0
# And the range extends to the end of the list or more. Each
# side is inclusive.
and list_config.ranges[0][1]
>= len(filtered_sync_room_map) - 1
):
sorted_room_info: List[RoomsForUserType] = list(
filtered_sync_room_map.values()
)
else: else:
# Sort the list # Sort the list
sorted_room_info = await self.sort_rooms_using_tables( sorted_room_info = await self.sort_rooms(
filtered_sync_room_map, to_token # Cast is safe because RoomsForUserSlidingSync is part
# of the `RoomsForUserType` union. Why can't it detect this?
cast(
Dict[str, RoomsForUserType], filtered_sync_room_map
),
to_token,
# We only need to sort the rooms up to the end
# of the largest range. Both sides of range are
# inclusive so we `+ 1`.
limit=max(range[1] + 1 for range in list_config.ranges),
) )
for range in list_config.ranges: for range in list_config.ranges:
@ -402,12 +466,15 @@ class SlidingSyncRoomLists:
) )
lists[list_key] = SlidingSyncResult.SlidingWindowList( lists[list_key] = SlidingSyncResult.SlidingWindowList(
count=len(sorted_room_info), count=len(filtered_sync_room_map),
ops=ops, ops=ops,
) )
if sync_config.room_subscriptions: if sync_config.room_subscriptions:
with start_active_span("assemble_room_subscriptions"): with start_active_span("assemble_room_subscriptions"):
# TODO: It would be nice to avoid these copies
room_membership_for_user_map = dict(room_membership_for_user_map)
# Find which rooms are partially stated and may need to be filtered out # Find which rooms are partially stated and may need to be filtered out
# depending on the `required_state` requested (see below). # depending on the `required_state` requested (see below).
partial_state_rooms = await self.store.get_partial_rooms() partial_state_rooms = await self.store.get_partial_rooms()
@ -416,10 +483,20 @@ class SlidingSyncRoomLists:
room_id, room_id,
room_subscription, room_subscription,
) in sync_config.room_subscriptions.items(): ) in sync_config.room_subscriptions.items():
if room_id not in room_membership_for_user_map: # Check if we have a membership for the room, but didn't pull it out
# above. This could be e.g. a leave that we don't pull out by
# default.
current_room_entry = (
await self.store.get_sliding_sync_room_for_user(
user_id, room_id
)
)
if not current_room_entry:
# TODO: Handle rooms the user isn't in. # TODO: Handle rooms the user isn't in.
continue continue
room_membership_for_user_map[room_id] = current_room_entry
all_rooms.add(room_id) all_rooms.add(room_id)
# Take the superset of the `RoomSyncConfig` for each room. # Take the superset of the `RoomSyncConfig` for each room.
@ -433,8 +510,6 @@ class SlidingSyncRoomLists:
if room_id in partial_state_rooms: if room_id in partial_state_rooms:
continue continue
all_rooms.add(room_id)
# Update our `relevant_room_map` with the room we're going to display # Update our `relevant_room_map` with the room we're going to display
# and need to fetch more info about. # and need to fetch more info about.
existing_room_sync_config = relevant_room_map.get(room_id) existing_room_sync_config = relevant_room_map.get(room_id)
@ -449,7 +524,7 @@ class SlidingSyncRoomLists:
# Filtered subset of `relevant_room_map` for rooms that may have updates # Filtered subset of `relevant_room_map` for rooms that may have updates
# (in the event stream) # (in the event stream)
relevant_rooms_to_send_map = await self._filter_relevant_room_to_send( relevant_rooms_to_send_map = await self._filter_relevant_rooms_to_send(
previous_connection_state, from_token, relevant_room_map previous_connection_state, from_token, relevant_room_map
) )
@ -506,6 +581,7 @@ class SlidingSyncRoomLists:
filtered_sync_room_map = await self.filter_rooms( filtered_sync_room_map = await self.filter_rooms(
sync_config.user, sync_config.user,
sync_room_map, sync_room_map,
previous_connection_state,
list_config.filters, list_config.filters,
to_token, to_token,
dm_room_ids, dm_room_ids,
@ -636,7 +712,7 @@ class SlidingSyncRoomLists:
# Filtered subset of `relevant_room_map` for rooms that may have updates # Filtered subset of `relevant_room_map` for rooms that may have updates
# (in the event stream) # (in the event stream)
relevant_rooms_to_send_map = await self._filter_relevant_room_to_send( relevant_rooms_to_send_map = await self._filter_relevant_rooms_to_send(
previous_connection_state, from_token, relevant_room_map previous_connection_state, from_token, relevant_room_map
) )
@ -651,7 +727,7 @@ class SlidingSyncRoomLists:
dm_room_ids=dm_room_ids, dm_room_ids=dm_room_ids,
) )
async def _filter_relevant_room_to_send( async def _filter_relevant_rooms_to_send(
self, self,
previous_connection_state: PerConnectionState, previous_connection_state: PerConnectionState,
from_token: Optional[StreamToken], from_token: Optional[StreamToken],
@ -915,8 +991,38 @@ class SlidingSyncRoomLists:
excluded_rooms=self.rooms_to_exclude_globally, excluded_rooms=self.rooms_to_exclude_globally,
) )
# If the user has never joined any rooms before, we can just return an empty list # We filter out unknown room versions before we try and load any
if not room_for_user_list: # metadata about the room. They shouldn't go down sync anyway, and their
# metadata may be in a broken state.
room_for_user_list = [
room_for_user
for room_for_user in room_for_user_list
if room_for_user.room_version_id in KNOWN_ROOM_VERSIONS
]
# Remove invites from ignored users
ignored_users = await self.store.ignored_users(user_id)
if ignored_users:
room_for_user_list = [
room_for_user
for room_for_user in room_for_user_list
if not (
room_for_user.membership == Membership.INVITE
and room_for_user.sender in ignored_users
)
]
(
newly_joined_room_ids,
newly_left_room_map,
) = await self._get_newly_joined_and_left_rooms(
user_id, to_token=to_token, from_token=from_token
)
# If the user has never joined any rooms before, we can just return an empty
# list. We also have to check the `newly_left_room_map` in case someone was
# state reset out of all of the rooms they were in.
if not room_for_user_list and not newly_left_room_map:
return {}, set(), set() return {}, set(), set()
# Since we fetched the users room list at some point in time after the # Since we fetched the users room list at some point in time after the
@ -934,30 +1040,22 @@ class SlidingSyncRoomLists:
else: else:
rooms_for_user[room_id] = change_room_for_user rooms_for_user[room_id] = change_room_for_user
(
newly_joined_room_ids,
newly_left_room_ids,
) = await self._get_newly_joined_and_left_rooms(
user_id, to_token=to_token, from_token=from_token
)
# Ensure we have entries for rooms that the user has been "state reset" # Ensure we have entries for rooms that the user has been "state reset"
# out of. These are rooms that appear in the `newly_left_rooms` map but # out of. These are rooms that appear in the `newly_left_rooms` map but
# aren't in the `rooms_for_user` map. # aren't in the `rooms_for_user` map.
for room_id, left_event_pos in newly_left_room_ids.items(): for room_id, newly_left_room_for_user in newly_left_room_map.items():
# If we already know about the room, it's not a state reset
if room_id in rooms_for_user: if room_id in rooms_for_user:
continue continue
rooms_for_user[room_id] = RoomsForUserStateReset( # This should be true if it's a state reset
room_id=room_id, assert newly_left_room_for_user.membership is Membership.LEAVE
event_id=None, assert newly_left_room_for_user.event_id is None
event_pos=left_event_pos, assert newly_left_room_for_user.sender is None
membership=Membership.LEAVE,
sender=None,
room_version_id=await self.store.get_room_version_id(room_id),
)
return rooms_for_user, newly_joined_room_ids, set(newly_left_room_ids) rooms_for_user[room_id] = newly_left_room_for_user
return rooms_for_user, newly_joined_room_ids, set(newly_left_room_map)
@trace @trace
async def _get_newly_joined_and_left_rooms( async def _get_newly_joined_and_left_rooms(
@ -965,7 +1063,7 @@ class SlidingSyncRoomLists:
user_id: str, user_id: str,
to_token: StreamToken, to_token: StreamToken,
from_token: Optional[StreamToken], from_token: Optional[StreamToken],
) -> Tuple[AbstractSet[str], Mapping[str, PersistedEventPosition]]: ) -> Tuple[AbstractSet[str], Mapping[str, RoomsForUserStateReset]]:
"""Fetch the sets of rooms that the user newly joined or left in the """Fetch the sets of rooms that the user newly joined or left in the
given token range. given token range.
@ -974,11 +1072,18 @@ class SlidingSyncRoomLists:
"current memberships" of the user. "current memberships" of the user.
Returns: Returns:
A 2-tuple of newly joined room IDs and a map of newly left room A 2-tuple of newly joined room IDs and a map of newly_left room
IDs to the event position the leave happened at. IDs to the `RoomsForUserStateReset` entry.
We're using `RoomsForUserStateReset`, but that doesn't necessarily mean the
user was state reset out of the room. It's just that the `event_id`/`sender`
are optional, so we can't tell the difference between the user being the
last person participating in the room and leaving it, and the user being
state reset out of the room. To actually check for a state reset, you need
to check whether a membership still exists in the room.
""" """
newly_joined_room_ids: Set[str] = set() newly_joined_room_ids: Set[str] = set()
newly_left_room_map: Dict[str, PersistedEventPosition] = {} newly_left_room_map: Dict[str, RoomsForUserStateReset] = {}
# We need to figure out the # We need to figure out the
# #
@ -1049,8 +1154,13 @@ class SlidingSyncRoomLists:
# 1) Figure out newly_left rooms (> `from_token` and <= `to_token`). # 1) Figure out newly_left rooms (> `from_token` and <= `to_token`).
if last_membership_change_in_from_to_range.membership == Membership.LEAVE: if last_membership_change_in_from_to_range.membership == Membership.LEAVE:
# 1) Mark this room as `newly_left` # 1) Mark this room as `newly_left`
newly_left_room_map[room_id] = ( newly_left_room_map[room_id] = RoomsForUserStateReset(
last_membership_change_in_from_to_range.event_pos room_id=room_id,
sender=last_membership_change_in_from_to_range.sender,
membership=Membership.LEAVE,
event_id=last_membership_change_in_from_to_range.event_id,
event_pos=last_membership_change_in_from_to_range.event_pos,
room_version_id=await self.store.get_room_version_id(room_id),
) )
# 2) Figure out `newly_joined` # 2) Figure out `newly_joined`
@ -1494,6 +1604,7 @@ class SlidingSyncRoomLists:
self, self,
user: UserID, user: UserID,
sync_room_map: Dict[str, RoomsForUserType], sync_room_map: Dict[str, RoomsForUserType],
previous_connection_state: PerConnectionState,
filters: SlidingSyncConfig.SlidingSyncList.Filters, filters: SlidingSyncConfig.SlidingSyncList.Filters,
to_token: StreamToken, to_token: StreamToken,
dm_room_ids: AbstractSet[str], dm_room_ids: AbstractSet[str],
@ -1513,6 +1624,8 @@ class SlidingSyncRoomLists:
A filtered dictionary of room IDs along with membership information in the A filtered dictionary of room IDs along with membership information in the
room at the time of `to_token`. room at the time of `to_token`.
""" """
user_id = user.to_string()
room_id_to_stripped_state_map: Dict[ room_id_to_stripped_state_map: Dict[
str, Optional[StateMap[StrippedStateEvent]] str, Optional[StateMap[StrippedStateEvent]]
] = {} ] = {}
@ -1622,12 +1735,14 @@ class SlidingSyncRoomLists:
and room_type not in filters.room_types and room_type not in filters.room_types
): ):
filtered_room_id_set.remove(room_id) filtered_room_id_set.remove(room_id)
continue
if ( if (
filters.not_room_types is not None filters.not_room_types is not None
and room_type in filters.not_room_types and room_type in filters.not_room_types
): ):
filtered_room_id_set.remove(room_id) filtered_room_id_set.remove(room_id)
continue
if filters.room_name_like is not None: if filters.room_name_like is not None:
with start_active_span("filters.room_name_like"): with start_active_span("filters.room_name_like"):
@ -1644,18 +1759,64 @@ class SlidingSyncRoomLists:
# ) # )
raise NotImplementedError() raise NotImplementedError()
# Filter by room tags according to the users account data
if filters.tags is not None or filters.not_tags is not None: if filters.tags is not None or filters.not_tags is not None:
with start_active_span("filters.tags"): with start_active_span("filters.tags"):
raise NotImplementedError() # Fetch the user tags for their rooms
room_tags = await self.store.get_tags_for_user(user_id)
room_id_to_tag_name_set: Dict[str, Set[str]] = {
room_id: set(tags.keys()) for room_id, tags in room_tags.items()
}
if filters.tags is not None:
tags_set = set(filters.tags)
filtered_room_id_set = {
room_id
for room_id in filtered_room_id_set
# Remove rooms that don't have one of the tags in the filter
if room_id_to_tag_name_set.get(room_id, set()).intersection(
tags_set
)
}
if filters.not_tags is not None:
not_tags_set = set(filters.not_tags)
filtered_room_id_set = {
room_id
for room_id in filtered_room_id_set
# Remove rooms if they have any of the tags in the filter
if not room_id_to_tag_name_set.get(room_id, set()).intersection(
not_tags_set
)
}
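Both tag filters reduce to set algebra over the tag names: keep rooms whose tags intersect `tags`, then drop rooms whose tags intersect `not_tags`. A standalone version of that core step:

```python
from typing import Dict, Iterable, Optional, Set


def filter_by_tags(
    room_ids: Set[str],
    room_id_to_tags: Dict[str, Set[str]],
    tags: Optional[Iterable[str]] = None,
    not_tags: Optional[Iterable[str]] = None,
) -> Set[str]:
    result = set(room_ids)
    if tags is not None:
        wanted = set(tags)
        # Keep only rooms carrying at least one of the wanted tags.
        result = {r for r in result if room_id_to_tags.get(r, set()) & wanted}
    if not_tags is not None:
        banned = set(not_tags)
        # Drop rooms carrying any of the banned tags.
        result = {r for r in result if not room_id_to_tags.get(r, set()) & banned}
    return result


assert filter_by_tags(
    {"!a", "!b"}, {"!a": {"m.favourite"}}, tags=["m.favourite"]
) == {"!a"}
```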
# Keep rooms that the user has been state reset out of but that we previously
# sent down this connection. We want to make sure that we send these down to
# the client regardless of filters so they find out about the state reset.
#
# We don't always have access to the state in a room after being state reset if
# no one else locally on the server is participating in the room so we patch
# these back in manually.
state_reset_out_of_room_id_set = {
room_id
for room_id in sync_room_map.keys()
if sync_room_map[room_id].event_id is None
and previous_connection_state.rooms.have_sent_room(room_id).status
!= HaveSentRoomFlag.NEVER
}
# Assemble a new sync room map but only with the `filtered_room_id_set` # Assemble a new sync room map but only with the `filtered_room_id_set`
return {room_id: sync_room_map[room_id] for room_id in filtered_room_id_set} return {
room_id: sync_room_map[room_id]
for room_id in filtered_room_id_set | state_reset_out_of_room_id_set
}
@trace @trace
async def filter_rooms_using_tables( async def filter_rooms_using_tables(
self, self,
user_id: str, user_id: str,
sync_room_map: Mapping[str, RoomsForUserSlidingSync], sync_room_map: Mapping[str, RoomsForUserSlidingSync],
previous_connection_state: PerConnectionState,
filters: SlidingSyncConfig.SlidingSyncList.Filters, filters: SlidingSyncConfig.SlidingSyncList.Filters,
to_token: StreamToken, to_token: StreamToken,
dm_room_ids: AbstractSet[str], dm_room_ids: AbstractSet[str],
@ -1670,6 +1831,7 @@ class SlidingSyncRoomLists:
filters: Filters to apply filters: Filters to apply
to_token: We filter based on the state of the room at this token to_token: We filter based on the state of the room at this token
dm_room_ids: Set of room IDs which are DMs dm_room_ids: Set of room IDs which are DMs
previous_connection_state: The per-connection state from the previous request
Returns: Returns:
A filtered dictionary of room IDs along with membership information in the A filtered dictionary of room IDs along with membership information in the
@ -1697,7 +1859,10 @@ class SlidingSyncRoomLists:
filtered_room_id_set = { filtered_room_id_set = {
room_id room_id
for room_id in filtered_room_id_set for room_id in filtered_room_id_set
if sync_room_map[room_id].is_encrypted == filters.is_encrypted # Remove rooms if we can't figure out what the encryption status is
if sync_room_map[room_id].has_known_state
# Or remove if it doesn't match the filter
and sync_room_map[room_id].is_encrypted == filters.is_encrypted
} }
# Filter for rooms that the user has been invited to # Filter for rooms that the user has been invited to
@ -1726,6 +1891,11 @@ class SlidingSyncRoomLists:
# Make a copy so we don't run into an error: `Set changed size during # Make a copy so we don't run into an error: `Set changed size during
# iteration`, when we filter out and remove items # iteration`, when we filter out and remove items
for room_id in filtered_room_id_set.copy(): for room_id in filtered_room_id_set.copy():
# Remove rooms if we can't figure out what room type it is
if not sync_room_map[room_id].has_known_state:
filtered_room_id_set.remove(room_id)
continue
room_type = sync_room_map[room_id].room_type room_type = sync_room_map[room_id].room_type
if ( if (
@ -1733,12 +1903,14 @@ class SlidingSyncRoomLists:
and room_type not in filters.room_types and room_type not in filters.room_types
): ):
filtered_room_id_set.remove(room_id) filtered_room_id_set.remove(room_id)
continue
if ( if (
filters.not_room_types is not None filters.not_room_types is not None
and room_type in filters.not_room_types and room_type in filters.not_room_types
): ):
filtered_room_id_set.remove(room_id) filtered_room_id_set.remove(room_id)
continue
if filters.room_name_like is not None: if filters.room_name_like is not None:
with start_active_span("filters.room_name_like"): with start_active_span("filters.room_name_like"):
@ -1755,84 +1927,78 @@ class SlidingSyncRoomLists:
# ) # )
raise NotImplementedError() raise NotImplementedError()
# Filter by room tags according to the users account data
if filters.tags is not None or filters.not_tags is not None: if filters.tags is not None or filters.not_tags is not None:
with start_active_span("filters.tags"): with start_active_span("filters.tags"):
raise NotImplementedError() # Fetch the user tags for their rooms
room_tags = await self.store.get_tags_for_user(user_id)
room_id_to_tag_name_set: Dict[str, Set[str]] = {
room_id: set(tags.keys()) for room_id, tags in room_tags.items()
}
if filters.tags is not None:
tags_set = set(filters.tags)
filtered_room_id_set = {
room_id
for room_id in filtered_room_id_set
# Remove rooms that don't have one of the tags in the filter
if room_id_to_tag_name_set.get(room_id, set()).intersection(
tags_set
)
}
if filters.not_tags is not None:
not_tags_set = set(filters.not_tags)
filtered_room_id_set = {
room_id
for room_id in filtered_room_id_set
# Remove rooms if they have any of the tags in the filter
if not room_id_to_tag_name_set.get(room_id, set()).intersection(
not_tags_set
)
}
# Keep rooms that the user has been state reset out of but that we previously
# sent down this connection. We want to make sure that we send these down to
# the client regardless of filters so they find out about the state reset.
#
# We don't always have access to the state in a room after being state reset if
# no one else locally on the server is participating in the room so we patch
# these back in manually.
state_reset_out_of_room_id_set = {
room_id
for room_id in sync_room_map.keys()
if sync_room_map[room_id].event_id is None
and previous_connection_state.rooms.have_sent_room(room_id).status
!= HaveSentRoomFlag.NEVER
}
# Assemble a new sync room map but only with the `filtered_room_id_set` # Assemble a new sync room map but only with the `filtered_room_id_set`
return {room_id: sync_room_map[room_id] for room_id in filtered_room_id_set} return {
room_id: sync_room_map[room_id]
@trace for room_id in filtered_room_id_set | state_reset_out_of_room_id_set
async def sort_rooms_using_tables( }
self,
sync_room_map: Mapping[str, RoomsForUserSlidingSync],
to_token: StreamToken,
) -> List[RoomsForUserSlidingSync]:
"""
Sort by `stream_ordering` of the last event that the user should see in the
room. `stream_ordering` is unique so we get a stable sort.
Args:
sync_room_map: Dictionary of room IDs to sort along with membership
information in the room at the time of `to_token`.
to_token: We sort based on the events in the room at this token (<= `to_token`)
Returns:
A sorted list of room IDs by `stream_ordering` along with membership information.
"""
# Assemble a map of room ID to the `stream_ordering` of the last activity that the
# user should see in the room (<= `to_token`)
last_activity_in_room_map: Dict[str, int] = {}
for room_id, room_for_user in sync_room_map.items():
if room_for_user.membership != Membership.JOIN:
# If the user has left/been invited/knocked/been banned from a
# room, they shouldn't see anything past that point.
#
# FIXME: It's possible that people should see beyond this point
# in invited/knocked cases if for example the room has
# `invite`/`world_readable` history visibility, see
# https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1653045932
last_activity_in_room_map[room_id] = room_for_user.event_pos.stream
# For fully-joined rooms, we find the latest activity at/before the
# `to_token`.
joined_room_positions = (
await self.store.bulk_get_last_event_pos_in_room_before_stream_ordering(
[
room_id
for room_id, room_for_user in sync_room_map.items()
if room_for_user.membership == Membership.JOIN
],
to_token.room_key,
)
)
last_activity_in_room_map.update(joined_room_positions)
return sorted(
sync_room_map.values(),
# Sort by the last activity (stream_ordering) in the room
key=lambda room_info: last_activity_in_room_map[room_info.room_id],
# We want descending order
reverse=True,
)
@trace @trace
async def sort_rooms( async def sort_rooms(
self, self,
sync_room_map: Dict[str, RoomsForUserType], sync_room_map: Dict[str, RoomsForUserType],
to_token: StreamToken, to_token: StreamToken,
limit: Optional[int] = None,
) -> List[RoomsForUserType]: ) -> List[RoomsForUserType]:
""" """
Sort by `stream_ordering` of the last event that the user should see in the Sort by `stream_ordering` of the last event that the user should see in the
room. `stream_ordering` is unique so we get a stable sort. room. `stream_ordering` is unique so we get a stable sort.
If `limit` is specified then the sort may return fewer entries than the
full room map, but will always include at least the top `limit` rooms. This
is useful as we don't always need to sort the full list, but are just
interested in the top `limit`.
Args: Args:
sync_room_map: Dictionary of room IDs to sort along with membership sync_room_map: Dictionary of room IDs to sort along with membership
information in the room at the time of `to_token`. information in the room at the time of `to_token`.
to_token: We sort based on the events in the room at this token (<= `to_token`) to_token: We sort based on the events in the room at this token (<= `to_token`)
limit: The number of rooms that we need to return from the top of the list.
Returns: Returns:
A sorted list of room IDs by `stream_ordering` along with membership information. A sorted list of room IDs by `stream_ordering` along with membership information.
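The `limit` contract ("at least the top `limit` rooms, possibly fewer than the whole list") is what makes the fast path below legal. Stripped of the cache plumbing, the underlying idea is a bounded partial sort; a sketch with `heapq` (not the code used here):

```python
import heapq
from typing import Dict, List


def top_n_rooms(last_activity: Dict[str, int], limit: int) -> List[str]:
    # nlargest is a partial sort: O(n log limit) rather than O(n log n),
    # and it never materialises a fully sorted list.
    return [
        room_id
        for _, room_id in heapq.nlargest(
            limit, ((pos, room_id) for room_id, pos in last_activity.items())
        )
    ]


assert top_n_rooms({"!a": 5, "!b": 9, "!c": 1}, limit=2) == ["!b", "!a"]
```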
@ -1842,8 +2008,23 @@ class SlidingSyncRoomLists:
# user should see in the room (<= `to_token`) # user should see in the room (<= `to_token`)
last_activity_in_room_map: Dict[str, int] = {} last_activity_in_room_map: Dict[str, int] = {}
# Same as above, except for positions that we know are in the event
# stream cache.
cached_positions: Dict[str, int] = {}
earliest_cache_position = (
self.store._events_stream_cache.get_earliest_known_position()
)
for room_id, room_for_user in sync_room_map.items(): for room_id, room_for_user in sync_room_map.items():
if room_for_user.membership != Membership.JOIN: if room_for_user.membership == Membership.JOIN:
# For joined rooms check the stream change cache.
cached_position = (
self.store._events_stream_cache.get_max_pos_of_last_change(room_id)
)
if cached_position is not None:
cached_positions[room_id] = cached_position
else:
# If the user has left/been invited/knocked/been banned from a # If the user has left/been invited/knocked/been banned from a
# room, they shouldn't see anything past that point. # room, they shouldn't see anything past that point.
# #
@ -1853,6 +2034,48 @@ class SlidingSyncRoomLists:
# https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1653045932 # https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1653045932
last_activity_in_room_map[room_id] = room_for_user.event_pos.stream last_activity_in_room_map[room_id] = room_for_user.event_pos.stream
# If the stream position is in range of the stream change cache
# we can include it.
if room_for_user.event_pos.stream > earliest_cache_position:
cached_positions[room_id] = room_for_user.event_pos.stream
# If we are only asked for the top N rooms, and we have enough from
# looking in the stream change cache, then we can return early. This
# is because the cache must include all entries above
# `.get_earliest_known_position()`.
if limit is not None and len(cached_positions) >= limit:
# ... but first we need to handle the case where the cached max
# position is greater than the to_token, in which case we do
# actually query the DB. This should happen rarely, so we can do it in
# a loop.
for room_id, position in list(cached_positions.items()):
if position > to_token.room_key.stream:
result = await self.store.get_last_event_pos_in_room_before_stream_ordering(
room_id, to_token.room_key
)
if (
result is not None
and result[1].stream > earliest_cache_position
):
# We have a stream position in the cached range.
cached_positions[room_id] = result[1].stream
else:
# No position in the range, so we remove the entry.
cached_positions.pop(room_id)
if limit is not None and len(cached_positions) >= limit:
return sorted(
(
room
for room in sync_room_map.values()
if room.room_id in cached_positions
),
# Sort by the last activity (stream_ordering) in the room
key=lambda room_info: cached_positions[room_info.room_id],
# We want descending order
reverse=True,
)
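The early return is only sound because of an invariant of the stream change cache: it knows about every change whose position is strictly above `get_earliest_known_position()`, so a room missing from `cached_positions` cannot outrank one that is present. A toy model of that invariant (names assumed, greatly simplified):

```python
from typing import Dict, Optional


class ToyStreamChangeCache:
    """Toy model: the cache is complete for positions strictly above
    `earliest_known`, and knows nothing reliable at or below it."""

    def __init__(self, earliest_known: int) -> None:
        self._earliest_known = earliest_known
        self._last_change: Dict[str, int] = {}

    def get_earliest_known_position(self) -> int:
        return self._earliest_known

    def note_change(self, key: str, pos: int) -> None:
        assert pos > self._earliest_known
        self._last_change[key] = max(pos, self._last_change.get(key, pos))

    def get_max_pos_of_last_change(self, key: str) -> Optional[int]:
        # None means "no change above earliest_known", not "no change ever".
        return self._last_change.get(key)
```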
# For fully-joined rooms, we find the latest activity at/before the # For fully-joined rooms, we find the latest activity at/before the
# `to_token`. # `to_token`.
joined_room_positions = ( joined_room_positions = (

View File

@ -1310,7 +1310,6 @@ class SyncHandler:
for e in await sync_config.filter_collection.filter_room_state( for e in await sync_config.filter_collection.filter_room_state(
list(state.values()) list(state.values())
) )
if e.type != EventTypes.Aliases # until MSC2261 or alternative solution
} }
async def _compute_state_delta_for_full_sync( async def _compute_state_delta_for_full_sync(

View File

@ -19,7 +19,6 @@
# #
# #
import abc import abc
import cgi
import codecs import codecs
import logging import logging
import random import random
@ -792,7 +791,7 @@ class MatrixFederationHttpClient:
url_str, url_str,
_flatten_response_never_received(e), _flatten_response_never_received(e),
) )
body = None body = b""
exc = HttpResponseException( exc = HttpResponseException(
response.code, response_phrase, body response.code, response_phrase, body
@ -1813,8 +1812,9 @@ def check_content_type_is(headers: Headers, expected_content_type: str) -> None:
) )
c_type = content_type_headers[0].decode("ascii") # only the first header c_type = content_type_headers[0].decode("ascii") # only the first header
val, options = cgi.parse_header(c_type) # Extract the 'essence' of the mimetype, removing any parameter
if val != expected_content_type: c_type_parsed = c_type.split(";", 1)[0].strip()
if c_type_parsed != expected_content_type:
raise RequestSendFailed( raise RequestSendFailed(
RuntimeError( RuntimeError(
f"Remote server sent Content-Type header of '{c_type}', not '{expected_content_type}'", f"Remote server sent Content-Type header of '{c_type}', not '{expected_content_type}'",

View File

@ -37,19 +37,17 @@ from typing import (
overload, overload,
) )
from synapse._pydantic_compat import HAS_PYDANTIC_V2
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import BaseModel, MissingError, PydanticValueError, ValidationError
from pydantic.v1.error_wrappers import ErrorWrapper
else:
from pydantic import BaseModel, MissingError, PydanticValueError, ValidationError
from pydantic.error_wrappers import ErrorWrapper
from typing_extensions import Literal from typing_extensions import Literal
from twisted.web.server import Request from twisted.web.server import Request
from synapse._pydantic_compat import (
BaseModel,
ErrorWrapper,
MissingError,
PydanticValueError,
ValidationError,
)
from synapse.api.errors import Codes, SynapseError from synapse.api.errors import Codes, SynapseError
from synapse.http import redact_uri from synapse.http import redact_uri
from synapse.http.server import HttpServer from synapse.http.server import HttpServer

View File

@ -86,6 +86,7 @@ INLINE_CONTENT_TYPES = [
"text/csv", "text/csv",
"application/json", "application/json",
"application/ld+json", "application/ld+json",
"application/pdf",
# We allow some media files deemed as safe, which comes from the matrix-react-sdk. # We allow some media files deemed as safe, which comes from the matrix-react-sdk.
# https://github.com/matrix-org/matrix-react-sdk/blob/a70fcfd0bcf7f8c85986da18001ea11597989a7c/src/utils/blobs.ts#L51 # https://github.com/matrix-org/matrix-react-sdk/blob/a70fcfd0bcf7f8c85986da18001ea11597989a7c/src/utils/blobs.ts#L51
# SVGs are *intentionally* omitted. # SVGs are *intentionally* omitted.
@ -229,7 +230,9 @@ def add_file_headers(
# recommend caching as it's sensitive or private - or at least # recommend caching as it's sensitive or private - or at least
# select private. don't bother setting Expires as all our # select private. don't bother setting Expires as all our
# clients are smart enough to be happy with Cache-Control # clients are smart enough to be happy with Cache-Control
request.setHeader(b"Cache-Control", b"public,max-age=86400,s-maxage=86400") request.setHeader(
b"Cache-Control", b"public,immutable,max-age=86400,s-maxage=86400"
)
if file_size is not None: if file_size is not None:
request.setHeader(b"Content-Length", b"%d" % (file_size,)) request.setHeader(b"Content-Length", b"%d" % (file_size,))

View File

@ -65,7 +65,7 @@ class ThumbnailError(Exception):
class Thumbnailer: class Thumbnailer:
FORMATS = {"image/jpeg": "JPEG", "image/png": "PNG"} FORMATS = {"image/jpeg": "JPEG", "image/png": "PNG", "image/webp": "WEBP"}
@staticmethod @staticmethod
def set_limits(max_image_pixels: int) -> None: def set_limits(max_image_pixels: int) -> None:
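Registering `image/webp` in `FORMATS` makes webp uploads thumbnail as webp instead of being re-encoded to JPEG, which has no alpha channel and would flatten transparency. A rough Pillow sketch of the difference (assumes Pillow; not the actual `Thumbnailer` logic):

```python
from io import BytesIO

from PIL import Image

FORMATS = {"image/jpeg": "JPEG", "image/png": "PNG", "image/webp": "WEBP"}


def thumbnail(data: bytes, content_type: str, size: int = 96) -> bytes:
    img = Image.open(BytesIO(data))
    img.thumbnail((size, size))  # in-place, preserves aspect ratio
    out = BytesIO()
    save_format = FORMATS[content_type]
    if save_format == "JPEG":
        # JPEG cannot represent transparency, so alpha would be lost here;
        # keeping webp as webp (or png as png) avoids that.
        img = img.convert("RGB")
    img.save(out, format=save_format)
    return out.getvalue()
```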

View File

@ -140,13 +140,6 @@ class HttpPusher(Pusher):
url = self.data["url"] url = self.data["url"]
if not isinstance(url, str): if not isinstance(url, str):
raise PusherConfigException("'url' must be a string") raise PusherConfigException("'url' must be a string")
url_parts = urllib.parse.urlparse(url)
# Note that the specification also says the scheme must be HTTPS, but
# it isn't up to the homeserver to verify that.
if url_parts.path != "/_matrix/push/v1/notify":
raise PusherConfigException(
"'url' must have a path of '/_matrix/push/v1/notify'"
)
self.url = url self.url = url
self.http_client = hs.get_proxied_blocklisted_http_client() self.http_client = hs.get_proxied_blocklisted_http_client()

View File

@ -23,6 +23,7 @@ from typing import TYPE_CHECKING
from synapse.http.server import JsonResource from synapse.http.server import JsonResource
from synapse.replication.http import ( from synapse.replication.http import (
account_data, account_data,
delayed_events,
devices, devices,
federation, federation,
login, login,
@ -64,3 +65,4 @@ class ReplicationRestResource(JsonResource):
login.register_servlets(hs, self) login.register_servlets(hs, self)
register.register_servlets(hs, self) register.register_servlets(hs, self)
devices.register_servlets(hs, self) devices.register_servlets(hs, self)
delayed_events.register_servlets(hs, self)

View File

@ -0,0 +1,48 @@
import logging
from typing import TYPE_CHECKING, Dict, Optional, Tuple
from twisted.web.server import Request
from synapse.http.server import HttpServer
from synapse.replication.http._base import ReplicationEndpoint
from synapse.types import JsonDict, JsonMapping
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class ReplicationAddedDelayedEventRestServlet(ReplicationEndpoint):
"""Handle a delayed event being added by another worker.
Request format:
POST /_synapse/replication/added_delayed_event/
{"next_send_ts": 123}
"""
NAME = "added_delayed_event"
PATH_ARGS = ()
CACHE = False
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.handler = hs.get_delayed_events_handler()
@staticmethod
async def _serialize_payload(next_send_ts: int) -> JsonDict: # type: ignore[override]
return {"next_send_ts": next_send_ts}
async def _handle_request( # type: ignore[override]
self, request: Request, content: JsonDict
) -> Tuple[int, Dict[str, Optional[JsonMapping]]]:
self.handler.on_added(int(content["next_send_ts"]))
return 200, {}
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
ReplicationAddedDelayedEventRestServlet(hs).register(http_server)

View File

@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3. # This file is licensed under the Affero General Public License (AGPL) version 3.
# #
# Copyright 2014-2016 OpenMarket Ltd # Copyright 2014-2016 OpenMarket Ltd
# Copyright (C) 2023 New Vector, Ltd # Copyright (C) 2023-2024 New Vector, Ltd
# #
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as # it under the terms of the GNU Affero General Public License as
@ -31,6 +31,7 @@ from synapse.rest.client import (
auth, auth,
auth_issuer, auth_issuer,
capabilities, capabilities,
delayed_events,
devices, devices,
directory, directory,
events, events,
@ -81,6 +82,7 @@ CLIENT_SERVLET_FUNCTIONS: Tuple[RegisterServletsFunc, ...] = (
room.register_deprecated_servlets, room.register_deprecated_servlets,
events.register_servlets, events.register_servlets,
room.register_servlets, room.register_servlets,
delayed_events.register_servlets,
login.register_servlets, login.register_servlets,
profile.register_servlets, profile.register_servlets,
presence.register_servlets, presence.register_servlets,

View File

@ -98,6 +98,8 @@ from synapse.rest.admin.users import (
DeactivateAccountRestServlet, DeactivateAccountRestServlet,
PushersRestServlet, PushersRestServlet,
RateLimitRestServlet, RateLimitRestServlet,
RedactUser,
RedactUserStatus,
ResetPasswordRestServlet, ResetPasswordRestServlet,
SearchUsersRestServlet, SearchUsersRestServlet,
ShadowBanRestServlet, ShadowBanRestServlet,
@ -319,6 +321,8 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
UserReplaceMasterCrossSigningKeyRestServlet(hs).register(http_server) UserReplaceMasterCrossSigningKeyRestServlet(hs).register(http_server)
UserByExternalId(hs).register(http_server) UserByExternalId(hs).register(http_server)
UserByThreePid(hs).register(http_server) UserByThreePid(hs).register(http_server)
RedactUser(hs).register(http_server)
RedactUserStatus(hs).register(http_server)
DeviceRestServlet(hs).register(http_server) DeviceRestServlet(hs).register(http_server)
DevicesRestServlet(hs).register(http_server) DevicesRestServlet(hs).register(http_server)

View File

@ -27,7 +27,7 @@ from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union
import attr import attr
from synapse._pydantic_compat import HAS_PYDANTIC_V2 from synapse._pydantic_compat import StrictBool
from synapse.api.constants import Direction, UserTypes from synapse.api.constants import Direction, UserTypes
from synapse.api.errors import Codes, NotFoundError, SynapseError from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.http.servlet import ( from synapse.http.servlet import (
@ -50,17 +50,12 @@ from synapse.rest.admin._base import (
from synapse.rest.client._base import client_patterns from synapse.rest.client._base import client_patterns
from synapse.storage.databases.main.registration import ExternalIDReuseException from synapse.storage.databases.main.registration import ExternalIDReuseException
from synapse.storage.databases.main.stats import UserSortOrder from synapse.storage.databases.main.stats import UserSortOrder
from synapse.types import JsonDict, JsonMapping, UserID from synapse.types import JsonDict, JsonMapping, TaskStatus, UserID
from synapse.types.rest import RequestBodyModel from synapse.types.rest import RequestBodyModel
if TYPE_CHECKING: if TYPE_CHECKING:
from synapse.server import HomeServer from synapse.server import HomeServer
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import StrictBool
else:
from pydantic import StrictBool
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -1410,3 +1405,100 @@ class UserByThreePid(RestServlet):
raise NotFoundError("User not found") raise NotFoundError("User not found")
return HTTPStatus.OK, {"user_id": user_id} return HTTPStatus.OK, {"user_id": user_id}
class RedactUser(RestServlet):
"""
Redact all of a given user's events in the given rooms or, if an empty list is
provided, in all rooms the user is a member of. Kicks off a background process
and returns an ID that can be used to check on the progress of the redaction.
"""
PATTERNS = admin_patterns("/user/(?P<user_id>[^/]*)/redact")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastores().main
self.admin_handler = hs.get_admin_handler()
async def on_POST(
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester)
body = parse_json_object_from_request(request, allow_empty_body=True)
rooms = body.get("rooms")
if rooms is None:
raise SynapseError(
HTTPStatus.BAD_REQUEST, "Must provide a value for rooms."
)
reason = body.get("reason")
if reason:
if not isinstance(reason, str):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"If a reason is provided it must be a string.",
)
limit = body.get("limit")
if limit:
if not isinstance(limit, int) or limit <= 0:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"If limit is provided it must be a non-negative integer greater than 0.",
)
if not rooms:
rooms = await self._store.get_rooms_for_user(user_id)
redact_id = await self.admin_handler.start_redact_events(
user_id, list(rooms), requester.serialize(), reason, limit
)
return HTTPStatus.OK, {"redact_id": redact_id}
class RedactUserStatus(RestServlet):
"""
Check on the progress of the redaction request represented by the provided ID, returning
the status of the process and a dict of events that were unable to be redacted, if any
"""
PATTERNS = admin_patterns("/user/redact_status/(?P<redact_id>[^/]*)$")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self.admin_handler = hs.get_admin_handler()
async def on_GET(
self, request: SynapseRequest, redact_id: str
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self._auth, request)
task = await self.admin_handler.get_redact_task(redact_id)
if task:
if task.status == TaskStatus.ACTIVE:
return HTTPStatus.OK, {"status": TaskStatus.ACTIVE}
elif task.status == TaskStatus.COMPLETE:
assert task.result is not None
failed_redactions = task.result.get("failed_redactions")
return HTTPStatus.OK, {
"status": TaskStatus.COMPLETE,
"failed_redactions": failed_redactions if failed_redactions else {},
}
elif task.status == TaskStatus.SCHEDULED:
return HTTPStatus.OK, {"status": TaskStatus.SCHEDULED}
else:
return HTTPStatus.OK, {
"status": TaskStatus.FAILED,
"error": (
task.error
if task.error
else "Unknown error, please check the logs for more information."
),
}
else:
raise NotFoundError("redact id '%s' not found" % redact_id)

View File

@ -24,18 +24,12 @@ import random
from typing import TYPE_CHECKING, List, Optional, Tuple from typing import TYPE_CHECKING, List, Optional, Tuple
from urllib.parse import urlparse from urllib.parse import urlparse
from synapse._pydantic_compat import HAS_PYDANTIC_V2
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import StrictBool, StrictStr, constr
else:
from pydantic import StrictBool, StrictStr, constr
import attr
from typing_extensions import Literal
from twisted.web.server import Request
from synapse._pydantic_compat import StrictBool, StrictStr, constr
from synapse.api.constants import LoginType
from synapse.api.errors import (
Codes,

View File

@@ -0,0 +1,97 @@
# This module contains REST servlets to do with delayed events: /delayed_events/<paths>
import logging
from enum import Enum
from http import HTTPStatus
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import Codes, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.rest.client._base import client_patterns
from synapse.types import JsonDict
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class _UpdateDelayedEventAction(Enum):
CANCEL = "cancel"
RESTART = "restart"
SEND = "send"
class UpdateDelayedEventServlet(RestServlet):
PATTERNS = client_patterns(
r"/org\.matrix\.msc4140/delayed_events/(?P<delay_id>[^/]+)$",
releases=(),
)
CATEGORY = "Delayed event management requests"
def __init__(self, hs: "HomeServer"):
super().__init__()
self.auth = hs.get_auth()
self.delayed_events_handler = hs.get_delayed_events_handler()
async def on_POST(
self, request: SynapseRequest, delay_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
body = parse_json_object_from_request(request)
try:
action = str(body["action"])
except KeyError:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"'action' is missing",
Codes.MISSING_PARAM,
)
try:
enum_action = _UpdateDelayedEventAction(action)
except ValueError:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"'action' is not one of "
+ ", ".join(f"'{m.value}'" for m in _UpdateDelayedEventAction),
Codes.INVALID_PARAM,
)
if enum_action == _UpdateDelayedEventAction.CANCEL:
await self.delayed_events_handler.cancel(requester, delay_id)
elif enum_action == _UpdateDelayedEventAction.RESTART:
await self.delayed_events_handler.restart(requester, delay_id)
elif enum_action == _UpdateDelayedEventAction.SEND:
await self.delayed_events_handler.send(requester, delay_id)
return 200, {}
class DelayedEventsServlet(RestServlet):
PATTERNS = client_patterns(
r"/org\.matrix\.msc4140/delayed_events$",
releases=(),
)
CATEGORY = "Delayed event management requests"
def __init__(self, hs: "HomeServer"):
super().__init__()
self.auth = hs.get_auth()
self.delayed_events_handler = hs.get_delayed_events_handler()
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
# TODO: Support Pagination stream API ("from" query parameter)
delayed_events = await self.delayed_events_handler.get_all_for_user(requester)
ret = {"delayed_events": delayed_events}
return 200, ret
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
# The following can't currently be instantiated on workers.
if hs.config.worker.worker_app is None:
UpdateDelayedEventServlet(hs).register(http_server)
DelayedEventsServlet(hs).register(http_server)
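Read together with the room servlet changes further down, a client-side sketch of the MSC4140 flow could look like this. The base URL, token, room ID, and event content are placeholders; the unstable prefix follows from client_patterns with releases=():

    import requests  # assumption: requests available

    BASE = "https://homeserver.example/_matrix/client"  # placeholder
    HEADERS = {"Authorization": "Bearer <access_token>"}  # placeholder

    # Schedule a state event for 15 minutes from now via the normal state
    # endpoint plus the org.matrix.msc4140.delay query parameter (in ms).
    resp = requests.put(
        f"{BASE}/v3/rooms/!room:example.com/state/m.room.topic/",
        params={"org.matrix.msc4140.delay": 15 * 60 * 1000},
        headers=HEADERS,
        json={"topic": "Meeting in progress"},
    )
    delay_id = resp.json()["delay_id"]

    # List pending delayed events for this user.
    pending = requests.get(
        f"{BASE}/unstable/org.matrix.msc4140/delayed_events",
        headers=HEADERS,
    ).json()["delayed_events"]

    # Cancel it; "restart" and "send" are the other valid actions.
    requests.post(
        f"{BASE}/unstable/org.matrix.msc4140/delayed_events/{delay_id}",
        headers=HEADERS,
        json={"action": "cancel"},
    )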

View File

@@ -24,13 +24,7 @@ import logging
from http import HTTPStatus
from typing import TYPE_CHECKING, List, Optional, Tuple
from synapse._pydantic_compat import HAS_PYDANTIC_V2
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import Extra, StrictStr
else:
from pydantic import Extra, StrictStr
from synapse._pydantic_compat import Extra, StrictStr
from synapse.api import errors
from synapse.api.errors import NotFoundError, SynapseError, UnrecognizedRequestError
from synapse.handlers.device import DeviceHandler

View File

@@ -22,17 +22,11 @@
import logging
from typing import TYPE_CHECKING, List, Optional, Tuple
from synapse._pydantic_compat import HAS_PYDANTIC_V2
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import StrictStr
else:
from pydantic import StrictStr
from typing_extensions import Literal
from twisted.web.server import Request
from synapse._pydantic_compat import StrictStr
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import (

View File

@@ -138,7 +138,7 @@ class ThumbnailResource(RestServlet):
) -> None:
# Validate the server name, raising if invalid
parse_and_validate_server_name(server_name)
await self.auth.get_user_by_req(request)
await self.auth.get_user_by_req(request, allow_guest=True)
set_cors_headers(request)
set_corp_headers(request)
@@ -229,7 +229,7 @@ class DownloadResource(RestServlet):
# Validate the server name, raising if invalid
parse_and_validate_server_name(server_name)
await self.auth.get_user_by_req(request)
await self.auth.get_user_by_req(request, allow_guest=True)
set_cors_headers(request)
set_corp_headers(request)

View File

@@ -80,12 +80,16 @@ class ReadMarkerRestServlet(RestServlet):
# TODO Add validation to reject non-string event IDs.
if not event_id:
continue
extra_content = body.get(
receipt_type.replace("m.", "com.beeper.") + ".extra", None
)
if receipt_type == ReceiptTypes.FULLY_READ:
await self.read_marker_handler.received_client_read_marker(
room_id,
user_id=requester.user.to_string(),
event_id=event_id,
extra_content=extra_content,
)
else:
await self.receipts_handler.received_client_receipt(
@@ -95,6 +99,7 @@ class ReadMarkerRestServlet(RestServlet):
event_id=event_id,
# Setting the thread ID is not possible with the /read_markers endpoint.
thread_id=None,
extra_content=extra_content,
)
return 200, {}

View File

@@ -73,7 +73,7 @@ class ReceiptRestServlet(RestServlet):
f"Receipt type must be {', '.join(self._known_receipt_types)}",
)
body = parse_json_object_from_request(request)
body = parse_json_object_from_request(request, allow_empty_body=False)
# Pull the thread ID, if one exists.
thread_id = None
@@ -110,6 +110,7 @@ class ReceiptRestServlet(RestServlet):
room_id,
user_id=requester.user.to_string(),
event_id=event_id,
extra_content=body,
)
else:
await self.receipts_handler.received_client_receipt(
@@ -118,6 +119,7 @@ class ReceiptRestServlet(RestServlet):
user_id=requester.user,
event_id=event_id,
thread_id=thread_id,
extra_content=body,
)
return 200, {}

View File

@@ -23,7 +23,7 @@ import logging
from http import HTTPStatus
from typing import TYPE_CHECKING, Tuple
from synapse._pydantic_compat import HAS_PYDANTIC_V2
from synapse._pydantic_compat import StrictStr
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import (
@@ -40,10 +40,6 @@ from ._base import client_patterns
if TYPE_CHECKING:
from synapse.server import HomeServer
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import StrictStr
else:
from pydantic import StrictStr
logger = logging.getLogger(__name__)

View File

@@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2014-2016 OpenMarket Ltd
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -195,7 +195,9 @@ class RoomStateEventRestServlet(RestServlet):
self.event_creation_handler = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()
self.message_handler = hs.get_message_handler()
self.delayed_events_handler = hs.get_delayed_events_handler()
self.auth = hs.get_auth()
self._max_event_delay_ms = hs.config.server.max_event_delay_ms
def register(self, http_server: HttpServer) -> None:
# /rooms/$roomid/state/$eventtype
@@ -291,6 +293,22 @@
if requester.app_service:
origin_server_ts = parse_integer(request, "ts")
delay = _parse_request_delay(request, self._max_event_delay_ms)
if delay is not None:
delay_id = await self.delayed_events_handler.add(
requester,
room_id=room_id,
event_type=event_type,
state_key=state_key,
origin_server_ts=origin_server_ts,
content=content,
delay=delay,
)
set_tag("delay_id", delay_id)
ret = {"delay_id": delay_id}
return 200, ret
try:
if event_type == EventTypes.Member:
membership = content.get("membership", None)
@@ -341,7 +359,10 @@ class RoomSendEventRestServlet(TransactionRestServlet):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.event_creation_handler = hs.get_event_creation_handler()
self.delayed_events_handler = hs.get_delayed_events_handler()
self.auth = hs.get_auth()
self._max_event_delay_ms = hs.config.server.max_event_delay_ms
self.hs = hs
def register(self, http_server: HttpServer) -> None:
# /rooms/$roomid/send/$event_type[/$txn_id]
@@ -358,6 +379,29 @@
) -> Tuple[int, JsonDict]:
content = parse_json_object_from_request(request)
origin_server_ts = None
if (
requester.app_service
or requester.user.to_string() in self.hs.config.meow.timestamp_override
):
origin_server_ts = parse_integer(request, "ts")
delay = _parse_request_delay(request, self._max_event_delay_ms)
if delay is not None:
delay_id = await self.delayed_events_handler.add(
requester,
room_id=room_id,
event_type=event_type,
state_key=None,
origin_server_ts=origin_server_ts,
content=content,
delay=delay,
)
set_tag("delay_id", delay_id)
ret = {"delay_id": delay_id}
return 200, ret
event_dict: JsonDict = {
"type": event_type,
"content": content,
@@ -365,10 +409,8 @@ class RoomSendEventRestServlet(TransactionRestServlet):
"sender": requester.user.to_string(),
}
if requester.app_service:
origin_server_ts = parse_integer(request, "ts")
if origin_server_ts is not None:
event_dict["origin_server_ts"] = origin_server_ts
if origin_server_ts is not None:
event_dict["origin_server_ts"] = origin_server_ts
try:
(
@@ -411,6 +453,49 @@
)
def _parse_request_delay(
request: SynapseRequest,
max_delay: Optional[int],
) -> Optional[int]:
"""Parses from the request string the delay parameter for
delayed event requests, and checks it for correctness.
Args:
request: the twisted HTTP request.
max_delay: the maximum allowed value of the delay parameter,
or None if no delay parameter is allowed.
Returns:
The value of the requested delay, or None if it was absent.
Raises:
SynapseError: if the delay parameter is present and forbidden,
or if it exceeds the maximum allowed value.
"""
delay = parse_integer(request, "org.matrix.msc4140.delay")
if delay is None:
return None
if max_delay is None:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Delayed events are not supported on this server",
Codes.UNKNOWN,
{
"org.matrix.msc4140.errcode": "M_MAX_DELAY_UNSUPPORTED",
},
)
if delay > max_delay:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"The requested delay exceeds the allowed maximum.",
Codes.UNKNOWN,
{
"org.matrix.msc4140.errcode": "M_MAX_DELAY_EXCEEDED",
"org.matrix.msc4140.max_delay": max_delay,
},
)
return delay
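To illustrate the two failure modes, the error bodies a client might see look roughly as follows. This is a sketch assembled from the SynapseError calls above (Codes.UNKNOWN renders as M_UNKNOWN, and the extra dict is assumed to be merged into the response body); the max_delay figure is an example value:

    max_delay_unsupported = {
        "errcode": "M_UNKNOWN",
        "error": "Delayed events are not supported on this server",
        "org.matrix.msc4140.errcode": "M_MAX_DELAY_UNSUPPORTED",
    }

    max_delay_exceeded = {
        "errcode": "M_UNKNOWN",
        "error": "The requested delay exceeds the allowed maximum.",
        "org.matrix.msc4140.errcode": "M_MAX_DELAY_EXCEEDED",
        "org.matrix.msc4140.max_delay": 86400000,  # example: 24 hours in ms
    }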
# TODO: Needs unit testing for room ID + alias joins
class JoinRoomAliasServlet(ResolveRoomIdMixin, TransactionRestServlet):
CATEGORY = "Event sending requests"

View File

@@ -1044,7 +1044,7 @@ class SlidingSyncRestServlet(RestServlet):
serialized_rooms[room_id]["heroes"] = serialized_heroes
# We should only include the `initial` key if it's `True` to save bandwidth.
# The absense of this flag means `False`.
# The absence of this flag means `False`.
if room_result.initial:
serialized_rooms[room_id]["initial"] = room_result.initial

View File

@@ -4,7 +4,7 @@
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright 2017 Vector Creations Ltd
# Copyright 2016 OpenMarket Ltd
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -171,6 +171,8 @@ class VersionsRestServlet(RestServlet):
is not None
)
),
# MSC4140: Delayed events
"org.matrix.msc4140": True,
# MSC4151: Report room API (Client-Server API)
"org.matrix.msc4151": self.config.experimental.msc4151_enabled,
# Simplified sliding sync
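Since the flag is unconditionally True, clients can feature-detect delayed events before sending any. A small sketch (hostname is a placeholder):

    import requests  # assumption: requests available

    versions = requests.get(
        "https://homeserver.example/_matrix/client/versions"
    ).json()

    # Unstable features are advertised under the unstable_features key.
    if versions.get("unstable_features", {}).get("org.matrix.msc4140"):
        print("Server supports MSC4140 delayed events")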

View File

@@ -23,17 +23,11 @@ import logging
import re
from typing import TYPE_CHECKING, Dict, Mapping, Optional, Set, Tuple
from synapse._pydantic_compat import HAS_PYDANTIC_V2
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import Extra, StrictInt, StrictStr
else:
from pydantic import Extra, StrictInt, StrictStr
from signedjson.sign import sign_json
from twisted.web.server import Request
from synapse._pydantic_compat import Extra, StrictInt, StrictStr
from synapse.crypto.keyring import ServerKeyFetcher
from synapse.http.server import HttpServer
from synapse.http.servlet import (

View File

@@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2021 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -68,6 +68,7 @@ from synapse.handlers.appservice import ApplicationServicesHandler
from synapse.handlers.auth import AuthHandler, PasswordAuthProvider
from synapse.handlers.cas import CasHandler
from synapse.handlers.deactivate_account import DeactivateAccountHandler
from synapse.handlers.delayed_events import DelayedEventsHandler
from synapse.handlers.device import DeviceHandler, DeviceWorkerHandler
from synapse.handlers.devicemessage import DeviceMessageHandler
from synapse.handlers.directory import DirectoryHandler
@@ -251,6 +252,7 @@ class HomeServer(metaclass=abc.ABCMeta):
"account_validity",
"auth",
"deactivate_account",
"delayed_events",
"message", "message",
"pagination", "pagination",
"profile", "profile",
@@ -964,3 +966,7 @@
register_threadpool("media", media_threadpool)
return media_threadpool
@cache_in_self
def get_delayed_events_handler(self) -> DelayedEventsHandler:
return DelayedEventsHandler(self)
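The getter follows the HomeServer convention: @cache_in_self memoizes construction so every caller shares one DelayedEventsHandler. A stripped-down sketch of that memoization idea (a toy stand-in, not Synapse's actual decorator):

    from typing import Any, Callable, TypeVar

    T = TypeVar("T")

    def cache_in_self(builder: Callable[[Any], T]) -> Callable[[Any], T]:
        # Store the built object on the instance under a private attribute
        # derived from the builder's name, so it is only built once.
        attr = "_" + builder.__name__

        def getter(self: Any) -> T:
            if not hasattr(self, attr):
                setattr(self, attr, builder(self))
            return getattr(self, attr)

        return getter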

View File

@@ -136,6 +136,7 @@ class SQLBaseStore(metaclass=ABCMeta):
self._attempt_to_invalidate_cache("get_partial_current_state_ids", (room_id,))
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
def _invalidate_state_caches_all(self, room_id: str) -> None:
"""Invalidates caches that are based on the current state, but does

View File

@@ -40,7 +40,7 @@ from typing import (
import attr
from synapse._pydantic_compat import HAS_PYDANTIC_V2
from synapse._pydantic_compat import BaseModel
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.engines import PostgresEngine
from synapse.storage.types import Connection, Cursor
@@ -49,11 +49,6 @@ from synapse.util import Clock, json_encoder
from . import engines
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import BaseModel
else:
from pydantic import BaseModel
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.database import (
@@ -495,6 +490,12 @@ class BackgroundUpdater:
if self._all_done:
return True
# We now check if we have completed all pending background updates. We
# do this as once this returns True then it will set `self._all_done`
# and we can skip checking the database in future.
if await self.has_completed_background_updates():
return True
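An editorial aside on the early return just added: has_completed_background_updates can set self._all_done once the background_updates table drains, so later calls skip the database entirely. A toy sketch of the memoized-completion pattern (hypothetical helper names):

    class ToyUpdater:
        def __init__(self, db) -> None:
            self._db = db
            self._all_done = False

        async def has_completed_all(self) -> bool:
            if self._all_done:
                return True  # memoized: no query needed
            # Hypothetical query helper; done once the queue table is empty.
            remaining = await self._db.count_rows("background_updates")
            if remaining == 0:
                self._all_done = True
                return True
            return False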
rows = await self.db_pool.simple_select_many_batch(
table="background_updates",
column="update_name",

View File

@@ -45,6 +45,7 @@ class StorageControllers:
# rewrite all the existing code to split it into high vs low level
# interfaces.
self.main = stores.main
self.hs = hs
self.purge_events = PurgeEventsStorageController(hs, stores)
self.state = StateStorageController(hs, stores)

View File

@@ -3,7 +3,7 @@
#
# Copyright 2019-2021 The Matrix.org Foundation C.I.C.
# Copyright 2014-2016 OpenMarket Ltd
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -44,6 +44,7 @@ from .appservice import ApplicationServiceStore, ApplicationServiceTransactionSt
from .cache import CacheInvalidationWorkerStore
from .censor_events import CensorEventsStore
from .client_ips import ClientIpWorkerStore
from .delayed_events import DelayedEventsStore
from .deviceinbox import DeviceInboxStore
from .devices import DeviceStore
from .directory import DirectoryStore
@@ -158,6 +159,7 @@ class DataStore(
SessionStore,
TaskSchedulerWorkerStore,
SlidingSyncStore,
DelayedEventsStore,
):
def __init__(
self,

View File

@@ -177,7 +177,7 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
def get_room_account_data_for_user_txn(
txn: LoggingTransaction,
) -> Dict[str, Dict[str, JsonDict]]:
) -> Dict[str, Dict[str, JsonMapping]]:
# The 'content != '{}' condition below prevents us from using
# `simple_select_list_txn` here, as it doesn't support conditions
# other than 'equals'.
@@ -194,7 +194,7 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
txn.execute(sql, (user_id,))
by_room: Dict[str, Dict[str, JsonDict]] = {}
by_room: Dict[str, Dict[str, JsonMapping]] = {}
for room_id, account_data_type, content in txn:
room_data = by_room.setdefault(room_id, {})
@@ -394,7 +394,7 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
async def get_updated_global_account_data_for_user(
self, user_id: str, stream_id: int
) -> Mapping[str, JsonMapping]:
) -> Dict[str, JsonMapping]:
"""Get all the global account_data that's changed for a user.
Args:
@@ -407,7 +407,7 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
def get_updated_global_account_data_for_user(
txn: LoggingTransaction,
) -> Dict[str, JsonDict]:
) -> Dict[str, JsonMapping]:
sql = """
SELECT account_data_type, content FROM account_data
WHERE user_id = ? AND stream_id > ?
@@ -429,7 +429,7 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
async def get_updated_room_account_data_for_user(
self, user_id: str, stream_id: int
) -> Dict[str, Dict[str, JsonDict]]:
) -> Dict[str, Dict[str, JsonMapping]]:
"""Get all the room account_data that's changed for a user.
Args:
@@ -442,14 +442,14 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
def get_updated_room_account_data_for_user_txn(
txn: LoggingTransaction,
) -> Dict[str, Dict[str, JsonDict]]:
) -> Dict[str, Dict[str, JsonMapping]]:
sql = """
SELECT room_id, account_data_type, content FROM room_account_data
WHERE user_id = ? AND stream_id > ?
"""
txn.execute(sql, (user_id, stream_id))
account_data_by_room: Dict[str, Dict[str, JsonDict]] = {}
account_data_by_room: Dict[str, Dict[str, JsonMapping]] = {}
for row in txn:
room_account_data = account_data_by_room.setdefault(row[0], {})
room_account_data[row[1]] = db_to_json(row[2])
@@ -467,6 +467,56 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)
get_updated_room_account_data_for_user_txn,
)
async def get_updated_room_account_data_for_user_for_room(
self,
# Since there are multiple arguments with the same type, force keyword arguments
# so people don't accidentally swap the order
*,
user_id: str,
room_id: str,
from_stream_id: int,
to_stream_id: int,
) -> Dict[str, JsonMapping]:
"""Get the room account_data that's changed for a user in a room.
(> `from_stream_id` and <= `to_stream_id`)
Args:
user_id: The user to get the account_data for.
room_id: The room to check
from_stream_id: The point in the stream to fetch from
to_stream_id: The point in the stream to fetch to
Returns:
A dict of the room account data.
"""
def get_updated_room_account_data_for_user_for_room_txn(
txn: LoggingTransaction,
) -> Dict[str, JsonMapping]:
sql = """
SELECT account_data_type, content FROM room_account_data
WHERE user_id = ? AND room_id = ? AND stream_id > ? AND stream_id <= ?
"""
txn.execute(sql, (user_id, room_id, from_stream_id, to_stream_id))
room_account_data: Dict[str, JsonMapping] = {}
for row in txn:
room_account_data[row[0]] = db_to_json(row[1])
return room_account_data
changed = self._account_data_stream_cache.has_entity_changed(
user_id, int(from_stream_id)
)
if not changed:
return {}
return await self.db_pool.runInteraction(
"get_updated_room_account_data_for_user_for_room",
get_updated_room_account_data_for_user_for_room_txn,
)
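Note the order of operations above: the per-user stream cache is consulted before running the transaction, so callers pay no database round trip when nothing changed in the window. A hedged sketch of a call site (the helper and stream positions are hypothetical):

    async def room_account_data_since(store, user_id: str, room_id: str,
                                      since_id: int, now_id: int) -> list:
        # Shape the changed per-room account data into sync-style events.
        changed = await store.get_updated_room_account_data_for_user_for_room(
            user_id=user_id,
            room_id=room_id,
            from_stream_id=since_id,
            to_stream_id=now_id,
        )
        return [
            {"type": data_type, "content": content}
            for data_type, content in changed.items()
        ]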
@cached(max_entries=5000, iterable=True)
async def ignored_by(self, user_id: str) -> FrozenSet[str]:
"""

View File

@@ -41,6 +41,7 @@ from synapse.storage.database import (
LoggingDatabaseConnection,
LoggingTransaction,
)
from synapse.storage.databases.main.events import SLIDING_SYNC_RELEVANT_STATE_SET
from synapse.storage.engines import PostgresEngine
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.util.caches.descriptors import CachedFunction
@@ -271,12 +272,20 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache(
"get_rooms_for_user", (data.state_key,)
)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", None
)
elif data.type == EventTypes.RoomEncryption:
self._attempt_to_invalidate_cache(
"get_room_encryption", (data.room_id,)
)
elif data.type == EventTypes.Create:
self._attempt_to_invalidate_cache("get_room_type", (data.room_id,))
if (data.type, data.state_key) in SLIDING_SYNC_RELEVANT_STATE_SET:
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", None
)
elif row.type == EventsStreamAllStateRow.TypeId:
assert isinstance(data, EventsStreamAllStateRow)
# Similar to the above, but the entire caches are invalidated. This is
@@ -285,6 +294,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache("get_rooms_for_user", None)
self._attempt_to_invalidate_cache("get_room_type", (data.room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (data.room_id,))
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
else:
raise Exception("Unknown events stream row type %s" % (row.type,))
@@ -365,6 +375,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
elif etype == EventTypes.RoomEncryption:
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
if (etype, state_key) in SLIDING_SYNC_RELEVANT_STATE_SET:
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
if relates_to:
self._attempt_to_invalidate_cache(
"get_relations_for_event",
@@ -458,6 +471,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache("get_account_data_for_room", None)
self._attempt_to_invalidate_cache("get_account_data_for_room_and_type", None)
self._attempt_to_invalidate_cache("get_tags_for_room", None)
self._attempt_to_invalidate_cache("get_aliases_for_room", (room_id,)) self._attempt_to_invalidate_cache("get_aliases_for_room", (room_id,))
self._attempt_to_invalidate_cache("get_latest_event_ids_in_room", (room_id,)) self._attempt_to_invalidate_cache("get_latest_event_ids_in_room", (room_id,))
self._attempt_to_invalidate_cache("_get_forward_extremeties_for_room", None) self._attempt_to_invalidate_cache("_get_forward_extremeties_for_room", None)
@ -477,6 +491,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache( self._attempt_to_invalidate_cache(
"get_current_hosts_in_room_ordered", (room_id,) "get_current_hosts_in_room_ordered", (room_id,)
) )
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache("did_forget", None) self._attempt_to_invalidate_cache("did_forget", None)
self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None) self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
self._attempt_to_invalidate_cache("_get_membership_from_event_id", None) self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)

View File

@@ -0,0 +1,523 @@
import logging
from typing import List, NewType, Optional, Tuple
import attr
from synapse.api.errors import NotFoundError
from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage.database import LoggingTransaction, StoreError
from synapse.storage.engines import PostgresEngine
from synapse.types import JsonDict, RoomID
from synapse.util import json_encoder, stringutils as stringutils
logger = logging.getLogger(__name__)
DelayID = NewType("DelayID", str)
UserLocalpart = NewType("UserLocalpart", str)
DeviceID = NewType("DeviceID", str)
EventType = NewType("EventType", str)
StateKey = NewType("StateKey", str)
Delay = NewType("Delay", int)
Timestamp = NewType("Timestamp", int)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class EventDetails:
room_id: RoomID
type: EventType
state_key: Optional[StateKey]
origin_server_ts: Optional[Timestamp]
content: JsonDict
device_id: Optional[DeviceID]
@attr.s(slots=True, frozen=True, auto_attribs=True)
class DelayedEventDetails(EventDetails):
delay_id: DelayID
user_localpart: UserLocalpart
class DelayedEventsStore(SQLBaseStore):
async def get_delayed_events_stream_pos(self) -> int:
"""
Gets the stream position of the background process to watch for state events
that target the same piece of state as any pending delayed events.
"""
return await self.db_pool.simple_select_one_onecol(
table="delayed_events_stream_pos",
keyvalues={},
retcol="stream_id",
desc="get_delayed_events_stream_pos",
)
async def update_delayed_events_stream_pos(self, stream_id: Optional[int]) -> None:
"""
Updates the stream position of the background process to watch for state events
that target the same piece of state as any pending delayed events.
Must only be used by the worker running the background process.
"""
await self.db_pool.simple_update_one(
table="delayed_events_stream_pos",
keyvalues={},
updatevalues={"stream_id": stream_id},
desc="update_delayed_events_stream_pos",
)
async def add_delayed_event(
self,
*,
user_localpart: str,
device_id: Optional[str],
creation_ts: Timestamp,
room_id: str,
event_type: str,
state_key: Optional[str],
origin_server_ts: Optional[int],
content: JsonDict,
delay: int,
) -> Tuple[DelayID, Timestamp]:
"""
Inserts a new delayed event in the DB.
Returns: The generated ID assigned to the added delayed event,
and the send time of the next delayed event to be sent,
which is either the event just added or one added earlier.
"""
delay_id = _generate_delay_id()
send_ts = Timestamp(creation_ts + delay)
def add_delayed_event_txn(txn: LoggingTransaction) -> Timestamp:
self.db_pool.simple_insert_txn(
txn,
table="delayed_events",
values={
"delay_id": delay_id,
"user_localpart": user_localpart,
"device_id": device_id,
"delay": delay,
"send_ts": send_ts,
"room_id": room_id,
"event_type": event_type,
"state_key": state_key,
"origin_server_ts": origin_server_ts,
"content": json_encoder.encode(content),
},
)
next_send_ts = self._get_next_delayed_event_send_ts_txn(txn)
assert next_send_ts is not None
return next_send_ts
next_send_ts = await self.db_pool.runInteraction(
"add_delayed_event", add_delayed_event_txn
)
return delay_id, next_send_ts
async def restart_delayed_event(
self,
*,
delay_id: str,
user_localpart: str,
current_ts: Timestamp,
) -> Timestamp:
"""
Restarts the send time of the matching delayed event,
as long as it hasn't already been marked for processing.
Args:
delay_id: The ID of the delayed event to restart.
user_localpart: The localpart of the delayed event's owner.
current_ts: The current time, which will be used to calculate the new send time.
Returns: The send time of the next delayed event to be sent,
which is either the event just restarted, or another one
with an earlier send time than the restarted one's new send time.
Raises:
NotFoundError: if there is no matching delayed event.
"""
def restart_delayed_event_txn(
txn: LoggingTransaction,
) -> Timestamp:
txn.execute(
"""
UPDATE delayed_events
SET send_ts = ? + delay
WHERE delay_id = ? AND user_localpart = ?
AND NOT is_processed
""",
(
current_ts,
delay_id,
user_localpart,
),
)
if txn.rowcount == 0:
raise NotFoundError("Delayed event not found")
next_send_ts = self._get_next_delayed_event_send_ts_txn(txn)
assert next_send_ts is not None
return next_send_ts
return await self.db_pool.runInteraction(
"restart_delayed_event", restart_delayed_event_txn
)
async def get_all_delayed_events_for_user(
self,
user_localpart: str,
) -> List[JsonDict]:
"""Returns all pending delayed events owned by the given user."""
# TODO: Support Pagination stream API ("next_batch" field)
rows = await self.db_pool.execute(
"get_all_delayed_events_for_user",
"""
SELECT
delay_id,
room_id,
event_type,
state_key,
delay,
send_ts,
content
FROM delayed_events
WHERE user_localpart = ? AND NOT is_processed
ORDER BY send_ts
""",
user_localpart,
)
return [
{
"delay_id": DelayID(row[0]),
"room_id": str(RoomID.from_string(row[1])),
"type": EventType(row[2]),
**({"state_key": StateKey(row[3])} if row[3] is not None else {}),
"delay": Delay(row[4]),
"running_since": Timestamp(row[5] - row[4]),
"content": db_to_json(row[6]),
}
for row in rows
]
async def process_timeout_delayed_events(
self, current_ts: Timestamp
) -> Tuple[
List[DelayedEventDetails],
Optional[Timestamp],
]:
"""
Marks for processing all delayed events that should have been sent prior to the provided time
that haven't already been marked as such.
Returns: The details of all newly-processed delayed events,
and the send time of the next delayed event to be sent, if any.
"""
def process_timeout_delayed_events_txn(
txn: LoggingTransaction,
) -> Tuple[
List[DelayedEventDetails],
Optional[Timestamp],
]:
sql_cols = ", ".join(
(
"delay_id",
"user_localpart",
"room_id",
"event_type",
"state_key",
"origin_server_ts",
"send_ts",
"content",
"device_id",
)
)
sql_update = "UPDATE delayed_events SET is_processed = TRUE"
sql_where = "WHERE send_ts <= ? AND NOT is_processed"
sql_args = (current_ts,)
sql_order = "ORDER BY send_ts"
if isinstance(self.database_engine, PostgresEngine):
# Do this only in Postgres because:
# - SQLite's RETURNING emits rows in an arbitrary order
# - https://www.sqlite.org/lang_returning.html#limitations_and_caveats
# - SQLite does not support data-modifying statements in a WITH clause
# - https://www.sqlite.org/lang_with.html
# - https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-MODIFYING
txn.execute(
f"""
WITH events_to_send AS (
{sql_update} {sql_where} RETURNING *
) SELECT {sql_cols} FROM events_to_send {sql_order}
""",
sql_args,
)
rows = txn.fetchall()
else:
txn.execute(
f"SELECT {sql_cols} FROM delayed_events {sql_where} {sql_order}",
sql_args,
)
rows = txn.fetchall()
txn.execute(f"{sql_update} {sql_where}", sql_args)
assert txn.rowcount == len(rows)
events = [
DelayedEventDetails(
RoomID.from_string(row[2]),
EventType(row[3]),
StateKey(row[4]) if row[4] is not None else None,
# If no origin_server_ts was provided, use send_ts as the event's timestamp
Timestamp(row[5] if row[5] is not None else row[6]),
db_to_json(row[7]),
DeviceID(row[8]) if row[8] is not None else None,
DelayID(row[0]),
UserLocalpart(row[1]),
)
for row in rows
]
next_send_ts = self._get_next_delayed_event_send_ts_txn(txn)
return events, next_send_ts
return await self.db_pool.runInteraction(
"process_timeout_delayed_events", process_timeout_delayed_events_txn
)
async def process_target_delayed_event(
self,
*,
delay_id: str,
user_localpart: str,
) -> Tuple[
EventDetails,
Optional[Timestamp],
]:
"""
Marks for processing the matching delayed event, regardless of its timeout time,
as long as it has not already been marked as such.
Args:
delay_id: The ID of the delayed event to process.
user_localpart: The localpart of the delayed event's owner.
Returns: The details of the matching delayed event,
and the send time of the next delayed event to be sent, if any.
Raises:
NotFoundError: if there is no matching delayed event.
"""
def process_target_delayed_event_txn(
txn: LoggingTransaction,
) -> Tuple[
EventDetails,
Optional[Timestamp],
]:
sql_cols = ", ".join(
(
"room_id",
"event_type",
"state_key",
"origin_server_ts",
"content",
"device_id",
)
)
sql_update = "UPDATE delayed_events SET is_processed = TRUE"
sql_where = "WHERE delay_id = ? AND user_localpart = ? AND NOT is_processed"
sql_args = (delay_id, user_localpart)
txn.execute(
(
f"{sql_update} {sql_where} RETURNING {sql_cols}"
if self.database_engine.supports_returning
else f"SELECT {sql_cols} FROM delayed_events {sql_where}"
),
sql_args,
)
row = txn.fetchone()
if row is None:
raise NotFoundError("Delayed event not found")
elif not self.database_engine.supports_returning:
txn.execute(f"{sql_update} {sql_where}", sql_args)
assert txn.rowcount == 1
event = EventDetails(
RoomID.from_string(row[0]),
EventType(row[1]),
StateKey(row[2]) if row[2] is not None else None,
Timestamp(row[3]) if row[3] is not None else None,
db_to_json(row[4]),
DeviceID(row[5]) if row[5] is not None else None,
)
return event, self._get_next_delayed_event_send_ts_txn(txn)
return await self.db_pool.runInteraction(
"process_target_delayed_event", process_target_delayed_event_txn
)
async def cancel_delayed_event(
self,
*,
delay_id: str,
user_localpart: str,
) -> Optional[Timestamp]:
"""
Cancels the matching delayed event, i.e. removes it, as long as it hasn't been processed.
Args:
delay_id: The ID of the delayed event to cancel.
user_localpart: The localpart of the delayed event's owner.
Returns: The send time of the next delayed event to be sent, if any.
Raises:
NotFoundError: if there is no matching delayed event.
"""
def cancel_delayed_event_txn(
txn: LoggingTransaction,
) -> Optional[Timestamp]:
try:
self.db_pool.simple_delete_one_txn(
txn,
table="delayed_events",
keyvalues={
"delay_id": delay_id,
"user_localpart": user_localpart,
"is_processed": False,
},
)
except StoreError:
if txn.rowcount == 0:
raise NotFoundError("Delayed event not found")
else:
raise
return self._get_next_delayed_event_send_ts_txn(txn)
return await self.db_pool.runInteraction(
"cancel_delayed_event", cancel_delayed_event_txn
)
async def cancel_delayed_state_events(
self,
*,
room_id: str,
event_type: str,
state_key: str,
) -> Optional[Timestamp]:
"""
Cancels all matching delayed state events, i.e. removes them as long as they haven't been processed.
Returns: The send time of the next delayed event to be sent, if any.
"""
def cancel_delayed_state_events_txn(
txn: LoggingTransaction,
) -> Optional[Timestamp]:
self.db_pool.simple_delete_txn(
txn,
table="delayed_events",
keyvalues={
"room_id": room_id,
"event_type": event_type,
"state_key": state_key,
"is_processed": False,
},
)
return self._get_next_delayed_event_send_ts_txn(txn)
return await self.db_pool.runInteraction(
"cancel_delayed_state_events", cancel_delayed_state_events_txn
)
async def delete_processed_delayed_event(
self,
delay_id: DelayID,
user_localpart: UserLocalpart,
) -> None:
"""
Delete the matching delayed event, as long as it has been marked as processed.
Throws:
StoreError: if there is no matching delayed event, or if it has not yet been processed.
"""
return await self.db_pool.simple_delete_one(
table="delayed_events",
keyvalues={
"delay_id": delay_id,
"user_localpart": user_localpart,
"is_processed": True,
},
desc="delete_processed_delayed_event",
)
async def delete_processed_delayed_state_events(
self,
*,
room_id: str,
event_type: str,
state_key: str,
) -> None:
"""
Delete the matching delayed state events that have been marked as processed.
"""
await self.db_pool.simple_delete(
table="delayed_events",
keyvalues={
"room_id": room_id,
"event_type": event_type,
"state_key": state_key,
"is_processed": True,
},
desc="delete_processed_delayed_state_events",
)
async def unprocess_delayed_events(self) -> None:
"""
Unmark all delayed events for processing.
"""
await self.db_pool.simple_update(
table="delayed_events",
keyvalues={"is_processed": True},
updatevalues={"is_processed": False},
desc="unprocess_delayed_events",
)
async def get_next_delayed_event_send_ts(self) -> Optional[Timestamp]:
"""
Returns the send time of the next delayed event to be sent, if any.
"""
return await self.db_pool.runInteraction(
"get_next_delayed_event_send_ts",
self._get_next_delayed_event_send_ts_txn,
db_autocommit=True,
)
def _get_next_delayed_event_send_ts_txn(
self, txn: LoggingTransaction
) -> Optional[Timestamp]:
result = self.db_pool.simple_select_one_onecol_txn(
txn,
table="delayed_events",
keyvalues={"is_processed": False},
retcol="MIN(send_ts)",
allow_none=True,
)
return Timestamp(result) if result is not None else None
def _generate_delay_id() -> DelayID:
"""Generates an opaque string, for use as a delay ID"""
# We use the following format for delay IDs:
# syd_<random string>
# They are scoped to user localparts, so it is possible for
# the same ID to exist for multiple users.
return DelayID(f"syd_{stringutils.random_string(20)}")
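So a generated ID looks like, e.g., syd_kTZbyhqFbXtnjUGSEV (the suffix is random). Because uniqueness is only guaranteed per user localpart, the store always queries on the (delay_id, user_localpart) pair. A toy sketch of the same idea (a stand-in for stringutils.random_string, not Synapse's helper):

    import random
    import string

    def toy_random_string(length: int) -> str:
        # Stand-in for synapse.util.stringutils.random_string.
        return "".join(random.choices(string.ascii_letters, k=length))

    delay_id = f"syd_{toy_random_string(20)}"  # opaque to clients
    lookup_key = (delay_id, "alice")  # always scoped to the owner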

View File

@@ -52,6 +52,7 @@ from synapse.storage.types import Cursor
from synapse.types import JsonDict, RoomStreamToken, StateMap, StrCollection
from synapse.types.handlers import SLIDING_SYNC_DEFAULT_BUMP_EVENT_TYPES
from synapse.types.state import StateFilter
from synapse.types.storage import _BackgroundUpdates
from synapse.util import json_encoder
from synapse.util.iterutils import batch_iter
@@ -76,34 +77,6 @@ _REPLACE_STREAM_ORDERING_SQL_COMMANDS = (
)
class _BackgroundUpdates:
EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
DELETE_SOFT_FAILED_EXTREMITIES = "delete_soft_failed_extremities"
POPULATE_STREAM_ORDERING2 = "populate_stream_ordering2"
INDEX_STREAM_ORDERING2 = "index_stream_ordering2"
INDEX_STREAM_ORDERING2_CONTAINS_URL = "index_stream_ordering2_contains_url"
INDEX_STREAM_ORDERING2_ROOM_ORDER = "index_stream_ordering2_room_order"
INDEX_STREAM_ORDERING2_ROOM_STREAM = "index_stream_ordering2_room_stream"
INDEX_STREAM_ORDERING2_TS = "index_stream_ordering2_ts"
REPLACE_STREAM_ORDERING_COLUMN = "replace_stream_ordering_column"
EVENT_EDGES_DROP_INVALID_ROWS = "event_edges_drop_invalid_rows"
EVENT_EDGES_REPLACE_INDEX = "event_edges_replace_index"
EVENTS_POPULATE_STATE_KEY_REJECTIONS = "events_populate_state_key_rejections"
EVENTS_JUMP_TO_DATE_INDEX = "events_jump_to_date_index"
SLIDING_SYNC_PREFILL_JOINED_ROOMS_TO_RECALCULATE_TABLE_BG_UPDATE = (
"sliding_sync_prefill_joined_rooms_to_recalculate_table_bg_update"
)
SLIDING_SYNC_JOINED_ROOMS_BG_UPDATE = "sliding_sync_joined_rooms_bg_update"
SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_BG_UPDATE = (
"sliding_sync_membership_snapshots_bg_update"
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class _CalculateChainCover:
"""Return value for _calculate_chain_cover_txn."""

View File

@@ -83,6 +83,7 @@ from synapse.storage.util.id_generators import (
from synapse.storage.util.sequence import build_sequence_generator
from synapse.types import JsonDict, get_domain_from_id
from synapse.types.state import StateFilter
from synapse.types.storage import _BackgroundUpdates
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import ObservableDeferred, delay_cancellation
from synapse.util.caches.descriptors import cached, cachedList
@@ -2465,3 +2466,84 @@ class EventsWorkerStore(SQLBaseStore):
)
self.invalidate_get_event_cache_after_txn(txn, event_id)
async def get_events_sent_by_user_in_room(
self, user_id: str, room_id: str, limit: int, filter: Optional[List[str]] = None
) -> Optional[List[str]]:
"""
Get a list of event ids of events sent by the user in the specified room
Args:
user_id: user ID to search against
room_id: room ID of the room to search for events in
filter: type of events to filter for
limit: maximum number of event ids to return
"""
def _get_events_by_user_in_room_txn(
txn: LoggingTransaction,
user_id: str,
room_id: str,
filter: Optional[List[str]],
batch_size: int,
offset: int,
) -> Tuple[Optional[List[str]], int]:
if filter:
base_clause, args = make_in_list_sql_clause(
txn.database_engine, "type", filter
)
clause = f"AND {base_clause}"
parameters = (user_id, room_id, *args, batch_size, offset)
else:
clause = ""
parameters = (user_id, room_id, batch_size, offset)
sql = f"""
SELECT event_id FROM events
WHERE sender = ? AND room_id = ?
{clause}
ORDER BY received_ts DESC
LIMIT ?
OFFSET ?
"""
txn.execute(sql, parameters)
res = txn.fetchall()
if res:
events = [row[0] for row in res]
else:
events = None
return events, offset + batch_size
offset = 0
batch_size = 100
if batch_size > limit:
batch_size = limit
selected_ids: List[str] = []
while offset < limit:
res, offset = await self.db_pool.runInteraction(
"get_events_by_user",
_get_events_by_user_in_room_txn,
user_id,
room_id,
filter,
batch_size,
offset,
)
if res:
selected_ids = selected_ids + res
else:
break
return selected_ids
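A sketch of how a caller such as the admin redaction task might drive this helper, paging through up to 1000 events 100 rows at a time under the hood (the event-type filter here is illustrative, not necessarily what the redaction handler uses):

    async def collect_redaction_targets(store, user_id: str, room_id: str):
        event_ids = await store.get_events_sent_by_user_in_room(
            user_id,
            room_id,
            limit=1000,
            filter=["m.room.message"],  # illustrative filter
        )
        return event_ids or []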
async def have_finished_sliding_sync_background_jobs(self) -> bool:
"""Return if it's safe to use the sliding sync membership tables."""
return await self.db_pool.updates.have_completed_background_updates(
(
_BackgroundUpdates.SLIDING_SYNC_PREFILL_JOINED_ROOMS_TO_RECALCULATE_TABLE_BG_UPDATE,
_BackgroundUpdates.SLIDING_SYNC_JOINED_ROOMS_BG_UPDATE,
_BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_BG_UPDATE,
)
)

View File

@@ -41,6 +41,7 @@ import attr
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.logging.opentracing import trace
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import wrap_as_background_process
@@ -51,7 +52,6 @@ from synapse.storage.database import (
LoggingTransaction,
)
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.storage.databases.main.events_bg_updates import _BackgroundUpdates
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.engines import Sqlite3Engine
from synapse.storage.roommember import (
@@ -1365,6 +1365,9 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
self._invalidate_cache_and_stream(
txn, self.get_forgotten_rooms_for_user, (user_id,)
)
self._invalidate_cache_and_stream(
txn, self.get_sliding_sync_rooms_for_user, (user_id,)
)
await self.db_pool.runInteraction("forget_membership", f) await self.db_pool.runInteraction("forget_membership", f)
@ -1401,7 +1404,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
) -> Mapping[str, RoomsForUserSlidingSync]: ) -> Mapping[str, RoomsForUserSlidingSync]:
"""Get all the rooms for a user to handle a sliding sync request. """Get all the rooms for a user to handle a sliding sync request.
Ignores forgotten rooms and rooms that the user has been kicked from. Ignores forgotten rooms and rooms that the user has left themselves.
Returns: Returns:
Map from room ID to membership info Map from room ID to membership info
@ -1410,10 +1413,15 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
def get_sliding_sync_rooms_for_user_txn( def get_sliding_sync_rooms_for_user_txn(
txn: LoggingTransaction, txn: LoggingTransaction,
) -> Dict[str, RoomsForUserSlidingSync]: ) -> Dict[str, RoomsForUserSlidingSync]:
# XXX: If you use any new columns that can change (like from
# `sliding_sync_joined_rooms` or `forgotten`), make sure to bust the
# `get_sliding_sync_rooms_for_user` cache in the appropriate places (and add
# tests).
sql = """ sql = """
SELECT m.room_id, m.sender, m.membership, m.membership_event_id, SELECT m.room_id, m.sender, m.membership, m.membership_event_id,
r.room_version, r.room_version,
m.event_instance_name, m.event_stream_ordering, m.event_instance_name, m.event_stream_ordering,
m.has_known_state,
COALESCE(j.room_type, m.room_type),
COALESCE(j.is_encrypted, m.is_encrypted)
FROM sliding_sync_membership_snapshots AS m
@@ -1421,6 +1429,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
LEFT JOIN sliding_sync_joined_rooms AS j ON (j.room_id = m.room_id AND m.membership = 'join')
WHERE user_id = ?
AND m.forgotten = 0
AND (m.membership != 'leave' OR m.user_id != m.sender)
""" """
txn.execute(sql, (user_id,)) txn.execute(sql, (user_id,))
return { return {
@ -1431,10 +1440,15 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
event_id=row[3], event_id=row[3],
room_version_id=row[4], room_version_id=row[4],
event_pos=PersistedEventPosition(row[5], row[6]), event_pos=PersistedEventPosition(row[5], row[6]),
room_type=row[7], has_known_state=bool(row[7]),
is_encrypted=row[8], room_type=row[8],
is_encrypted=bool(row[9]),
) )
for row in txn for row in txn
# We filter out unknown room versions proactively. They
# shouldn't go down sync and their metadata may be in a broken
# state (causing errors).
if row[4] in KNOWN_ROOM_VERSIONS
}
return await self.db_pool.runInteraction(
@@ -1442,15 +1456,47 @@
get_sliding_sync_rooms_for_user_txn,
)
async def have_finished_sliding_sync_background_jobs(self) -> bool:
"""Return if it's safe to use the sliding sync membership tables."""
return await self.db_pool.updates.have_completed_background_updates(
(
_BackgroundUpdates.SLIDING_SYNC_PREFILL_JOINED_ROOMS_TO_RECALCULATE_TABLE_BG_UPDATE,
_BackgroundUpdates.SLIDING_SYNC_JOINED_ROOMS_BG_UPDATE,
_BackgroundUpdates.SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_BG_UPDATE,
)
)
async def get_sliding_sync_room_for_user(
self, user_id: str, room_id: str
) -> Optional[RoomsForUserSlidingSync]:
"""Get the sliding sync room entry for the given user and room."""
def get_sliding_sync_room_for_user_txn(
txn: LoggingTransaction,
) -> Optional[RoomsForUserSlidingSync]:
sql = """
SELECT m.room_id, m.sender, m.membership, m.membership_event_id,
r.room_version,
m.event_instance_name, m.event_stream_ordering,
m.has_known_state,
COALESCE(j.room_type, m.room_type),
COALESCE(j.is_encrypted, m.is_encrypted)
FROM sliding_sync_membership_snapshots AS m
INNER JOIN rooms AS r USING (room_id)
LEFT JOIN sliding_sync_joined_rooms AS j ON (j.room_id = m.room_id AND m.membership = 'join')
WHERE user_id = ?
AND m.forgotten = 0
AND m.room_id = ?
"""
txn.execute(sql, (user_id, room_id))
row = txn.fetchone()
if not row:
return None
return RoomsForUserSlidingSync(
room_id=row[0],
sender=row[1],
membership=row[2],
event_id=row[3],
room_version_id=row[4],
event_pos=PersistedEventPosition(row[5], row[6]),
has_known_state=bool(row[7]),
room_type=row[8],
is_encrypted=row[9],
) )
return await self.db_pool.runInteraction(
"get_sliding_sync_room_for_user", get_sliding_sync_room_for_user_txn
) )
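The new `WHERE` clause is what implements the docstring change above: a kick is stored as a `leave` membership whose `sender` differs from the target `user_id`, so `(m.membership != 'leave' OR m.user_id != m.sender)` drops only self-leaves while keeping kicks. A minimal Python sketch of the predicate (illustrative only, not Synapse code):

def keep_membership(membership: str, user_id: str, sender: str) -> bool:
    # Mirrors `(m.membership != 'leave' OR m.user_id != m.sender)`: keep
    # everything except leave events the user sent about themselves.
    return membership != "leave" or user_id != sender

assert keep_membership("join", "@a:hs", "@a:hs")       # joined rooms stay
assert keep_membership("leave", "@a:hs", "@mod:hs")    # kicks stay
assert not keep_membership("leave", "@a:hs", "@a:hs")  # self-leaves are dropped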


@@ -267,6 +267,15 @@ class SlidingSyncStore(SQLBaseStore):
                (have_sent_room.status.value, have_sent_room.last_token)
            )

+        for (
+            room_id,
+            have_sent_room,
+        ) in per_connection_state.account_data._statuses.items():
+            key_values.append((connection_position, "account_data", room_id))
+            value_values.append(
+                (have_sent_room.status.value, have_sent_room.last_token)
+            )
+
        self.db_pool.simple_upsert_many_txn(
            txn,
            table="sliding_sync_connection_streams",
@@ -407,6 +416,7 @@ class SlidingSyncStore(SQLBaseStore):
        # Now look up the per-room stream data.
        rooms: Dict[str, HaveSentRoom[str]] = {}
        receipts: Dict[str, HaveSentRoom[str]] = {}
+        account_data: Dict[str, HaveSentRoom[str]] = {}

        receipt_rows = self.db_pool.simple_select_list_txn(
            txn,
@@ -427,6 +437,8 @@ class SlidingSyncStore(SQLBaseStore):
                rooms[room_id] = have_sent_room
            elif stream == "receipts":
                receipts[room_id] = have_sent_room
+            elif stream == "account_data":
+                account_data[room_id] = have_sent_room
            else:
                # For forwards compatibility we ignore unknown streams, as in
                # future we want to be able to easily add more stream types.
@@ -435,6 +447,7 @@ class SlidingSyncStore(SQLBaseStore):
        return PerConnectionStateDB(
            rooms=RoomStatusMap(rooms),
            receipts=RoomStatusMap(receipts),
+            account_data=RoomStatusMap(account_data),
            room_configs=room_configs,
        )
@@ -452,6 +465,7 @@ class PerConnectionStateDB:

    rooms: "RoomStatusMap[str]"
    receipts: "RoomStatusMap[str]"
+    account_data: "RoomStatusMap[str]"

    room_configs: Mapping[str, "RoomSyncConfig"]
@@ -484,10 +498,21 @@ class PerConnectionStateDB:
            for room_id, status in per_connection_state.receipts.get_updates().items()
        }

+        account_data = {
+            room_id: HaveSentRoom(
+                status=status.status,
+                last_token=(
+                    str(status.last_token) if status.last_token is not None else None
+                ),
+            )
+            for room_id, status in per_connection_state.account_data.get_updates().items()
+        }
+
        log_kv(
            {
                "rooms": rooms,
                "receipts": receipts,
+                "account_data": account_data,
                "room_configs": per_connection_state.room_configs.maps[0],
            }
        )
@@ -495,6 +520,7 @@ class PerConnectionStateDB:
        return PerConnectionStateDB(
            rooms=RoomStatusMap(rooms),
            receipts=RoomStatusMap(receipts),
+            account_data=RoomStatusMap(account_data),
            room_configs=per_connection_state.room_configs.maps[0],
        )
@@ -524,8 +550,19 @@ class PerConnectionStateDB:
            for room_id, status in self.receipts._statuses.items()
        }

+        account_data = {
+            room_id: HaveSentRoom(
+                status=status.status,
+                last_token=(
+                    int(status.last_token) if status.last_token is not None else None
+                ),
+            )
+            for room_id, status in self.account_data._statuses.items()
+        }
+
        return PerConnectionState(
            rooms=RoomStatusMap(rooms),
            receipts=RoomStatusMap(receipts),
+            account_data=RoomStatusMap(account_data),
            room_configs=self.room_configs,
        )
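`PerConnectionStateDB` stores every per-room token as a string, so the account-data position (a plain int in memory) is stringified on the way into the DB and parsed back on the way out, with `None` surviving as "no token yet". A tiny round-trip check (illustrative only):

from typing import Optional

def token_to_db(last_token: Optional[int]) -> Optional[str]:
    # Matches the str(...) conversion above; None survives as NULL.
    return str(last_token) if last_token is not None else None

def token_from_db(last_token: Optional[str]) -> Optional[int]:
    # Matches the int(...) conversion above.
    return int(last_token) if last_token is not None else None

for token in (None, 0, 42):
    assert token_from_db(token_to_db(token)) == token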


@@ -308,8 +308,24 @@ class StateGroupWorkerStore(EventsWorkerStore, SQLBaseStore):
        return create_event

    @cached(max_entries=10000)
-    async def get_room_type(self, room_id: str) -> Optional[str]:
-        raise NotImplementedError()
+    async def get_room_type(self, room_id: str) -> Union[Optional[str], Sentinel]:
+        """Fetch the room type for the given room.
+
+        Since this function is cached, any missing value would otherwise be
+        cached as `None`. To distinguish a room whose type genuinely is `None`
+        from a room that is simply unknown to the server, we instead cache the
+        sentinel value `ROOM_UNKNOWN_SENTINEL` for unknown rooms.
+        """
+        try:
+            create_event = await self.get_create_event_for_room(room_id)
+            return create_event.content.get(EventContentFields.ROOM_TYPE)
+        except NotFoundError:
+            # We use the sentinel value to distinguish between `None`, which is a
+            # valid room type, and a room that is unknown to the server, where the
+            # value is just unset.
+            return ROOM_UNKNOWN_SENTINEL

    @cachedList(cached_method_name="get_room_type", list_name="room_ids")
    async def bulk_get_room_type(
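The sentinel exists because a cache cannot distinguish "we looked and the answer is None" from "we never found the room". A toy version of the pattern (plain dict instead of Synapse's `@cached` decorator):

from typing import Dict, Optional, Union

ROOM_UNKNOWN_SENTINEL = object()
_cache: Dict[str, object] = {}

def get_room_type_toy(
    room_id: str, known_rooms: Dict[str, Optional[str]]
) -> Union[Optional[str], object]:
    # Cache None only when it is a real answer ("room exists, has no type");
    # cache the sentinel when the room is unknown, so later cache hits can
    # still tell the two cases apart.
    if room_id not in _cache:
        if room_id in known_rooms:
            _cache[room_id] = known_rooms[room_id]  # may legitimately be None
        else:
            _cache[room_id] = ROOM_UNKNOWN_SENTINEL
    return _cache[room_id]

assert get_room_type_toy("!space", {"!space": "m.space"}) == "m.space"
assert get_room_type_toy("!dm", {"!dm": None}) is None
assert get_room_type_toy("!gone", {}) is ROOM_UNKNOWN_SENTINEL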


@@ -941,6 +941,12 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
        Returns:
            All membership changes to the current state in the token range. Events are
            sorted by `stream_ordering` ascending.
+
+            `event_id`/`sender` can be `None` when the server leaves a room (meaning
+            everyone locally left) or a state reset which removed the person from the
+            room. We can't tell the difference between the two cases with what's
+            available in the `current_state_delta_stream` table. To actually check for a
+            state reset, you need to check if a membership still exists in the room.
        """
        # Start by ruling out cases where a DB query is not necessary.
        if from_key == to_key:
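A toy restatement of the disambiguation rule the new docstring describes (plain data, no Synapse API):

def classify_none_sender(membership_still_in_room: bool) -> str:
    # Per the docstring above: if a membership still exists in the room, the
    # None sender/event_id came from a state reset; otherwise everyone on
    # this server left the room.
    return "state reset" if membership_still_in_room else "server left room"

assert classify_none_sender(True) == "state reset"
assert classify_none_sender(False) == "server left room"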
@@ -1052,6 +1058,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
                    membership=(
                        membership if membership is not None else Membership.LEAVE
                    ),
+                    # This will also be null for the same reasons if `s.event_id = null`
                    sender=sender,
                    # Prev event
                    prev_event_id=prev_event_id,
@@ -1469,6 +1476,10 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
        recheck_rooms: Set[str] = set()
        min_token = end_token.stream
        for room_id, stream in uncapped_results.items():
+            if stream is None:
+                # Despite the function not directly setting None, the cache can!
+                # See: https://github.com/element-hq/synapse/issues/17726
+                continue
            if stream <= min_token:
                results[room_id] = stream
            else:
@@ -1495,7 +1506,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
    @cachedList(cached_method_name="_get_max_event_pos", list_name="room_ids")
    async def _bulk_get_max_event_pos(
        self, room_ids: StrCollection
-    ) -> Mapping[str, int]:
+    ) -> Mapping[str, Optional[int]]:
        """Fetch the max position of a persisted event in the room."""

        # We need to be careful not to return positions ahead of the current
@@ -1524,7 +1535,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
        # majority of rooms will have a latest token from before the min stream
        # pos.

-        def bulk_get_max_event_pos_txn(
+        def bulk_get_max_event_pos_fallback_txn(
            txn: LoggingTransaction, batched_room_ids: StrCollection
        ) -> Dict[str, int]:
            clause, args = make_in_list_sql_clause(
@@ -1547,14 +1558,40 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
            txn.execute(sql, [max_pos] + args)
            return {row[0]: row[1] for row in txn}

+        # It's easier to look at the `sliding_sync_joined_rooms` table and avoid all of
+        # the joins and sub-queries.
+        def bulk_get_max_event_pos_from_sliding_sync_tables_txn(
+            txn: LoggingTransaction, batched_room_ids: StrCollection
+        ) -> Dict[str, int]:
+            clause, args = make_in_list_sql_clause(
+                self.database_engine, "room_id", batched_room_ids
+            )
+            sql = f"""
+                SELECT room_id, event_stream_ordering
+                FROM sliding_sync_joined_rooms
+                WHERE {clause}
+                ORDER BY event_stream_ordering DESC
+            """
+            txn.execute(sql, args)
+            return {row[0]: row[1] for row in txn}
+
        recheck_rooms: Set[str] = set()
        for batched in batch_iter(room_ids, 1000):
-            batch_results = await self.db_pool.runInteraction(
-                "_bulk_get_max_event_pos", bulk_get_max_event_pos_txn, batched
-            )
+            if await self.have_finished_sliding_sync_background_jobs():
+                batch_results = await self.db_pool.runInteraction(
+                    "bulk_get_max_event_pos_from_sliding_sync_tables_txn",
+                    bulk_get_max_event_pos_from_sliding_sync_tables_txn,
+                    batched,
+                )
+            else:
+                batch_results = await self.db_pool.runInteraction(
+                    "bulk_get_max_event_pos_fallback_txn",
+                    bulk_get_max_event_pos_fallback_txn,
+                    batched,
+                )
            for room_id, stream_ordering in batch_results.items():
                if stream_ordering <= now_token.stream:
-                    results.update(batch_results)
+                    results[room_id] = stream_ordering
                else:
                    recheck_rooms.add(room_id)
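Whichever code path produced the batch, the capping logic is the same, and the per-room assignment also fixes the old `results.update(batch_results)`, which promoted a whole batch as soon as one room qualified. A self-contained sketch of the split (illustrative only):

from typing import Dict, Set, Tuple

def split_by_token(
    batch_results: Dict[str, int], max_stream: int
) -> Tuple[Dict[str, int], Set[str]]:
    # Positions past `max_stream` may lie ahead of the requested token, so
    # those rooms must be re-queried with a cap rather than returned as-is.
    results: Dict[str, int] = {}
    recheck: Set[str] = set()
    for room_id, stream_ordering in batch_results.items():
        if stream_ordering <= max_stream:
            results[room_id] = stream_ordering
        else:
            recheck.add(room_id)
    return results, recheck

assert split_by_token({"!a:hs": 5, "!b:hs": 12}, 10) == ({"!a:hs": 5}, {"!b:hs"})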


@@ -158,9 +158,56 @@ class TagsWorkerStore(AccountDataWorkerStore):

        return results

+    async def has_tags_changed_for_room(
+        self,
+        # Since there are multiple arguments with the same type, force keyword arguments
+        # so people don't accidentally swap the order
+        *,
+        user_id: str,
+        room_id: str,
+        from_stream_id: int,
+        to_stream_id: int,
+    ) -> bool:
+        """Check if the user's tags for a room have been updated in the token range
+        (> `from_stream_id` and <= `to_stream_id`)
+
+        Args:
+            user_id: The user to get tags for
+            room_id: The room to get tags for
+            from_stream_id: The point in the stream to fetch from
+            to_stream_id: The point in the stream to fetch to
+
+        Returns:
+            True if the user's tags for the room changed in the token range,
+            False otherwise.
+        """
+
+        # Shortcut if no room has changed for the user
+        changed = self._account_data_stream_cache.has_entity_changed(
+            user_id, int(from_stream_id)
+        )
+        if not changed:
+            return False
+
+        last_change_position_for_room = await self.db_pool.simple_select_one_onecol(
+            table="room_tags_revisions",
+            keyvalues={"user_id": user_id, "room_id": room_id},
+            retcol="stream_id",
+            allow_none=True,
+        )
+
+        if last_change_position_for_room is None:
+            return False
+
+        return (
+            last_change_position_for_room > from_stream_id
+            and last_change_position_for_room <= to_stream_id
+        )
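The token range is exclusive at the bottom and inclusive at the top: a change landing exactly at `from_stream_id` counts as already seen. A toy check of the bounds (illustrative only):

def in_token_range(pos: int, from_stream_id: int, to_stream_id: int) -> bool:
    # (> from_stream_id and <= to_stream_id), as the docstring above states.
    return from_stream_id < pos <= to_stream_id

assert not in_token_range(5, from_stream_id=5, to_stream_id=10)  # at from-token: already sent
assert in_token_range(10, from_stream_id=5, to_stream_id=10)     # at to-token: included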
+    @cached(num_args=2, tree=True)
    async def get_tags_for_room(
        self, user_id: str, room_id: str
-    ) -> Dict[str, JsonDict]:
+    ) -> Mapping[str, JsonMapping]:
        """Get all the tags for the given room

        Args:
@@ -182,7 +229,7 @@ class TagsWorkerStore(AccountDataWorkerStore):
        return {tag: db_to_json(content) for tag, content in rows}

    async def add_tag_to_room(
-        self, user_id: str, room_id: str, tag: str, content: JsonDict
+        self, user_id: str, room_id: str, tag: str, content: JsonMapping
    ) -> int:
        """Add a tag to a room for a user.
@@ -213,6 +260,7 @@ class TagsWorkerStore(AccountDataWorkerStore):
        await self.db_pool.runInteraction("add_tag", add_tag_txn, next_id)

        self.get_tags_for_user.invalidate((user_id,))
+        self.get_tags_for_room.invalidate((user_id, room_id))

        return self._account_data_id_gen.get_current_token()
@@ -237,6 +285,7 @@ class TagsWorkerStore(AccountDataWorkerStore):
        await self.db_pool.runInteraction("remove_tag", remove_tag_txn, next_id)

        self.get_tags_for_user.invalidate((user_id,))
+        self.get_tags_for_room.invalidate((user_id, room_id))

        return self._account_data_id_gen.get_current_token()
@@ -290,9 +339,19 @@ class TagsWorkerStore(AccountDataWorkerStore):
        rows: Iterable[Any],
    ) -> None:
        if stream_name == AccountDataStream.NAME:
-            for row in rows:
+            # Cast is safe because the `AccountDataStream` should only be giving us
+            # `AccountDataStreamRow`
+            account_data_stream_rows: List[AccountDataStream.AccountDataStreamRow] = (
+                cast(List[AccountDataStream.AccountDataStreamRow], rows)
+            )
+            for row in account_data_stream_rows:
                if row.data_type == AccountDataTypes.TAG:
                    self.get_tags_for_user.invalidate((row.user_id,))
+                    if row.room_id:
+                        self.get_tags_for_room.invalidate((row.user_id, row.room_id))
+                    else:
+                        self.get_tags_for_room.invalidate((row.user_id,))
                self._account_data_stream_cache.entity_has_changed(
                    row.user_id, token
                )
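`get_tags_for_room` is cached with `tree=True`, which is what makes both invalidation shapes used above legal: `(user_id, room_id)` evicts a single entry, while the shorter `(user_id,)` prefix evicts every room for that user. A toy model of prefix invalidation (not Synapse's TreeCache):

from typing import Dict, Tuple

class PrefixCacheToy:
    def __init__(self) -> None:
        self._entries: Dict[Tuple[str, ...], object] = {}

    def set(self, key: Tuple[str, ...], value: object) -> None:
        self._entries[key] = value

    def invalidate(self, prefix: Tuple[str, ...]) -> None:
        # Drop every entry whose key starts with `prefix`.
        for key in [k for k in self._entries if k[: len(prefix)] == prefix]:
            del self._entries[key]

cache = PrefixCacheToy()
cache.set(("@u:hs", "!room1"), {"m.favourite": {}})
cache.set(("@u:hs", "!room2"), {})
cache.invalidate(("@u:hs",))  # wipes both rooms for the user
assert not cache._entries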


@@ -48,6 +48,7 @@ class RoomsForUserSlidingSync:
    event_pos: PersistedEventPosition
    room_version_id: str

+    has_known_state: bool
    room_type: Optional[str]
    is_encrypted: bool


@@ -2,7 +2,7 @@
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2021 The Matrix.org Foundation C.I.C.
-# Copyright (C) 2023 New Vector, Ltd
+# Copyright (C) 2023-2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -19,7 +19,7 @@
#
#

-SCHEMA_VERSION = 87  # remember to update the list below when updating
+SCHEMA_VERSION = 88  # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema

This should be incremented whenever the codebase changes its requirements on the
@@ -149,6 +149,10 @@ Changes in SCHEMA_VERSION = 87
    - Add tables for storing the per-connection state for sliding sync requests:
      sliding_sync_connections, sliding_sync_connection_positions, sliding_sync_connection_required_state,
      sliding_sync_connection_room_configs, sliding_sync_connection_streams
+
+Changes in SCHEMA_VERSION = 88
+    - MSC4140: Add `delayed_events` table that keeps track of events that are to
+      be posted in response to a resettable timeout or an on-demand action.
"""


@@ -0,0 +1,30 @@
CREATE TABLE delayed_events (
delay_id TEXT NOT NULL,
user_localpart TEXT NOT NULL,
device_id TEXT,
delay BIGINT NOT NULL,
send_ts BIGINT NOT NULL,
room_id TEXT NOT NULL,
event_type TEXT NOT NULL,
state_key TEXT,
origin_server_ts BIGINT,
content bytea NOT NULL,
is_processed BOOLEAN NOT NULL DEFAULT FALSE,
PRIMARY KEY (user_localpart, delay_id)
);
CREATE INDEX delayed_events_send_ts ON delayed_events (send_ts);
CREATE INDEX delayed_events_is_processed ON delayed_events (is_processed);
CREATE INDEX delayed_events_room_state_event_idx ON delayed_events (room_id, event_type, state_key) WHERE state_key IS NOT NULL;
CREATE TABLE delayed_events_stream_pos (
Lock CHAR(1) NOT NULL DEFAULT 'X' UNIQUE, -- Makes sure this table only has one row.
stream_id BIGINT NOT NULL,
CHECK (Lock='X')
);
-- Start processing events from the point this migration was run, rather
-- than the beginning of time.
INSERT INTO delayed_events_stream_pos (
stream_id
) SELECT COALESCE(MAX(stream_ordering), 0) from events;
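The `delayed_events_stream_pos` table uses the classic single-row trick: the `Lock CHAR(1) ... CHECK (Lock='X')` column guarantees at most one row. As for the two indexes on `delayed_events`, a hedged guess at the read pattern they serve (the constant below is illustrative, not part of the migration): unprocessed events whose send time has arrived, oldest first.

# Illustrative only (not part of the migration): the query shape that
# `delayed_events_send_ts` and `delayed_events_is_processed` keep cheap.
FETCH_DUE_DELAYED_EVENTS_SQL = """
    SELECT user_localpart, delay_id, room_id, event_type, state_key, content
    FROM delayed_events
    WHERE NOT is_processed AND send_ts <= ?
    ORDER BY send_ts
"""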


@@ -37,23 +37,20 @@ from typing import (

import attr

-from synapse._pydantic_compat import HAS_PYDANTIC_V2
+from synapse._pydantic_compat import Extra
from synapse.api.constants import EventTypes
-from synapse.types import MultiWriterStreamToken, RoomStreamToken, StrCollection, UserID
-
-if TYPE_CHECKING or HAS_PYDANTIC_V2:
-    from pydantic.v1 import Extra
-else:
-    from pydantic import Extra
-
from synapse.events import EventBase
from synapse.types import (
    DeviceListUpdates,
    JsonDict,
    JsonMapping,
+    MultiWriterStreamToken,
    Requester,
+    RoomStreamToken,
    SlidingSyncStreamToken,
+    StrCollection,
    StreamToken,
+    UserID,
)
from synapse.types.rest.client import SlidingSyncBody
@@ -317,8 +314,8 @@ class SlidingSyncResult:
        """The Account Data extension (MSC3959)

        Attributes:
-            global_account_data_map: Mapping from `type` to `content` of global account
-                data events.
+            global_account_data_map: Mapping from `type` to `content` of global
+                account data events.
            account_data_by_room_map: Mapping from room_id to mapping of `type` to
                `content` of room account data events.
        """
@@ -678,7 +675,7 @@ class HaveSentRoomFlag(Enum):
    LIVE = "live"

-T = TypeVar("T", str, RoomStreamToken, MultiWriterStreamToken)
+T = TypeVar("T", str, RoomStreamToken, MultiWriterStreamToken, int)

@attr.s(auto_attribs=True, slots=True, frozen=True)
@@ -826,6 +823,7 @@ class PerConnectionState:

    rooms: RoomStatusMap[RoomStreamToken] = attr.Factory(RoomStatusMap)
    receipts: RoomStatusMap[MultiWriterStreamToken] = attr.Factory(RoomStatusMap)
+    account_data: RoomStatusMap[int] = attr.Factory(RoomStatusMap)

    room_configs: Mapping[str, RoomSyncConfig] = attr.Factory(dict)
@@ -836,6 +834,7 @@ class PerConnectionState:
        return MutablePerConnectionState(
            rooms=self.rooms.get_mutable(),
            receipts=self.receipts.get_mutable(),
+            account_data=self.account_data.get_mutable(),
            room_configs=ChainMap({}, room_configs),
        )
@@ -843,6 +842,7 @@ class PerConnectionState:
        return PerConnectionState(
            rooms=self.rooms.copy(),
            receipts=self.receipts.copy(),
+            account_data=self.account_data.copy(),
            room_configs=dict(self.room_configs),
        )
@@ -856,6 +856,7 @@ class MutablePerConnectionState(PerConnectionState):

    rooms: MutableRoomStatusMap[RoomStreamToken]
    receipts: MutableRoomStatusMap[MultiWriterStreamToken]
+    account_data: MutableRoomStatusMap[int]

    room_configs: typing.ChainMap[str, RoomSyncConfig]
@@ -863,6 +864,7 @@ class MutablePerConnectionState(PerConnectionState):
        return (
            bool(self.rooms.get_updates())
            or bool(self.receipts.get_updates())
+            or bool(self.account_data.get_updates())
            or bool(self.get_room_config_updates())
        )
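Adding `int` to `T`'s constraints is the enabling change for `RoomStatusMap[int]`: account data has no structured token type, so its per-room position is a bare stream ID. A toy demonstration of a value-constrained TypeVar (illustrative only):

from typing import Dict, Generic, TypeVar

T = TypeVar("T", str, int)  # T must be exactly str or int

class StatusMapToy(Generic[T]):
    # Stand-in for RoomStatusMap: remember the last token sent per room.
    def __init__(self) -> None:
        self._tokens: Dict[str, T] = {}

    def record(self, room_id: str, token: T) -> None:
        self._tokens[room_id] = token

account_data: "StatusMapToy[int]" = StatusMapToy()  # plain ints as tokens
account_data.record("!room:hs", 42)
assert account_data._tokens["!room:hs"] == 42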


@@ -18,14 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
-from typing import TYPE_CHECKING
-
-from synapse._pydantic_compat import HAS_PYDANTIC_V2
-
-if TYPE_CHECKING or HAS_PYDANTIC_V2:
-    from pydantic.v1 import BaseModel, Extra
-else:
-    from pydantic import BaseModel, Extra
+from synapse._pydantic_compat import BaseModel, Extra


class RequestBodyModel(BaseModel):
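Both this file and the next swap the inline version check for direct imports, which implies `synapse._pydantic_compat` now performs the v1/v2 branching once and re-exports the names. A plausible shape for such a shim (an assumption; the real module may differ):

# Sketch of a pydantic v1/v2 compat shim (assumption -- the real
# synapse._pydantic_compat may differ): branch once, re-export the names.
import pydantic

HAS_PYDANTIC_V2 = pydantic.VERSION.startswith("2.")

if HAS_PYDANTIC_V2:
    # pydantic 2 ships the old v1 API under pydantic.v1
    from pydantic.v1 import BaseModel, Extra, StrictBool, StrictInt, StrictStr
    from pydantic.v1 import conint, constr, validator
else:
    from pydantic import BaseModel, Extra, StrictBool, StrictInt, StrictStr
    from pydantic import conint, constr, validator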


@@ -20,29 +20,15 @@
#
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union

-from synapse._pydantic_compat import HAS_PYDANTIC_V2
-
-if TYPE_CHECKING or HAS_PYDANTIC_V2:
-    from pydantic.v1 import (
-        Extra,
-        StrictBool,
-        StrictInt,
-        StrictStr,
-        conint,
-        constr,
-        validator,
-    )
-else:
-    from pydantic import (
-        Extra,
-        StrictBool,
-        StrictInt,
-        StrictStr,
-        conint,
-        constr,
-        validator,
-    )
+from synapse._pydantic_compat import (
+    Extra,
+    StrictBool,
+    StrictInt,
+    StrictStr,
+    conint,
+    constr,
+    validator,
+)

from synapse.types.rest import RequestBodyModel
from synapse.util.threepids import validate_email
@@ -384,7 +370,7 @@ class SlidingSyncBody(RequestBodyModel):
        receipts: Optional[ReceiptsExtension] = None
        typing: Optional[TypingExtension] = None

-    conn_id: Optional[str]
+    conn_id: Optional[StrictStr]

    # mypy workaround via https://github.com/pydantic/pydantic/issues/156#issuecomment-1130883884
    if TYPE_CHECKING:


@@ -0,0 +1,47 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
class _BackgroundUpdates:
EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
DELETE_SOFT_FAILED_EXTREMITIES = "delete_soft_failed_extremities"
POPULATE_STREAM_ORDERING2 = "populate_stream_ordering2"
INDEX_STREAM_ORDERING2 = "index_stream_ordering2"
INDEX_STREAM_ORDERING2_CONTAINS_URL = "index_stream_ordering2_contains_url"
INDEX_STREAM_ORDERING2_ROOM_ORDER = "index_stream_ordering2_room_order"
INDEX_STREAM_ORDERING2_ROOM_STREAM = "index_stream_ordering2_room_stream"
INDEX_STREAM_ORDERING2_TS = "index_stream_ordering2_ts"
REPLACE_STREAM_ORDERING_COLUMN = "replace_stream_ordering_column"
EVENT_EDGES_DROP_INVALID_ROWS = "event_edges_drop_invalid_rows"
EVENT_EDGES_REPLACE_INDEX = "event_edges_replace_index"
EVENTS_POPULATE_STATE_KEY_REJECTIONS = "events_populate_state_key_rejections"
EVENTS_JUMP_TO_DATE_INDEX = "events_jump_to_date_index"
SLIDING_SYNC_PREFILL_JOINED_ROOMS_TO_RECALCULATE_TABLE_BG_UPDATE = (
"sliding_sync_prefill_joined_rooms_to_recalculate_table_bg_update"
)
SLIDING_SYNC_JOINED_ROOMS_BG_UPDATE = "sliding_sync_joined_rooms_bg_update"
SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_BG_UPDATE = (
"sliding_sync_membership_snapshots_bg_update"
)


@@ -139,6 +139,10 @@ async def filter_events_for_client(
                room_id
            ] = await storage.main.get_retention_policy_for_room(room_id)

+    # meow: let admins see secret events like org.matrix.dummy_event, m.room.aliases
+    # and events expired by the retention policy.
+    filter_override = user_id in storage.hs.config.meow.filter_override
+
    def allowed(event: EventBase) -> Optional[EventBase]:
        state_after_event = event_id_to_state.get(event.event_id)
        filtered = _check_client_allowed_to_see_event(
@@ -152,6 +156,7 @@ async def filter_events_for_client(
            state=state_after_event,
            is_peeking=is_peeking,
            sender_erased=erased_senders.get(event.sender, False),
+            filter_override=filter_override,
        )
        if filtered is None:
            return None
@@ -328,6 +333,7 @@ def _check_client_allowed_to_see_event(
    retention_policy: RetentionPolicy,
    state: Optional[StateMap[EventBase]],
    sender_erased: bool,
+    filter_override: bool,
) -> Optional[EventBase]:
    """Check whether the given user is allowed to see the given event

@@ -344,6 +350,7 @@ def _check_client_allowed_to_see_event(
        retention_policy: The retention policy of the room
        state: The state at the event, unless it's an outlier
        sender_erased: Whether the event sender has been marked as "erased"
+        filter_override: meow: whether to skip most send-to-client filtering for this user

    Returns:
        None if the user cannot see this event at all
@@ -357,7 +364,7 @@ def _check_client_allowed_to_see_event(
    # because, if this is not the case, we're probably only checking if the users can
    # see events in the room at that point in the DAG, and that shouldn't be decided
    # on those checks.
-    if filter_send_to_client:
+    if filter_send_to_client and not filter_override:
        if (
            _check_filter_send_to_client(event, clock, retention_policy, sender_ignored)
            == _CheckFilter.DENIED
@@ -367,6 +374,9 @@ def _check_client_allowed_to_see_event(
                event.event_id,
            )
            return None
+    # meow: even with filter_override, we want to filter ignored users
+    elif filter_send_to_client and not event.is_state() and sender_ignored:
+        return None

    if event.event_id in always_include_ids:
        return event
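Condensed, the gating above gives override users a pass on the send-to-client filter while still suppressing non-state events from ignored senders. A self-contained restatement of the decision (illustrative, not the Synapse function):

def passes_meow_filter(
    filter_send_to_client: bool,
    filter_override: bool,
    denied_by_filter: bool,
    is_state: bool,
    sender_ignored: bool,
) -> bool:
    if filter_send_to_client and not filter_override:
        if denied_by_filter:
            return False
    elif filter_send_to_client and not is_state and sender_ignored:
        # Even with filter_override, ignored users' non-state events are dropped.
        return False
    return True

# An override user still doesn't see non-state events from ignored senders:
assert not passes_meow_filter(True, True, False, False, True)
# But does see events the normal filter would deny:
assert passes_meow_filter(True, True, True, False, False)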

File diff suppressed because it is too large


@@ -21,9 +21,11 @@

import hashlib
import hmac
+import json
import os
import urllib.parse
from binascii import unhexlify
+from http import HTTPStatus
from typing import Dict, List, Optional
from unittest.mock import AsyncMock, Mock, patch
@@ -33,7 +35,7 @@ from twisted.test.proto_helpers import MemoryReactor
from twisted.web.resource import Resource

import synapse.rest.admin
-from synapse.api.constants import ApprovalNoticeMedium, LoginType, UserTypes
+from synapse.api.constants import ApprovalNoticeMedium, EventTypes, LoginType, UserTypes
from synapse.api.errors import Codes, HttpResponseException, ResourceLimitError
from synapse.api.room_versions import RoomVersions
from synapse.media.filepath import MediaFilePaths
@@ -5089,3 +5091,271 @@ class UserSuspensionTestCase(unittest.HomeserverTestCase):
        res5 = self.get_success(self.store.get_user_suspended_status(self.bad_user))
        self.assertEqual(True, res5)
class UserRedactionTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
admin.register_servlets,
room.register_servlets,
sync.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.admin = self.register_user("thomas", "pass", True)
self.admin_tok = self.login("thomas", "pass")
self.bad_user = self.register_user("teresa", "pass")
self.bad_user_tok = self.login("teresa", "pass")
self.store = hs.get_datastores().main
self.spam_checker = hs.get_module_api_callbacks().spam_checker
# create rooms - room versions 11+ store the `redacts` key in content while
# earlier ones don't so we use a mix of room versions
self.rm1 = self.helper.create_room_as(
self.admin, tok=self.admin_tok, room_version="7"
)
self.rm2 = self.helper.create_room_as(self.admin, tok=self.admin_tok)
self.rm3 = self.helper.create_room_as(
self.admin, tok=self.admin_tok, room_version="11"
)
def test_redact_messages_all_rooms(self) -> None:
"""
Test that a request to redact events in all rooms the user is a member of is successful
"""
# join rooms, send some messages
originals = []
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
originals.append(join["event_id"])
for i in range(15):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok, expect_code=200
)
originals.append(res["event_id"])
# redact all events in all rooms
channel = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": []},
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
matched = []
for rm in [self.rm1, self.rm2, self.rm3]:
filter = json.dumps({"types": [EventTypes.Redaction]})
channel = self.make_request(
"GET",
f"rooms/{rm}/messages?filter={filter}&limit=50",
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
for event in channel.json_body["chunk"]:
for event_id in originals:
if (
event["type"] == "m.room.redaction"
and event["redacts"] == event_id
):
matched.append(event_id)
self.assertEqual(len(matched), len(originals))
def test_redact_messages_specific_rooms(self) -> None:
"""
Test that a request to redact events in specified rooms the user is a member of is successful
"""
originals = []
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
originals.append(join["event_id"])
for i in range(15):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok
)
originals.append(res["event_id"])
# redact messages in rooms 1 and 3
channel = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm1, self.rm3]},
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
# messages in requested rooms are redacted
for rm in [self.rm1, self.rm3]:
filter = json.dumps({"types": [EventTypes.Redaction]})
channel = self.make_request(
"GET",
f"rooms/{rm}/messages?filter={filter}&limit=50",
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
matches = []
for event in channel.json_body["chunk"]:
for event_id in originals:
if (
event["type"] == "m.room.redaction"
and event["redacts"] == event_id
):
matches.append((event_id, event))
# we redacted 16 messages
self.assertEqual(len(matches), 16)
channel = self.make_request(
"GET", f"rooms/{self.rm2}/messages?limit=50", access_token=self.admin_tok
)
self.assertEqual(channel.code, 200)
# messages in remaining room are not
for event in channel.json_body["chunk"]:
if event["type"] == "m.room.redaction":
self.fail("found redaction in room 2")
def test_redact_status(self) -> None:
rm2_originals = []
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
if rm == self.rm2:
rm2_originals.append(join["event_id"])
for i in range(5):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok
)
if rm == self.rm2:
rm2_originals.append(res["event_id"])
# redact messages in rooms 1 and 3
channel = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm1, self.rm3]},
access_token=self.admin_tok,
)
self.assertEqual(channel.code, 200)
id = channel.json_body.get("redact_id")
channel2 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id}",
access_token=self.admin_tok,
)
self.assertEqual(channel2.code, 200)
self.assertEqual(channel2.json_body.get("status"), "complete")
self.assertEqual(channel2.json_body.get("failed_redactions"), {})
# mock that will cause persisting the redaction events to fail
async def check_event_for_spam(event: str) -> str:
return "spam"
self.spam_checker.check_event_for_spam = check_event_for_spam # type: ignore
channel3 = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm2]},
access_token=self.admin_tok,
)
self.assertEqual(channel3.code, 200)
id = channel3.json_body.get("redact_id")
channel4 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id}",
access_token=self.admin_tok,
)
self.assertEqual(channel4.code, 200)
self.assertEqual(channel4.json_body.get("status"), "complete")
failed_redactions = channel4.json_body.get("failed_redactions")
assert failed_redactions is not None
matched = []
for original in rm2_originals:
if failed_redactions.get(original) is not None:
matched.append(original)
self.assertEqual(len(matched), len(rm2_originals))
def test_admin_redact_works_if_user_kicked_or_banned(self) -> None:
originals = []
for rm in [self.rm1, self.rm2, self.rm3]:
join = self.helper.join(rm, self.bad_user, tok=self.bad_user_tok)
originals.append(join["event_id"])
for i in range(5):
event = {"body": f"hello{i}", "msgtype": "m.text"}
res = self.helper.send_event(
rm, "m.room.message", event, tok=self.bad_user_tok
)
originals.append(res["event_id"])
# kick user from rooms 1 and 2
for r in [self.rm1, self.rm2]:
channel = self.make_request(
"POST",
f"/_matrix/client/r0/rooms/{r}/kick",
content={"reason": "being a bummer", "user_id": self.bad_user},
access_token=self.admin_tok,
)
self.assertEqual(channel.code, HTTPStatus.OK, channel.result)
# redact messages in room 1 and 3
channel1 = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm1, self.rm3]},
access_token=self.admin_tok,
)
self.assertEqual(channel1.code, 200)
id = channel1.json_body.get("redact_id")
# check that there were no failed redactions in room 1 and 3
channel2 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id}",
access_token=self.admin_tok,
)
self.assertEqual(channel2.code, 200)
self.assertEqual(channel2.json_body.get("status"), "complete")
failed_redactions = channel2.json_body.get("failed_redactions")
self.assertEqual(failed_redactions, {})
# ban user
channel3 = self.make_request(
"POST",
f"/_matrix/client/r0/rooms/{self.rm2}/ban",
content={"reason": "being a bummer", "user_id": self.bad_user},
access_token=self.admin_tok,
)
self.assertEqual(channel3.code, HTTPStatus.OK, channel3.result)
# redact messages in room 2
channel4 = self.make_request(
"POST",
f"/_synapse/admin/v1/user/{self.bad_user}/redact",
content={"rooms": [self.rm2]},
access_token=self.admin_tok,
)
self.assertEqual(channel4.code, 200)
id2 = channel4.json_body.get("redact_id")
# check that there were no failed redactions in room 2
channel5 = self.make_request(
"GET",
f"/_synapse/admin/v1/user/redact_status/{id2}",
access_token=self.admin_tok,
)
self.assertEqual(channel5.code, 200)
self.assertEqual(channel5.json_body.get("status"), "complete")
failed_redactions = channel5.json_body.get("failed_redactions")
self.assertEqual(failed_redactions, {})


@@ -11,9 +11,11 @@
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
+import enum
import logging

-from parameterized import parameterized_class
+from parameterized import parameterized, parameterized_class
+from typing_extensions import assert_never

from twisted.test.proto_helpers import MemoryReactor
@@ -30,6 +32,11 @@ from tests.server import TimedOutException
logger = logging.getLogger(__name__)


+class TagAction(enum.Enum):
+    ADD = enum.auto()
+    REMOVE = enum.auto()
+
+
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
@@ -80,18 +87,23 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
        }
        response_body, _ = self.do_sync(sync_body, tok=user1_tok)

+        global_account_data_map = {
+            global_event["type"]: global_event["content"]
+            for global_event in response_body["extensions"]["account_data"].get(
+                "global"
+            )
+        }
        self.assertIncludes(
-            {
-                global_event["type"]
-                for global_event in response_body["extensions"]["account_data"].get(
-                    "global"
-                )
-            },
+            global_account_data_map.keys(),
            # Even though we don't have any global account data set, Synapse saves some
            # default push rules for us.
            {AccountDataTypes.PUSH_RULES},
            exact=True,
        )
+        # Push rules are a giant chunk of JSON data so we will just assume the value is correct if the key is here.
+        # global_account_data_map[AccountDataTypes.PUSH_RULES]
+
+        # No room account data for this test
        self.assertIncludes(
            response_body["extensions"]["account_data"].get("rooms").keys(),
            set(),
@@ -121,16 +133,19 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):

        # There have been no account data changes since the `from_token` so we shouldn't
        # see any account data here.
+        global_account_data_map = {
+            global_event["type"]: global_event["content"]
+            for global_event in response_body["extensions"]["account_data"].get(
+                "global"
+            )
+        }
        self.assertIncludes(
-            {
-                global_event["type"]
-                for global_event in response_body["extensions"]["account_data"].get(
-                    "global"
-                )
-            },
+            global_account_data_map.keys(),
            set(),
            exact=True,
        )
+
+        # No room account data for this test
        self.assertIncludes(
            response_body["extensions"]["account_data"].get("rooms").keys(),
            set(),
@@ -165,16 +180,24 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
        response_body, _ = self.do_sync(sync_body, tok=user1_tok)

        # It should show us all of the global account data
+        global_account_data_map = {
+            global_event["type"]: global_event["content"]
+            for global_event in response_body["extensions"]["account_data"].get(
+                "global"
+            )
+        }
        self.assertIncludes(
-            {
-                global_event["type"]
-                for global_event in response_body["extensions"]["account_data"].get(
-                    "global"
-                )
-            },
+            global_account_data_map.keys(),
            {AccountDataTypes.PUSH_RULES, "org.matrix.foobarbaz"},
            exact=True,
        )
+        # Push rules are a giant chunk of JSON data so we will just assume the value is correct if the key is here.
+        # global_account_data_map[AccountDataTypes.PUSH_RULES]
+        self.assertEqual(
+            global_account_data_map["org.matrix.foobarbaz"], {"foo": "bar"}
+        )
+
+        # No room account data for this test
        self.assertIncludes(
            response_body["extensions"]["account_data"].get("rooms").keys(),
            set(),
@@ -220,17 +243,23 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):

        # Make an incremental Sliding Sync request with the account_data extension enabled
        response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)

+        global_account_data_map = {
+            global_event["type"]: global_event["content"]
+            for global_event in response_body["extensions"]["account_data"].get(
+                "global"
+            )
+        }
        self.assertIncludes(
-            {
-                global_event["type"]
-                for global_event in response_body["extensions"]["account_data"].get(
-                    "global"
-                )
-            },
+            global_account_data_map.keys(),
            # We should only see the new global account data that happened after the `from_token`
            {"org.matrix.doodardaz"},
            exact=True,
        )
+        self.assertEqual(
+            global_account_data_map["org.matrix.doodardaz"], {"doo": "dar"}
+        )
+
+        # No room account data for this test
        self.assertIncludes(
            response_body["extensions"]["account_data"].get("rooms").keys(),
            set(),
@@ -255,6 +284,15 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
                content={"roo": "rar"},
            )
        )
+        # Add a room tag to mark the room as a favourite
+        self.get_success(
+            self.account_data_handler.add_tag_to_room(
+                user_id=user1_id,
+                room_id=room_id1,
+                tag="m.favourite",
+                content={},
+            )
+        )

        # Create another room with some room account data
        room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
@@ -266,6 +304,15 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
                content={"roo": "rar"},
            )
        )
+        # Add a room tag to mark the room as a favourite
+        self.get_success(
+            self.account_data_handler.add_tag_to_room(
+                user_id=user1_id,
+                room_id=room_id2,
+                tag="m.favourite",
+                content={},
+            )
+        )

        # Make an initial Sliding Sync request with the account_data extension enabled
        sync_body = {
@@ -294,21 +341,36 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
            {room_id1},
            exact=True,
        )
+        account_data_map = {
+            event["type"]: event["content"]
+            for event in response_body["extensions"]["account_data"]
+            .get("rooms")
+            .get(room_id1)
+        }
        self.assertIncludes(
-            {
-                event["type"]
-                for event in response_body["extensions"]["account_data"]
-                .get("rooms")
-                .get(room_id1)
-            },
-            {"org.matrix.roorarraz"},
+            account_data_map.keys(),
+            {"org.matrix.roorarraz", AccountDataTypes.TAG},
            exact=True,
        )
+        self.assertEqual(account_data_map["org.matrix.roorarraz"], {"roo": "rar"})
+        self.assertEqual(
+            account_data_map[AccountDataTypes.TAG], {"tags": {"m.favourite": {}}}
+        )

-    def test_room_account_data_incremental_sync(self) -> None:
+    @parameterized.expand(
+        [
+            ("add tags", TagAction.ADD),
+            ("remove tags", TagAction.REMOVE),
+        ]
+    )
+    def test_room_account_data_incremental_sync(
+        self, test_description: str, tag_action: TagAction
+    ) -> None:
        """
        On incremental sync, we return all account data for a given room but only for
        rooms that we request and are being returned in the Sliding Sync response.
+        (HaveSentRoomFlag.LIVE)
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
@@ -323,6 +385,15 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
                content={"roo": "rar"},
            )
        )
+        # Add a room tag to mark the room as a favourite
+        self.get_success(
+            self.account_data_handler.add_tag_to_room(
+                user_id=user1_id,
+                room_id=room_id1,
+                tag="m.favourite",
+                content={},
+            )
+        )

        # Create another room with some room account data
        room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
@@ -334,6 +405,15 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
                content={"roo": "rar"},
            )
        )
+        # Add a room tag to mark the room as a favourite
+        self.get_success(
+            self.account_data_handler.add_tag_to_room(
+                user_id=user1_id,
+                room_id=room_id2,
+                tag="m.favourite",
+                content={},
+            )
+        )

        sync_body = {
            "lists": {},
@@ -369,6 +449,42 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
                content={"roo": "rar"},
            )
        )
+        if tag_action == TagAction.ADD:
+            # Add another room tag
+            self.get_success(
+                self.account_data_handler.add_tag_to_room(
+                    user_id=user1_id,
+                    room_id=room_id1,
+                    tag="m.server_notice",
+                    content={},
+                )
+            )
+            self.get_success(
+                self.account_data_handler.add_tag_to_room(
+                    user_id=user1_id,
+                    room_id=room_id2,
+                    tag="m.server_notice",
+                    content={},
+                )
+            )
+        elif tag_action == TagAction.REMOVE:
+            # Remove the room tag
+            self.get_success(
+                self.account_data_handler.remove_tag_from_room(
+                    user_id=user1_id,
+                    room_id=room_id1,
+                    tag="m.favourite",
+                )
+            )
+            self.get_success(
+                self.account_data_handler.remove_tag_from_room(
+                    user_id=user1_id,
+                    room_id=room_id2,
+                    tag="m.favourite",
+                )
+            )
+        else:
+            assert_never(tag_action)

        # Make an incremental Sliding Sync request with the account_data extension enabled
        response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
@@ -383,16 +499,443 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
            exact=True,
        )
        # We should only see the new room account data that happened after the `from_token`
+        account_data_map = {
+            event["type"]: event["content"]
+            for event in response_body["extensions"]["account_data"]
+            .get("rooms")
+            .get(room_id1)
+        }
        self.assertIncludes(
-            {
-                event["type"]
-                for event in response_body["extensions"]["account_data"]
-                .get("rooms")
-                .get(room_id1)
-            },
-            {"org.matrix.roorarraz2"},
+            account_data_map.keys(),
+            {"org.matrix.roorarraz2", AccountDataTypes.TAG},
            exact=True,
        )
+        self.assertEqual(account_data_map["org.matrix.roorarraz2"], {"roo": "rar"})
+        if tag_action == TagAction.ADD:
+            self.assertEqual(
+                account_data_map[AccountDataTypes.TAG],
+                {"tags": {"m.favourite": {}, "m.server_notice": {}}},
+            )
+        elif tag_action == TagAction.REMOVE:
+            # If we previously showed the client that the room has tags, when it no
+            # longer has tags, we need to show them an empty map.
+            self.assertEqual(
+                account_data_map[AccountDataTypes.TAG],
+                {"tags": {}},
+            )
+        else:
+            assert_never(tag_action)
@parameterized.expand(
[
("add tags", TagAction.ADD),
("remove tags", TagAction.REMOVE),
]
)
def test_room_account_data_incremental_sync_out_of_range_never(
self, test_description: str, tag_action: TagAction
) -> None:
"""Tests that we don't return account data for rooms that are out of
range, but then do send all account data once they're in range.
(initial/HaveSentRoomFlag.NEVER)
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# Create a room and add some room account data
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
self.get_success(
self.account_data_handler.add_account_data_to_room(
user_id=user1_id,
room_id=room_id1,
account_data_type="org.matrix.roorarraz",
content={"roo": "rar"},
)
)
# Add a room tag to mark the room as a favourite
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.favourite",
content={},
)
)
# Create another room with some room account data
room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
self.get_success(
self.account_data_handler.add_account_data_to_room(
user_id=user1_id,
room_id=room_id2,
account_data_type="org.matrix.roorarraz",
content={"roo": "rar"},
)
)
# Add a room tag to mark the room as a favourite
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.favourite",
content={},
)
)
# Now send a message into room1 so that it is at the top of the list
self.helper.send(room_id1, body="new event", tok=user1_tok)
# Make a SS request for only the top room.
sync_body = {
"lists": {
"main": {
"ranges": [[0, 0]],
"required_state": [],
"timeline_limit": 0,
}
},
"extensions": {
"account_data": {
"enabled": True,
"lists": ["main"],
}
},
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# Only room1 should be in the response since it's the latest room with activity
# and our range only includes 1 room.
self.assertIncludes(
response_body["extensions"]["account_data"].get("rooms").keys(),
{room_id1},
exact=True,
)
# Add some other room account data
self.get_success(
self.account_data_handler.add_account_data_to_room(
user_id=user1_id,
room_id=room_id1,
account_data_type="org.matrix.roorarraz2",
content={"roo": "rar"},
)
)
self.get_success(
self.account_data_handler.add_account_data_to_room(
user_id=user1_id,
room_id=room_id2,
account_data_type="org.matrix.roorarraz2",
content={"roo": "rar"},
)
)
if tag_action == TagAction.ADD:
# Add another room tag
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.server_notice",
content={},
)
)
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.server_notice",
content={},
)
)
elif tag_action == TagAction.REMOVE:
# Remove the room tag
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id1,
tag="m.favourite",
)
)
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id2,
tag="m.favourite",
)
)
else:
assert_never(tag_action)
# Move room2 into range.
self.helper.send(room_id2, body="new event", tok=user1_tok)
# Make an incremental Sliding Sync request with the account_data extension enabled
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
self.assertIsNotNone(response_body["extensions"]["account_data"].get("global"))
# We expect to see the account data of room2, as that has the most
# recent update.
self.assertIncludes(
response_body["extensions"]["account_data"].get("rooms").keys(),
{room_id2},
exact=True,
)
# Since this is the first time we're seeing room2 down sync, we should see all
# room account data for it.
account_data_map = {
event["type"]: event["content"]
for event in response_body["extensions"]["account_data"]
.get("rooms")
.get(room_id2)
}
expected_account_data_keys = {
"org.matrix.roorarraz",
"org.matrix.roorarraz2",
}
if tag_action == TagAction.ADD:
expected_account_data_keys.add(AccountDataTypes.TAG)
self.assertIncludes(
account_data_map.keys(),
expected_account_data_keys,
exact=True,
)
self.assertEqual(account_data_map["org.matrix.roorarraz"], {"roo": "rar"})
self.assertEqual(account_data_map["org.matrix.roorarraz2"], {"roo": "rar"})
if tag_action == TagAction.ADD:
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {"m.favourite": {}, "m.server_notice": {}}},
)
elif tag_action == TagAction.REMOVE:
# Since we never told the client about the room tags, we don't need to say
# anything if there are no tags now (the client doesn't need an update).
self.assertIsNone(
account_data_map.get(AccountDataTypes.TAG),
account_data_map,
)
else:
assert_never(tag_action)
@parameterized.expand(
[
("add tags", TagAction.ADD),
("remove tags", TagAction.REMOVE),
]
)
def test_room_account_data_incremental_sync_out_of_range_previously(
self, test_description: str, tag_action: TagAction
) -> None:
"""Tests that we don't return account data for rooms that fall out of
range, but then do send all account data that has changed once they're back in range.
(HaveSentRoomFlag.PREVIOUSLY)
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# Create a room and add some room account data
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
self.get_success(
self.account_data_handler.add_account_data_to_room(
user_id=user1_id,
room_id=room_id1,
account_data_type="org.matrix.roorarraz",
content={"roo": "rar"},
)
)
# Add a room tag to mark the room as a favourite
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.favourite",
content={},
)
)
# Create another room with some room account data
room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok)
self.get_success(
self.account_data_handler.add_account_data_to_room(
user_id=user1_id,
room_id=room_id2,
account_data_type="org.matrix.roorarraz",
content={"roo": "rar"},
)
)
# Add a room tag to mark the room as a favourite
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.favourite",
content={},
)
)
# Make an initial Sliding Sync request for only room1 and room2.
sync_body = {
"lists": {},
"room_subscriptions": {
room_id1: {
"required_state": [],
"timeline_limit": 0,
},
room_id2: {
"required_state": [],
"timeline_limit": 0,
},
},
"extensions": {
"account_data": {
"enabled": True,
"rooms": [room_id1, room_id2],
}
},
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# Both rooms show up because we have a room subscription for each and they're
# requested in the `account_data` extension.
self.assertIncludes(
response_body["extensions"]["account_data"].get("rooms").keys(),
{room_id1, room_id2},
exact=True,
)
# Add some other room account data
self.get_success(
self.account_data_handler.add_account_data_to_room(
user_id=user1_id,
room_id=room_id1,
account_data_type="org.matrix.roorarraz2",
content={"roo": "rar"},
)
)
self.get_success(
self.account_data_handler.add_account_data_to_room(
user_id=user1_id,
room_id=room_id2,
account_data_type="org.matrix.roorarraz2",
content={"roo": "rar"},
)
)
if tag_action == TagAction.ADD:
# Add another room tag
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id1,
tag="m.server_notice",
content={},
)
)
self.get_success(
self.account_data_handler.add_tag_to_room(
user_id=user1_id,
room_id=room_id2,
tag="m.server_notice",
content={},
)
)
elif tag_action == TagAction.REMOVE:
# Remove the room tag
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id1,
tag="m.favourite",
)
)
self.get_success(
self.account_data_handler.remove_tag_from_room(
user_id=user1_id,
room_id=room_id2,
tag="m.favourite",
)
)
else:
assert_never(tag_action)
# Make an incremental Sliding Sync request for just room1
response_body, from_token = self.do_sync(
{
**sync_body,
"room_subscriptions": {
room_id1: {
"required_state": [],
"timeline_limit": 0,
},
},
},
since=from_token,
tok=user1_tok,
)
# Only room1 shows up because we only have a room subscription for room1 now.
self.assertIncludes(
response_body["extensions"]["account_data"].get("rooms").keys(),
{room_id1},
exact=True,
)
# Make an incremental Sliding Sync request for just room2 now
response_body, from_token = self.do_sync(
{
**sync_body,
"room_subscriptions": {
room_id2: {
"required_state": [],
"timeline_limit": 0,
},
},
},
since=from_token,
tok=user1_tok,
)
# Only room2 shows up because we only have a room subscription for room2 now.
self.assertIncludes(
response_body["extensions"]["account_data"].get("rooms").keys(),
{room_id2},
exact=True,
)
self.assertIsNotNone(response_body["extensions"]["account_data"].get("global"))
# Check for room account data for room2
self.assertIncludes(
response_body["extensions"]["account_data"].get("rooms").keys(),
{room_id2},
exact=True,
)
# We should see any room account data updates for room2 since the last
# time we saw it down sync
account_data_map = {
event["type"]: event["content"]
for event in response_body["extensions"]["account_data"]
.get("rooms")
.get(room_id2)
}
self.assertIncludes(
account_data_map.keys(),
{"org.matrix.roorarraz2", AccountDataTypes.TAG},
exact=True,
)
self.assertEqual(account_data_map["org.matrix.roorarraz2"], {"roo": "rar"})
if tag_action == TagAction.ADD:
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {"m.favourite": {}, "m.server_notice": {}}},
)
elif tag_action == TagAction.REMOVE:
# If we previously showed the client that the room has tags, when it no
# longer has tags, we need to show them an empty map.
self.assertEqual(
account_data_map[AccountDataTypes.TAG],
{"tags": {}},
)
else:
assert_never(tag_action)
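
    # Editorial sketch (not part of the diff): the three per-connection states
    # the docstrings above refer to, per HaveSentRoomFlag. The state names come
    # from the docstrings; the behaviour column paraphrases what these tests
    # exercise, and the real tracking also records stream tokens so the server
    # knows what "changed since" means.
    _HAVE_SENT_ROOM_STATES = {
        "NEVER": "room never sent down this connection -> send all account data",
        "PREVIOUSLY": "sent before, then fell out of range -> send changes on return",
        "LIVE": "sent down the previous sync -> send only new updates",
    }
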
    def test_wait_for_new_data(self) -> None:
        """

(File diff suppressed because it is too large)

@@ -935,7 +935,8 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
        op = response_body["lists"]["foo-list"]["ops"][0]
        self.assertEqual(op["op"], "SYNC")
        self.assertEqual(op["range"], [0, 1])
-        # Note that we don't order the ops anymore, so we need to compare sets.
+        # Note that we don't sort the rooms when the range includes all of the rooms, so
+        # we just assert that the rooms are included
        self.assertIncludes(set(op["room_ids"]), {room_id1, room_id2}, exact=True)
        # The `bump_stamp` for room1 should point at the latest message (not the
@@ -1139,3 +1140,113 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
        self.assertEqual(
            response_body["rooms"][room_id]["bump_stamp"], invite_pos.stream
        )
def test_rooms_meta_is_dm(self) -> None:
"""
Test `rooms` `is_dm` is correctly set for DM rooms.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
# Create a DM room
joined_dm_room_id = self._create_dm_room(
inviter_user_id=user1_id,
inviter_tok=user1_tok,
invitee_user_id=user2_id,
invitee_tok=user2_tok,
should_join_room=True,
)
invited_dm_room_id = self._create_dm_room(
inviter_user_id=user1_id,
inviter_tok=user1_tok,
invitee_user_id=user2_id,
invitee_tok=user2_tok,
should_join_room=False,
)
# Create a normal room
room_id = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id, user1_id, tok=user1_tok)
# Create a room that user1 is invited to
invite_room_id = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.invite(invite_room_id, src=user2_id, targ=user1_id, tok=user2_tok)
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 99]],
"required_state": [],
"timeline_limit": 0,
}
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
        # Ensure DMs are correctly marked
self.assertDictEqual(
{
room_id: room.get("is_dm")
for room_id, room in response_body["rooms"].items()
},
{
invite_room_id: None,
room_id: None,
invited_dm_room_id: True,
joined_dm_room_id: True,
},
)
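
    # Editorial sketch (not part of the diff): `is_dm` is ultimately driven by
    # the user's global `m.direct` account data, which maps a user ID to the
    # rooms that are DMs with them (shape per the Matrix spec). The
    # `_create_dm_room` helper used above is assumed to set this up alongside
    # an `is_direct` invite. IDs here are illustrative.
    _example_m_direct_content = {
        "@user2:test": ["!joined_dm:test", "!invited_dm:test"],
    }
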
def test_old_room_with_unknown_room_version(self) -> None:
"""Test that an old room with unknown room version does not break
sync."""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# We first create a standard room, then we'll change the room version in
# the DB.
room_id = self.helper.create_room_as(
user1_id,
tok=user1_tok,
)
# Poke the database and update the room version to an unknown one.
self.get_success(
self.hs.get_datastores().main.db_pool.simple_update(
"rooms",
keyvalues={"room_id": room_id},
updatevalues={"room_version": "unknown-room-version"},
desc="updated-room-version",
)
)
# Invalidate method so that it returns the currently updated version
# instead of the cached version.
self.hs.get_datastores().main.get_room_version_id.invalidate((room_id,))
# For old unknown room versions we won't have an entry in this table
# (due to us skipping unknown room versions in the background update).
self.get_success(
self.store.db_pool.simple_delete(
table="sliding_sync_joined_rooms",
keyvalues={"room_id": room_id},
desc="delete_sliding_room",
)
)
# Also invalidate some caches to ensure we pull things from the DB.
self.store._events_stream_cache._entity_to_key.pop(room_id)
self.store._get_max_event_pos.invalidate((room_id,))
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [],
"timeline_limit": 5,
}
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
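
    # Editorial sketch (hypothetical helper, not part of the diff): the reason
    # the fake version must not crash sync is that Synapse resolves version
    # strings via the KNOWN_ROOM_VERSIONS map, so an unknown string yields
    # None rather than a RoomVersion, and the sync path has to tolerate that.
    def _sketch_unknown_version_lookup(self) -> None:
        from synapse.api.room_versions import KNOWN_ROOM_VERSIONS

        self.assertIsNone(KNOWN_ROOM_VERSIONS.get("unknown-room-version"))
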

(File diff suppressed because it is too large)

@@ -0,0 +1,346 @@
"""Tests REST events for /delayed_events paths."""
from http import HTTPStatus
from typing import List
from parameterized import parameterized
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.errors import Codes
from synapse.rest.client import delayed_events, room
from synapse.server import HomeServer
from synapse.types import JsonDict
from synapse.util import Clock
from tests.unittest import HomeserverTestCase
PATH_PREFIX = "/_matrix/client/unstable/org.matrix.msc4140/delayed_events"
_HS_NAME = "red"
_EVENT_TYPE = "com.example.test"
class DelayedEventsTestCase(HomeserverTestCase):
"""Tests getting and managing delayed events."""
servlets = [delayed_events.register_servlets, room.register_servlets]
user_id = f"@sid1:{_HS_NAME}"
def default_config(self) -> JsonDict:
config = super().default_config()
config["server_name"] = _HS_NAME
config["max_event_delay_duration"] = "24h"
return config
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.room_id = self.helper.create_room_as(
self.user_id,
extra_content={
"preset": "trusted_private_chat",
},
)
def test_delayed_events_empty_on_startup(self) -> None:
self.assertListEqual([], self._get_delayed_events())
def test_delayed_state_events_are_sent_on_timeout(self) -> None:
state_key = "to_send_on_timeout"
setter_key = "setter"
setter_expected = "on_timeout"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 900),
{
setter_key: setter_expected,
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
self.reactor.advance(1)
self.assertListEqual([], self._get_delayed_events())
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
def test_update_delayed_event_without_id(self) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/",
)
self.assertEqual(HTTPStatus.NOT_FOUND, channel.code, channel.result)
def test_update_delayed_event_without_body(self) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/abc",
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
Codes.NOT_JSON,
channel.json_body["errcode"],
)
def test_update_delayed_event_without_action(self) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/abc",
{},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
Codes.MISSING_PARAM,
channel.json_body["errcode"],
)
def test_update_delayed_event_with_invalid_action(self) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/abc",
{"action": "oops"},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
Codes.INVALID_PARAM,
channel.json_body["errcode"],
)
@parameterized.expand(["cancel", "restart", "send"])
def test_update_delayed_event_without_match(self, action: str) -> None:
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/abc",
{"action": action},
)
self.assertEqual(HTTPStatus.NOT_FOUND, channel.code, channel.result)
def test_cancel_delayed_state_event(self) -> None:
state_key = "to_never_send"
setter_key = "setter"
setter_expected = "none"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 1500),
{
setter_key: setter_expected,
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
self.assertIsNotNone(delay_id)
self.reactor.advance(1)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/{delay_id}",
{"action": "cancel"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertListEqual([], self._get_delayed_events())
self.reactor.advance(1)
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
def test_send_delayed_state_event(self) -> None:
state_key = "to_send_on_request"
setter_key = "setter"
setter_expected = "on_send"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 100000),
{
setter_key: setter_expected,
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
self.assertIsNotNone(delay_id)
self.reactor.advance(1)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/{delay_id}",
{"action": "send"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertListEqual([], self._get_delayed_events())
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
def test_restart_delayed_state_event(self) -> None:
state_key = "to_send_on_restarted_timeout"
setter_key = "setter"
setter_expected = "on_timeout"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 1500),
{
setter_key: setter_expected,
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
self.assertIsNotNone(delay_id)
self.reactor.advance(1)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/{delay_id}",
{"action": "restart"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.reactor.advance(1)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
content = self._get_delayed_event_content(events[0])
self.assertEqual(setter_expected, content.get(setter_key), content)
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
self.reactor.advance(1)
self.assertListEqual([], self._get_delayed_events())
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
def test_delayed_state_events_are_cancelled_by_more_recent_state(self) -> None:
state_key = "to_be_cancelled"
setter_key = "setter"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 900),
{
setter_key: "on_timeout",
},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
setter_expected = "manual"
self.helper.send_state(
self.room_id,
_EVENT_TYPE,
{
setter_key: setter_expected,
},
None,
state_key=state_key,
)
self.assertListEqual([], self._get_delayed_events())
self.reactor.advance(1)
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
def _get_delayed_events(self) -> List[JsonDict]:
channel = self.make_request(
"GET",
PATH_PREFIX,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
key = "delayed_events"
self.assertIn(key, channel.json_body)
events = channel.json_body[key]
self.assertIsInstance(events, list)
return events
def _get_delayed_event_content(self, event: JsonDict) -> JsonDict:
key = "content"
self.assertIn(key, event)
content = event[key]
self.assertIsInstance(content, dict)
return content
def _get_path_for_delayed_state(
room_id: str, event_type: str, state_key: str, delay_ms: int
) -> str:
return f"rooms/{room_id}/state/{event_type}/{state_key}?org.matrix.msc4140.delay={delay_ms}"


@@ -19,18 +19,12 @@
#
#
import unittest as stdlib_unittest
-from typing import TYPE_CHECKING
from typing_extensions import Literal
-from synapse._pydantic_compat import HAS_PYDANTIC_V2
+from synapse._pydantic_compat import BaseModel, ValidationError
from synapse.types.rest.client import EmailRequestTokenBody
-if TYPE_CHECKING or HAS_PYDANTIC_V2:
-    from pydantic.v1 import BaseModel, ValidationError
-else:
-    from pydantic import BaseModel, ValidationError
class ThreepidMediumEnumTestCase(stdlib_unittest.TestCase):
    class Model(BaseModel):


@@ -2291,6 +2291,106 @@ class RoomMessageFilterTestCase(RoomBase):
        self.assertEqual(len(chunk), 2, [event["content"] for event in chunk])
class RoomDelayedEventTestCase(RoomBase):
"""Tests delayed events."""
user_id = "@sid1:red"
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.room_id = self.helper.create_room_as(self.user_id)
@unittest.override_config({"max_event_delay_duration": "24h"})
def test_send_delayed_invalid_event(self) -> None:
"""Test sending a delayed event with invalid content."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertNotIn("org.matrix.msc4140.errcode", channel.json_body)
def test_delayed_event_unsupported_by_default(self) -> None:
"""Test that sending a delayed event is unsupported with the default config."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{"body": "test", "msgtype": "m.text"},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
"M_MAX_DELAY_UNSUPPORTED",
channel.json_body.get("org.matrix.msc4140.errcode"),
channel.json_body,
)
@unittest.override_config({"max_event_delay_duration": "1000"})
def test_delayed_event_exceeds_max_delay(self) -> None:
"""Test that sending a delayed event fails if its delay is longer than allowed."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{"body": "test", "msgtype": "m.text"},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
"M_MAX_DELAY_EXCEEDED",
channel.json_body.get("org.matrix.msc4140.errcode"),
channel.json_body,
)
@unittest.override_config({"max_event_delay_duration": "24h"})
def test_delayed_event_with_negative_delay(self) -> None:
"""Test that sending a delayed event fails if its delay is negative."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=-2000"
% self.room_id
).encode("ascii"),
{"body": "test", "msgtype": "m.text"},
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
Codes.INVALID_PARAM, channel.json_body["errcode"], channel.json_body
)
@unittest.override_config({"max_event_delay_duration": "24h"})
def test_send_delayed_message_event(self) -> None:
"""Test sending a valid delayed message event."""
channel = self.make_request(
"PUT",
(
"rooms/%s/send/m.room.message/mid1?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{"body": "test", "msgtype": "m.text"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
@unittest.override_config({"max_event_delay_duration": "24h"})
def test_send_delayed_state_event(self) -> None:
"""Test sending a valid delayed state event."""
channel = self.make_request(
"PUT",
(
"rooms/%s/state/m.room.topic/?org.matrix.msc4140.delay=2000"
% self.room_id
).encode("ascii"),
{"topic": "This is a topic"},
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
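
# Editorial sketch (hypothetical helper, not part of the diff): the 400
# responses asserted above carry an MSC4140-specific code alongside the
# standard `errcode`, so a client can distinguish "delays disabled" from
# "delay too long" like this.
def _delay_rejection_reason(json_body: dict) -> str:
    msc_code = json_body.get("org.matrix.msc4140.errcode")
    if msc_code == "M_MAX_DELAY_UNSUPPORTED":
        return "server does not allow delayed events (no max_event_delay_duration)"
    if msc_code == "M_MAX_DELAY_EXCEEDED":
        return "requested delay exceeds the server's configured maximum"
    return f"rejected for another reason: {json_body.get('errcode')}"
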
class RoomSearchTestCase(unittest.HomeserverTestCase):
    servlets = [
        synapse.rest.admin.register_servlets_for_client_rest_resource,


@@ -89,7 +89,7 @@ class TestResourceLimitsServerNotices(unittest.HomeserverTestCase):
            return_value="!something:localhost"
        )
        self._rlsn._store.add_tag_to_room = AsyncMock(return_value=None)  # type: ignore[method-assign]
-        self._rlsn._store.get_tags_for_room = AsyncMock(return_value={})  # type: ignore[method-assign]
+        self._rlsn._store.get_tags_for_room = AsyncMock(return_value={})
    @override_config({"hs_disabled": True})
    def test_maybe_send_server_notice_disabled_hs(self) -> None:


@@ -34,11 +34,11 @@ from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.storage.databases.main.events import DeltaState
from synapse.storage.databases.main.events_bg_updates import (
-    _BackgroundUpdates,
    _resolve_stale_data_in_sliding_sync_joined_rooms_table,
    _resolve_stale_data_in_sliding_sync_membership_snapshots_table,
)
from synapse.types import create_requester
+from synapse.types.storage import _BackgroundUpdates
from synapse.util import Clock
from tests.test_utils.event_injection import create_event

Some files were not shown because too many files have changed in this diff.