Compare commits


61 Commits

Author SHA1 Message Date
Dessalines 69b4c6647b Version 0.19.4-rc.5 2024-06-01 13:30:00 -04:00
renovate[bot] f7fe0d46fc
Update Rust crate console-subscriber to 0.2.0 (#4771)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-05-31 22:35:34 -04:00
renovate[bot] 609a6411a7
Update pnpm to v9.1.4 (#4770)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-05-31 22:24:17 -04:00
renovate[bot] 44666a34a2
Update dependency ts-jest to v29.1.4 (#4768)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-05-31 22:04:44 -04:00
renovate[bot] 6db878f761
Update dependency typescript to v5.4.5 (#4769)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-05-31 21:34:29 -04:00
renovate[bot] 6031709fcf
Update Rust crate serde to v1.0.203 (#4766)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-05-31 21:14:05 -04:00
renovate[bot] 4d9e38d875
Update asonix/pictrs Docker tag to v0.5.14 (#4767)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-05-31 21:13:39 -04:00
Dessalines 6a6c915014
Changing NodeInfo metadata to HashMap from vector. Fixes #4762 (#4764) 2024-05-31 16:38:46 -04:00
phiresky 96b7afc0b1
upgrade rust to 1.78 to fix diesel cli (#4761) 2024-05-31 08:39:45 -04:00
Felix Ableitner d2083f79d9 Version 0.19.4-rc.4 2024-05-30 11:55:34 +02:00
phiresky e8a7bb07a3
fix both permanent stopping of federation queues and multiple creation of the same federation queues (#4754)
Co-authored-by: Nutomic <me@nutomic.com>
2024-05-30 05:08:27 -04:00
Richard Schwab 91e57ff954
Prevent bot replies from increasing unread reply count when bot accounts are not shown (#4747)
* Prevent bot replies from increasing unread reply count when bot accounts are not shown

* Pass LocalUser for unread replies count query

* Prevent bot mentions from increasing unread reply count when bot accounts are not shown
2024-05-29 17:55:15 -04:00
phiresky 7d80a3c7d6
replace instanceid with domain (#4753) 2024-05-29 23:10:25 +02:00
Dessalines abcfa266af
Fixing slowness in saved post fetching. #4756 (#4758)
* Fixing slowness in saved post fetching. #4756

* Also fix comment_view.rs
2024-05-29 17:03:42 -04:00
SleeplessOne1917 51970ffc81
Update dependencies to alleviate cargo audit peer dependency vulnerability (#4750) 2024-05-28 17:47:21 -07:00
Dessalines fd6a1283a5 Version 0.19.4-rc.3 2024-05-27 09:37:58 -04:00
Nutomic af034f3b5e
Unit tests and cleanup for outgoing federation code (#4733)
* test setup

* code cleanup

* cleanup

* move stats to own file

* basic test working

* cleanup

* processes test

* more test cases

* fmt

* add file

* add assert

* error handling

* fmt

* use instance id instead of domain for stats channel
2024-05-27 09:34:58 -04:00
Dessalines 0d5db29bc9
After creating a comment, update the unread comments for the post. (#4742)
* After creating a comment, update the unread comments for the post.

- Fixes #3863

* Addressing PR comments.

* Add comment.

---------

Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>
2024-05-27 12:55:44 +02:00
dullbananas ec77c00ef8
Fix lost separation caused by comment width change (#4739)
* Update post_view.rs

* Update structs.rs

* Update worker.rs

* Update worker.rs
2024-05-23 14:05:35 -04:00
Dessalines 69bdcb3069 Version 0.19.4-rc.2 2024-05-23 12:10:33 -04:00
Dessalines 6a6108ac55
Fixing proxied images for federated posts. (#4737)
* Fixing proxied images for federated posts.

- Also added test.
- Fixes #4736

* Address PR comments.
2024-05-23 11:11:25 -04:00
Nutomic b2c1a14234
Correct url for nodeinfo version (#4734)
* Correct url for nodeinfo version

* add compat redirect

---------

Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>
2024-05-23 10:59:56 -04:00
Nutomic d8dc38eb06
Upgrade dependencies (#4740) 2024-05-23 10:55:20 -04:00
Nutomic c96017c009
Configure max comment width in clippy (#4738)
* Configure max comment width in clippy

* update default config
2024-05-23 08:46:26 -04:00
Dessalines 9aa565b559 Version 0.19.4-rc.1 2024-05-22 08:58:31 -04:00
Dessalines 7d7cd8ded4
Dont show replies / mentions from blocked users. Fixes #4227 (#4727)
* Dont show replies / mentions from blocked users. Fixes #4227

* Adding unit tests for reply and mention views.

- Also cleaned up some unwraps in the tests.

* Add allow deprecated to pass clippy for deprecated wav crate.

---------

Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>
2024-05-22 08:50:26 -04:00
Nutomic 943c31cc72
Allow passing command line params via environment (fixes #4603) (#4729)
* Allow passing command line params via environment (fixes #4603)

* add prefix

---------

Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>
2024-05-22 08:39:01 -04:00
Nutomic 973f39601c
Dont allow removing comment which was deleted (fixes #4731) (#4732) 2024-05-22 08:29:01 -04:00
Felix Ableitner a39c19c9db Version 0.19.4-beta.8 2024-05-22 10:30:38 +02:00
Dessalines 55f84dd38a
Fixing proxy images (#4722)
* Adding an image_details table to store image dimensions.

- Adds an image_details table, which stores the height,
  width, and content_type for local and remote images.
- For LocalImages, this information already comes back with
  the upload.
- For RemoteImages, it calls the pictrs details endpoint.
- Fixed some issues with proxying non-image urls.
- Fixes #3328
- Also fixes #4703

* Running sql format.

* Running fmt.

* Don't fetch metadata in background for local API requests.

* Dont export remote_image table to typescript.

* Cleaning up validate.

* Dont proxy url.

* Fixing tests, fixing issue with federated thumbnails.

* Fix tests.

* Updating corepack, fixing issue.

* Refactoring image inserts to use transactions.

* Use select exists again.

* Fixing imports.

* Fix test.

* Removing pointless backgrounded metadata generation version.

* Removing public pictrs details route.

* Fixing clippy.

* Fixing proxy image fetching. Fixes #4703

- This extracts only the proxy image fixes from #4704, leaving off
  thumbnails.

* Fix test.

* Addressing PR comments.

* Address PR comments 2.

---------

Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>
2024-05-22 10:28:47 +02:00
Nutomic 6b46a70535
Extra logging to debug duplicate activities (ref #4609) (#4726)
* Extra logging to debug duplicate activities (ref #4609)

* Fix logging for api tests

* fmt
2024-05-21 14:47:06 -04:00
Nutomic 4ffaa93431
Dont allow reusing password reset token, use normal rate limit (#4719)
* Dont allow reusing password reset token, use normal rate limit

* fix
2024-05-21 14:46:49 -04:00
flamingos-cant a0ad7806cb
Increase alt_text size to 1500 (#4724) 2024-05-17 13:03:19 -04:00
Nutomic 99aac07714
Mark database fields as sensitive so they dont show up in logs (#4720)
* Mark database fields as sensitive so they dont show up in logs

* add file

* fix test

* Update crates/apub/src/objects/person.rs

Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>

* Update crates/apub/src/objects/community.rs

Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>

* Update crates/apub/src/objects/instance.rs

Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>

---------

Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>
Co-authored-by: Dessalines <dessalines@users.noreply.github.com>
2024-05-16 20:41:57 -04:00
Dessalines 1a4aa3eaba
Stop using a diesel_cli docker image, use cargo-install in woodpecker. (#4723)
* Stop using a diesel_cli docker image, use cargo-binstall in woodpecker.

- The diesel_cli image is 500MB, and rebuilt daily. Much easier to use
  binstall to install it.

* Trying out a multiline var.

* Try sequence merges 1

* Try removing features.

* Try path fix.

* Abstracting diesel cli install.

* Try installing mysql and postgres.

* Try installing mysql and postgres 2.

* Try installing mysql and postgres 3.

* Try installing mysql and postgres 4.

* Try installing mysql and postgres 5.

* Try installing mysql and postgres 6.

* Try installing mysql and postgres 7.

* Try installing mysql and postgres 8.

* Try installing mysql and postgres 9.

* Try installing mysql and postgres 10.

* Try installing mysql and postgres 11.

* Try installing mysql and postgres 12.

* Try installing mysql and postgres 13.

* Add logging line.

* Add logging line 2.

* Add logging line 3.

* Add logging line 4.

* Removing binstall.

* Extract taplo into its own image, ignore binstall.

* Use a smaller taplo.

* taplo is the same image.
2024-05-16 16:41:36 -04:00
Nutomic 93c9a5f2b1
Dont federate post locking via Update activity (#4717)
* Dont federate post locking via Update activity

* cleanup

* add missing mod log entries

* update assets
2024-05-15 07:36:00 -04:00
Nutomic 9a9d518153
Fix import blocked objects (#4712)
* Allow importing partial backup (fixes #4672)

* Fetch blocked objects if not known locally (fixes #4669)

* extract helper fn

* add comment

* cleanup

* remove test

* fmt

* remove .ok()
2024-05-14 23:03:43 -04:00
Nutomic 7fb03c502e
Add test to ensure reports are sent to user's home instance (ref #4701) (#4711)
* Add test to ensure reports are sent to user's home instance (ref #4701)

* enable all tests

* set package-manager-strict=false
2024-05-14 22:48:24 -04:00
Nutomic 49bb17b583
Stricter rate limit for login (#4718) 2024-05-14 22:43:43 -04:00
Nutomic 723cb549d4
Allow importing partial backup (fixes #4672) (#4705)
* Allow importing partial backup (fixes #4672)

* Dont throw error on empty LocalUser::update

* fix tests
2024-05-14 22:37:30 -04:00
Nutomic 8b6a4c060e
Make nodeinfo standard compliant, upgrade to nodeinfo 2.1 (fixes #4702) (#4706)
* Always set activitypub protocol in nodeinfo response (fixes #4702)

* Add mandatory fields
2024-05-13 22:53:20 -04:00
Dessalines cb80980027 Version 0.19.4-beta.7 2024-05-11 13:51:09 -04:00
dullbananas c4fc3a8ede
Optimize stuff in attempt to fix high amount of locks, and fix comment_aggregates.child_count (#4696)
* separate triggers

* auto_explain.log_triggers=on

* Revert "auto_explain.log_triggers=on"

This reverts commit 078b2dbb9b.

* Revert "separate triggers"

This reverts commit 95600da4af.

* bring back migration

* re-order statements

* add comment about statement ordering

* no redundant updates

* optimize post_aggregates update in comment trigger

* set comment path in trigger

* update comment_aggregates.child_count using trigger

* move `LEFT JOIN post` to inner query

* clean up newest_comment_time_necro

* add down.sql
2024-05-09 08:18:55 -04:00
Nutomic b4f9ef24a5
Dont exit early when running only scheduled tasks (#4707)
* Dont exit early when running only scheduled tasks (fixes #4709)

* fix
2024-05-08 14:56:44 +02:00
Nutomic 866d752a3c
Instance.preferred_username should be optional (fixes #4701) (#4713) 2024-05-08 08:01:04 -04:00
Nutomic e0b1d0553d
Add timeout for processing incoming activities (#4708)
* Add timeout for processing incoming activities

* move to const
2024-05-08 08:00:55 -04:00
Nutomic 7c146272c3
Federate with wordpress, improvements for NodeBB, Discourse federation (#4692)
* Federate with wordpress

* upgrade apub lib with fix

* Also read post's community from `audience`

* cleanup

* cargo update

* upgrade apub lib

* add wordpress test activity
2024-05-07 16:20:43 -04:00
Nutomic cfdc732d3a
On registration set show_nsfw based on site.content_warning (#4616)
Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>
2024-05-07 16:18:58 -04:00
Tim Coombs 522f974e30
fix: use docker compose v2 (#4622)
* fix: use docker compose v2

* Using sudo tee.

* fix: correct postgres sed command

---------

Co-authored-by: Dessalines <tyhou13@gmx.com>
Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>
2024-05-07 11:41:40 +02:00
SleeplessOne1917 b152be7951
Update rustls (#4690)
* Update rustls

* Format code
2024-05-03 16:06:14 -04:00
SleeplessOne1917 485b0f1a54
Replace unmaintained encoding dep with maintained encoding_rs dep (#4694)
* Replace dependency on unmaintained encoding crate with dep on maintained encoding_rs crate

* Update lockfile

* Taplo format Cargo.toml

* Use better variable name

* Replace into_owned with into
2024-05-03 10:42:48 +00:00
Nutomic 7540b02723
Update activitypub library (#4691)
https://github.com/LemmyNet/activitypub-federation-rust/releases/tag/0.5.5
2024-05-02 12:46:34 -04:00
Nutomic 7746db4169
Testing and minor fix for federation with Discourse (#4628)
* Testing and minor fix for federation with Discourse

* prettier
2024-05-02 07:49:19 -04:00
Dessalines db2ce81fc4
Show trigger logging. #4681 (#4688) 2024-05-01 18:46:14 -04:00
renovate[bot] 4175a1af80
chore(deps): update rust crate serde_with to 3.8.1 (#4687)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-30 23:47:02 -04:00
renovate[bot] 563280456e
chore(deps): update pnpm to v9.0.6 (#4682)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-30 23:43:14 -04:00
Dessalines 2fecb7ecdf
Dont show own comments for liked and disliked_only. Fixes #4675 (#4679)
Co-authored-by: SleeplessOne1917 <28871516+SleeplessOne1917@users.noreply.github.com>
2024-04-30 23:26:55 -04:00
renovate[bot] 2c6f9c7fd5
chore(deps): update rust crate serde to 1.0.199 (#4684)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-30 23:17:23 -04:00
renovate[bot] e338e59868
fix(deps): update rust crate lettre to 0.11.7 (#4685)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-30 22:54:16 -04:00
renovate[bot] b0caa85ed4
chore(deps): update rust crate base64 to 0.22.1 (#4683)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-30 22:30:19 -04:00
Dessalines ad60d91f5c
Dont publish lemmy_db_perf to fix crates.io publish. Fixes #4678 (#4680) 2024-04-30 17:51:12 +00:00
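Most of the entries above are routine dependency bumps, but a few change data shapes on the wire. The NodeInfo fix (#4764), for instance, swaps the container behind the `metadata` field so that it serializes to a JSON object instead of an array. A minimal sketch of that before/after shape, assuming serde derives; the struct and field names are illustrative, not Lemmy's exact definitions:

use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::collections::HashMap;

#[derive(Serialize, Deserialize)]
struct NodeInfoBefore {
  // A Vec serializes to a JSON array, while the nodeinfo schema expects
  // `metadata` to be a free-form object (the mismatch behind #4762).
  metadata: Option<Vec<Value>>,
}

#[derive(Serialize, Deserialize)]
struct NodeInfoAfter {
  // A HashMap serializes to a JSON object: {"key": value, ...}
  metadata: Option<HashMap<String, Value>>,
}

fn main() {
  let after = NodeInfoAfter {
    metadata: Some(HashMap::from([("name".into(), Value::from("lemmy"))])),
  };
  // Prints {"metadata":{"name":"lemmy"}}
  println!("{}", serde_json::to_string(&after).unwrap());
}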
143 changed files with 4011 additions and 2806 deletions

.rustfmt.toml

@ -3,3 +3,5 @@ edition = "2021"
imports_layout = "HorizontalVertical"
imports_granularity = "Crate"
group_imports = "One"
wrap_comments = true
comment_width = 100

.woodpecker.yml

@ -2,7 +2,8 @@
# See https://github.com/woodpecker-ci/woodpecker/issues/1677
variables:
- &rust_image "rust:1.77"
- &rust_image "rust:1.78"
- &rust_nightly_image "rustlang/rust:nightly"
- &install_pnpm "corepack enable pnpm"
- &slow_check_paths
- event: pull_request
@ -24,15 +25,17 @@ variables:
"diesel.toml",
".gitmodules",
]
# Broken for cron jobs currently, see
# https://github.com/woodpecker-ci/woodpecker/issues/1716
# clone:
# git:
# image: woodpeckerci/plugin-git
# settings:
# recursive: true
# submodule_update_remote: true
- install_binstall: &install_binstall
- wget https://github.com/cargo-bins/cargo-binstall/releases/latest/download/cargo-binstall-x86_64-unknown-linux-musl.tgz
- tar -xvf cargo-binstall-x86_64-unknown-linux-musl.tgz
- cp cargo-binstall /usr/local/cargo/bin
- install_diesel_cli: &install_diesel_cli
- apt update && apt install -y lsb-release build-essential
- sh -c 'echo "deb https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
- wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
- apt update && apt install -y postgresql-client-16
- cargo install diesel_cli --no-default-features --features postgres
- export PATH="$CARGO_HOME/bin:$PATH"
steps:
prepare_repo:
@ -66,7 +69,7 @@ steps:
- event: pull_request
cargo_fmt:
image: rustlang/rust:nightly
image: *rust_nightly_image
environment:
# store cargo data in repo folder so that it gets cached between steps
CARGO_HOME: .cargo_home
@ -77,11 +80,9 @@ steps:
- event: pull_request
cargo_machete:
image: rustlang/rust:nightly
image: *rust_nightly_image
commands:
- wget https://github.com/cargo-bins/cargo-binstall/releases/latest/download/cargo-binstall-x86_64-unknown-linux-musl.tgz
- tar -xvf cargo-binstall-x86_64-unknown-linux-musl.tgz
- cp cargo-binstall /usr/local/cargo/bin
- <<: *install_binstall
- cargo binstall -y cargo-machete
- cargo machete
when:
@ -133,11 +134,12 @@ steps:
when: *slow_check_paths
check_diesel_schema:
image: willsquire/diesel-cli
image: *rust_image
environment:
CARGO_HOME: .cargo_home
DATABASE_URL: postgres://lemmy:password@database:5432/lemmy
commands:
- <<: *install_diesel_cli
- diesel migration run
- diesel print-schema --config-file=diesel.toml > tmp.schema
- diff tmp.schema crates/db_schema/src/schema.rs
@ -197,8 +199,8 @@ steps:
PGHOST: database
PGDATABASE: lemmy
commands:
- cargo install diesel_cli
- export PATH="$CARGO_HOME/bin:$PATH"
# Install diesel_cli
- <<: *install_diesel_cli
# Run all migrations
- diesel migration run
# Dump schema to before.sqldump (PostgreSQL apt repo is used to prevent pg_dump version mismatch error)
@ -276,7 +278,9 @@ steps:
publish_to_crates_io:
image: *rust_image
commands:
- cargo install cargo-workspaces
- <<: *install_binstall
# Install cargo-workspaces
- cargo binstall -y cargo-workspaces
- cp -r migrations crates/db_schema/
- cargo workspaces publish --token "$CARGO_API_TOKEN" --from-git --allow-dirty --no-verify --allow-branch "${CI_COMMIT_TAG}" --yes custom "${CI_COMMIT_TAG}"
secrets: [cargo_api_token]

Cargo.lock (generated, 2004 lines changed): file diff suppressed because it is too large.

Cargo.toml

@ -1,5 +1,5 @@
[workspace.package]
version = "0.19.4-beta.6"
version = "0.19.4-rc.5"
edition = "2021"
description = "A link aggregator for the fediverse"
license = "AGPL-3.0"
@ -67,8 +67,8 @@ members = [
[workspace.lints.clippy]
cast_lossless = "deny"
complexity = "deny"
correctness = "deny"
complexity = { level = "deny", priority = -1 }
correctness = { level = "deny", priority = -1 }
dbg_macro = "deny"
explicit_into_iter_loop = "deny"
explicit_iter_loop = "deny"
@ -79,37 +79,37 @@ inefficient_to_string = "deny"
items-after-statements = "deny"
manual_string_new = "deny"
needless_collect = "deny"
perf = "deny"
perf = { level = "deny", priority = -1 }
redundant_closure_for_method_calls = "deny"
style = "deny"
suspicious = "deny"
style = { level = "deny", priority = -1 }
suspicious = { level = "deny", priority = -1 }
uninlined_format_args = "allow"
unused_self = "deny"
unwrap_used = "deny"
[workspace.dependencies]
lemmy_api = { version = "=0.19.4-beta.6", path = "./crates/api" }
lemmy_api_crud = { version = "=0.19.4-beta.6", path = "./crates/api_crud" }
lemmy_apub = { version = "=0.19.4-beta.6", path = "./crates/apub" }
lemmy_utils = { version = "=0.19.4-beta.6", path = "./crates/utils", default-features = false }
lemmy_db_schema = { version = "=0.19.4-beta.6", path = "./crates/db_schema" }
lemmy_api_common = { version = "=0.19.4-beta.6", path = "./crates/api_common" }
lemmy_routes = { version = "=0.19.4-beta.6", path = "./crates/routes" }
lemmy_db_views = { version = "=0.19.4-beta.6", path = "./crates/db_views" }
lemmy_db_views_actor = { version = "=0.19.4-beta.6", path = "./crates/db_views_actor" }
lemmy_db_views_moderator = { version = "=0.19.4-beta.6", path = "./crates/db_views_moderator" }
lemmy_federate = { version = "=0.19.4-beta.6", path = "./crates/federate" }
activitypub_federation = { version = "0.5.4", default-features = false, features = [
lemmy_api = { version = "=0.19.4-rc.5", path = "./crates/api" }
lemmy_api_crud = { version = "=0.19.4-rc.5", path = "./crates/api_crud" }
lemmy_apub = { version = "=0.19.4-rc.5", path = "./crates/apub" }
lemmy_utils = { version = "=0.19.4-rc.5", path = "./crates/utils", default-features = false }
lemmy_db_schema = { version = "=0.19.4-rc.5", path = "./crates/db_schema" }
lemmy_api_common = { version = "=0.19.4-rc.5", path = "./crates/api_common" }
lemmy_routes = { version = "=0.19.4-rc.5", path = "./crates/routes" }
lemmy_db_views = { version = "=0.19.4-rc.5", path = "./crates/db_views" }
lemmy_db_views_actor = { version = "=0.19.4-rc.5", path = "./crates/db_views_actor" }
lemmy_db_views_moderator = { version = "=0.19.4-rc.5", path = "./crates/db_views_moderator" }
lemmy_federate = { version = "=0.19.4-rc.5", path = "./crates/federate" }
activitypub_federation = { version = "0.5.6", default-features = false, features = [
"actix-web",
] }
diesel = "2.1.6"
diesel_migrations = "2.1.0"
diesel-async = "0.4.1"
serde = { version = "1.0.198", features = ["derive"] }
serde_with = "3.7.0"
actix-web = { version = "4.5.1", default-features = false, features = [
serde = { version = "1.0.202", features = ["derive"] }
serde_with = "3.8.1"
actix-web = { version = "4.6.0", default-features = false, features = [
"macros",
"rustls",
"rustls-0_23",
"compress-brotli",
"compress-gzip",
"compress-zstd",
@ -128,25 +128,25 @@ clokwerk = "0.4.0"
doku = { version = "0.21.1", features = ["url-2"] }
bcrypt = "0.15.1"
chrono = { version = "0.4.38", features = ["serde"], default-features = false }
serde_json = { version = "1.0.116", features = ["preserve_order"] }
base64 = "0.22.0"
serde_json = { version = "1.0.117", features = ["preserve_order"] }
base64 = "0.22.1"
uuid = { version = "1.8.0", features = ["serde", "v4"] }
async-trait = "0.1.80"
captcha = "0.0.9"
anyhow = { version = "1.0.82", features = [
anyhow = { version = "1.0.86", features = [
"backtrace",
] } # backtrace is on by default on nightly, but not stable rust
diesel_ltree = "0.3.1"
typed-builder = "0.18.2"
serial_test = "2.0.0"
serial_test = "3.1.1"
tokio = { version = "1.37.0", features = ["full"] }
regex = "1.10.4"
once_cell = "1.19.0"
diesel-derive-newtype = "2.1.2"
diesel-derive-enum = { version = "2.1.0", features = ["postgres"] }
strum = "0.25.0"
strum_macros = "0.25.3"
itertools = "0.12.1"
strum = "0.26.2"
strum_macros = "0.26.2"
itertools = "0.13.0"
futures = "0.3.30"
http = "0.2.12"
rosetta-i18n = "0.1.3"
@ -157,15 +157,15 @@ ts-rs = { version = "7.1.1", features = [
"chrono-impl",
"no-serde-warnings",
] }
rustls = { version = "0.21.11", features = ["dangerous_configuration"] }
rustls = { version = "0.23.8", features = ["ring"] }
futures-util = "0.3.30"
tokio-postgres = "0.7.10"
tokio-postgres-rustls = "0.10.0"
tokio-postgres-rustls = "0.12.0"
urlencoding = "2.1.3"
enum-map = "2.7"
moka = { version = "0.12.7", features = ["future"] }
i-love-jesus = { version = "0.1.0" }
clap = { version = "4.5.4", features = ["derive"] }
clap = { version = "4.5.4", features = ["derive", "env"] }
pretty_assertions = "1.4.0"
[dependencies]
@ -194,17 +194,17 @@ clokwerk = { workspace = true }
serde_json = { workspace = true }
tracing-opentelemetry = { workspace = true, optional = true }
opentelemetry = { workspace = true, optional = true }
console-subscriber = { version = "0.1.10", optional = true }
console-subscriber = { version = "0.2.0", optional = true }
opentelemetry-otlp = { version = "0.12.0", optional = true }
pict-rs = { version = "0.5.13", optional = true }
pict-rs = { version = "0.5.14", optional = true }
tokio.workspace = true
actix-cors = "0.6.5"
actix-cors = "0.7.0"
futures-util = { workspace = true }
chrono = { workspace = true }
prometheus = { version = "0.13.3", features = ["process"] }
prometheus = { version = "0.13.4", features = ["process"] }
serial_test = { workspace = true }
clap = { workspace = true }
actix-web-prom = "0.7.0"
actix-web-prom = "0.8.0"
[dev-dependencies]
pretty_assertions = { workspace = true }
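One line in the diff above is a feature change rather than a plain version bump: `clap` now enables the `env` feature, which is what backs the "Allow passing command line params via environment (fixes #4603)" commit. A minimal sketch of the mechanism under that assumption; the flag and variable names are illustrative, and the commit's "add prefix" bullet only says that some environment prefix is applied:

use clap::Parser;

// Illustrative CLI, not Lemmy's actual option set.
#[derive(Parser, Debug)]
struct Cli {
  // With the `env` feature, this flag can be set either on the command
  // line or through the named environment variable.
  #[arg(long, env = "LEMMY_DISABLE_HTTP_SERVER")]
  disable_http_server: bool,
}

fn main() {
  // `LEMMY_DISABLE_HTTP_SERVER=true lemmy_server` now behaves like
  // `lemmy_server --disable-http-server`.
  let cli = Cli::parse();
  println!("http server disabled: {}", cli.disable_http_server);
}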

api_tests/.npmrc (new file, 1 line)

@ -0,0 +1 @@
package-manager-strict=false

api_tests/package.json

@ -6,11 +6,11 @@
"repository": "https://github.com/LemmyNet/lemmy",
"author": "Dessalines",
"license": "AGPL-3.0",
"packageManager": "pnpm@9.0.4",
"packageManager": "pnpm@9.1.4",
"scripts": {
"lint": "tsc --noEmit && eslint --report-unused-disable-directives --ext .js,.ts,.tsx src && prettier --check 'src/**/*.ts'",
"fix": "prettier --write src && eslint --fix src",
"api-test": "jest -i follow.spec.ts && jest -i post.spec.ts && jest -i comment.spec.ts && jest -i private_message.spec.ts && jest -i user.spec.ts && jest -i community.spec.ts && jest -i image.spec.ts",
"api-test": "jest -i follow.spec.ts && jest -i image.spec.ts && jest -i user.spec.ts && jest -i private_message.spec.ts && jest -i community.spec.ts && jest -i post.spec.ts && jest -i comment.spec.ts ",
"api-test-follow": "jest -i follow.spec.ts",
"api-test-comment": "jest -i comment.spec.ts",
"api-test-post": "jest -i post.spec.ts",

api_tests/pnpm-lock.yaml

@ -13,7 +13,7 @@ importers:
version: 29.5.12
'@types/node':
specifier: ^20.12.4
version: 20.12.7
version: 20.12.4
'@typescript-eslint/eslint-plugin':
specifier: ^7.5.0
version: 7.5.0(@typescript-eslint/parser@7.5.0(eslint@8.57.0)(typescript@5.4.5))(eslint@8.57.0)(typescript@5.4.5)
@ -31,7 +31,7 @@ importers:
version: 5.1.3(eslint@8.57.0)(prettier@3.2.5)
jest:
specifier: ^29.5.0
version: 29.7.0(@types/node@20.12.7)
version: 29.7.0(@types/node@20.12.4)
lemmy-js-client:
specifier: 0.19.4-alpha.18
version: 0.19.4-alpha.18
@ -40,7 +40,7 @@ importers:
version: 3.2.5
ts-jest:
specifier: ^29.1.0
version: 29.1.2(@babel/core@7.23.9)(@jest/types@29.6.3)(babel-jest@29.7.0(@babel/core@7.23.9))(jest@29.7.0(@types/node@20.12.7))(typescript@5.4.5)
version: 29.1.4(@babel/core@7.23.9)(@jest/transform@29.7.0)(@jest/types@29.6.3)(babel-jest@29.7.0(@babel/core@7.23.9))(jest@29.7.0(@types/node@20.12.4))(typescript@5.4.5)
typescript:
specifier: ^5.4.4
version: 5.4.5
@ -398,8 +398,8 @@ packages:
'@types/json-schema@7.0.15':
resolution: {integrity: sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==}
'@types/node@20.12.7':
resolution: {integrity: sha512-wq0cICSkRLVaf3UGLMGItu/PtdY7oaXaI/RVU+xliKVOtRna3PRY57ZDfztpDL0n11vfymMUnXv8QwYCO7L1wg==}
'@types/node@20.12.4':
resolution: {integrity: sha512-E+Fa9z3wSQpzgYQdYmme5X3OTuejnnTx88A6p6vkkJosR3KBz+HpE3kqNm98VE6cfLFcISx7zW7MsJkH6KwbTw==}
'@types/semver@7.5.8':
resolution: {integrity: sha512-I8EUhyrgfLrcTkzV3TSsGyl1tSuPrEDzr0yd5m90UgNxQkyDXULk3b6MlQqTCpZpNtWe1K0hzclnZkTcLBe2UQ==}
@ -865,6 +865,7 @@ packages:
glob@7.2.3:
resolution: {integrity: sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==}
deprecated: Glob versions prior to v9 are no longer supported
globals@11.12.0:
resolution: {integrity: sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==}
@ -922,6 +923,7 @@ packages:
inflight@1.0.6:
resolution: {integrity: sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==}
deprecated: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
inherits@2.0.4:
resolution: {integrity: sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==}
@ -1380,6 +1382,7 @@ packages:
rimraf@3.0.2:
resolution: {integrity: sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==}
deprecated: Rimraf versions prior to v4 are no longer supported
hasBin: true
run-parallel@1.2.0:
@ -1389,13 +1392,13 @@ packages:
resolution: {integrity: sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==}
hasBin: true
semver@7.5.4:
resolution: {integrity: sha512-1bCSESV6Pv+i21Hvpxp3Dx+pSD8lIPt8uVjRrxAUt/nbswYc+tK6Y2btiULjd4+fnq15PX+nqQDC7Oft7WkwcA==}
semver@7.6.0:
resolution: {integrity: sha512-EnwXhrlwXMk9gKu5/flx5sv/an57AkRplG3hTK68W7FRDN+k+OWBj65M7719OkA82XLBxrcX0KSHj+X5COhOVg==}
engines: {node: '>=10'}
hasBin: true
semver@7.6.0:
resolution: {integrity: sha512-EnwXhrlwXMk9gKu5/flx5sv/an57AkRplG3hTK68W7FRDN+k+OWBj65M7719OkA82XLBxrcX0KSHj+X5COhOVg==}
semver@7.6.2:
resolution: {integrity: sha512-FNAIBWCx9qcRhoHcgcJ0gvU7SN1lYU2ZXuSfl04bSC5OpvDHFyJCjdNHomPXxjQlCBU67YW64PzY7/VIEH7F2w==}
engines: {node: '>=10'}
hasBin: true
@ -1499,12 +1502,13 @@ packages:
peerDependencies:
typescript: '>=4.2.0'
ts-jest@29.1.2:
resolution: {integrity: sha512-br6GJoH/WUX4pu7FbZXuWGKGNDuU7b8Uj77g/Sp7puZV6EXzuByl6JrECvm0MzVzSTkSHWTihsXt+5XYER5b+g==}
engines: {node: ^16.10.0 || ^18.0.0 || >=20.0.0}
ts-jest@29.1.4:
resolution: {integrity: sha512-YiHwDhSvCiItoAgsKtoLFCuakDzDsJ1DLDnSouTaTmdOcOwIkSzbLXduaQ6M5DRVhuZC/NYaaZ/mtHbWMv/S6Q==}
engines: {node: ^14.15.0 || ^16.10.0 || ^18.0.0 || >=20.0.0}
hasBin: true
peerDependencies:
'@babel/core': '>=7.0.0-beta.0 <8'
'@jest/transform': ^29.0.0
'@jest/types': ^29.0.0
babel-jest: ^29.0.0
esbuild: '*'
@ -1513,6 +1517,8 @@ packages:
peerDependenciesMeta:
'@babel/core':
optional: true
'@jest/transform':
optional: true
'@jest/types':
optional: true
babel-jest:
@ -1857,7 +1863,7 @@ snapshots:
'@jest/console@29.7.0':
dependencies:
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
chalk: 4.1.2
jest-message-util: 29.7.0
jest-util: 29.7.0
@ -1870,14 +1876,14 @@ snapshots:
'@jest/test-result': 29.7.0
'@jest/transform': 29.7.0
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
ansi-escapes: 4.3.2
chalk: 4.1.2
ci-info: 3.9.0
exit: 0.1.2
graceful-fs: 4.2.11
jest-changed-files: 29.7.0
jest-config: 29.7.0(@types/node@20.12.7)
jest-config: 29.7.0(@types/node@20.12.4)
jest-haste-map: 29.7.0
jest-message-util: 29.7.0
jest-regex-util: 29.6.3
@ -1902,7 +1908,7 @@ snapshots:
dependencies:
'@jest/fake-timers': 29.7.0
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
jest-mock: 29.7.0
'@jest/expect-utils@29.7.0':
@ -1920,7 +1926,7 @@ snapshots:
dependencies:
'@jest/types': 29.6.3
'@sinonjs/fake-timers': 10.3.0
'@types/node': 20.12.7
'@types/node': 20.12.4
jest-message-util: 29.7.0
jest-mock: 29.7.0
jest-util: 29.7.0
@ -1942,7 +1948,7 @@ snapshots:
'@jest/transform': 29.7.0
'@jest/types': 29.6.3
'@jridgewell/trace-mapping': 0.3.22
'@types/node': 20.12.7
'@types/node': 20.12.4
chalk: 4.1.2
collect-v8-coverage: 1.0.2
exit: 0.1.2
@ -2012,7 +2018,7 @@ snapshots:
'@jest/schemas': 29.6.3
'@types/istanbul-lib-coverage': 2.0.6
'@types/istanbul-reports': 3.0.4
'@types/node': 20.12.7
'@types/node': 20.12.4
'@types/yargs': 17.0.32
chalk: 4.1.2
@ -2080,7 +2086,7 @@ snapshots:
'@types/graceful-fs@4.1.9':
dependencies:
'@types/node': 20.12.7
'@types/node': 20.12.4
'@types/istanbul-lib-coverage@2.0.6': {}
@ -2099,7 +2105,7 @@ snapshots:
'@types/json-schema@7.0.15': {}
'@types/node@20.12.7':
'@types/node@20.12.4':
dependencies:
undici-types: 5.26.5
@ -2173,7 +2179,7 @@ snapshots:
globby: 11.1.0
is-glob: 4.0.3
minimatch: 9.0.3
semver: 7.6.0
semver: 7.6.2
ts-api-utils: 1.3.0(typescript@5.4.5)
optionalDependencies:
typescript: 5.4.5
@ -2378,13 +2384,13 @@ snapshots:
convert-source-map@2.0.0: {}
create-jest@29.7.0(@types/node@20.12.7):
create-jest@29.7.0(@types/node@20.12.4):
dependencies:
'@jest/types': 29.6.3
chalk: 4.1.2
exit: 0.1.2
graceful-fs: 4.2.11
jest-config: 29.7.0(@types/node@20.12.7)
jest-config: 29.7.0(@types/node@20.12.4)
jest-util: 29.7.0
prompts: 2.4.2
transitivePeerDependencies:
@ -2716,7 +2722,7 @@ snapshots:
'@babel/parser': 7.23.9
'@istanbuljs/schema': 0.1.3
istanbul-lib-coverage: 3.2.2
semver: 7.5.4
semver: 7.6.2
transitivePeerDependencies:
- supports-color
@ -2751,7 +2757,7 @@ snapshots:
'@jest/expect': 29.7.0
'@jest/test-result': 29.7.0
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
chalk: 4.1.2
co: 4.6.0
dedent: 1.5.1
@ -2771,16 +2777,16 @@ snapshots:
- babel-plugin-macros
- supports-color
jest-cli@29.7.0(@types/node@20.12.7):
jest-cli@29.7.0(@types/node@20.12.4):
dependencies:
'@jest/core': 29.7.0
'@jest/test-result': 29.7.0
'@jest/types': 29.6.3
chalk: 4.1.2
create-jest: 29.7.0(@types/node@20.12.7)
create-jest: 29.7.0(@types/node@20.12.4)
exit: 0.1.2
import-local: 3.1.0
jest-config: 29.7.0(@types/node@20.12.7)
jest-config: 29.7.0(@types/node@20.12.4)
jest-util: 29.7.0
jest-validate: 29.7.0
yargs: 17.7.2
@ -2790,7 +2796,7 @@ snapshots:
- supports-color
- ts-node
jest-config@29.7.0(@types/node@20.12.7):
jest-config@29.7.0(@types/node@20.12.4):
dependencies:
'@babel/core': 7.23.9
'@jest/test-sequencer': 29.7.0
@ -2815,7 +2821,7 @@ snapshots:
slash: 3.0.0
strip-json-comments: 3.1.1
optionalDependencies:
'@types/node': 20.12.7
'@types/node': 20.12.4
transitivePeerDependencies:
- babel-plugin-macros
- supports-color
@ -2844,7 +2850,7 @@ snapshots:
'@jest/environment': 29.7.0
'@jest/fake-timers': 29.7.0
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
jest-mock: 29.7.0
jest-util: 29.7.0
@ -2854,7 +2860,7 @@ snapshots:
dependencies:
'@jest/types': 29.6.3
'@types/graceful-fs': 4.1.9
'@types/node': 20.12.7
'@types/node': 20.12.4
anymatch: 3.1.3
fb-watchman: 2.0.2
graceful-fs: 4.2.11
@ -2893,7 +2899,7 @@ snapshots:
jest-mock@29.7.0:
dependencies:
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
jest-util: 29.7.0
jest-pnp-resolver@1.2.3(jest-resolve@29.7.0):
@ -2928,7 +2934,7 @@ snapshots:
'@jest/test-result': 29.7.0
'@jest/transform': 29.7.0
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
chalk: 4.1.2
emittery: 0.13.1
graceful-fs: 4.2.11
@ -2956,7 +2962,7 @@ snapshots:
'@jest/test-result': 29.7.0
'@jest/transform': 29.7.0
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
chalk: 4.1.2
cjs-module-lexer: 1.2.3
collect-v8-coverage: 1.0.2
@ -2995,14 +3001,14 @@ snapshots:
jest-util: 29.7.0
natural-compare: 1.4.0
pretty-format: 29.7.0
semver: 7.5.4
semver: 7.6.2
transitivePeerDependencies:
- supports-color
jest-util@29.7.0:
dependencies:
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
chalk: 4.1.2
ci-info: 3.9.0
graceful-fs: 4.2.11
@ -3021,7 +3027,7 @@ snapshots:
dependencies:
'@jest/test-result': 29.7.0
'@jest/types': 29.6.3
'@types/node': 20.12.7
'@types/node': 20.12.4
ansi-escapes: 4.3.2
chalk: 4.1.2
emittery: 0.13.1
@ -3030,17 +3036,17 @@ snapshots:
jest-worker@29.7.0:
dependencies:
'@types/node': 20.12.7
'@types/node': 20.12.4
jest-util: 29.7.0
merge-stream: 2.0.0
supports-color: 8.1.1
jest@29.7.0(@types/node@20.12.7):
jest@29.7.0(@types/node@20.12.4):
dependencies:
'@jest/core': 29.7.0
'@jest/types': 29.6.3
import-local: 3.1.0
jest-cli: 29.7.0(@types/node@20.12.7)
jest-cli: 29.7.0(@types/node@20.12.4)
transitivePeerDependencies:
- '@types/node'
- babel-plugin-macros
@ -3109,7 +3115,7 @@ snapshots:
make-dir@4.0.0:
dependencies:
semver: 7.5.4
semver: 7.6.2
make-error@1.3.6: {}
@ -3273,14 +3279,12 @@ snapshots:
semver@6.3.1: {}
semver@7.5.4:
dependencies:
lru-cache: 6.0.0
semver@7.6.0:
dependencies:
lru-cache: 6.0.0
semver@7.6.2: {}
shebang-command@2.0.0:
dependencies:
shebang-regex: 3.0.0
@ -3366,20 +3370,21 @@ snapshots:
dependencies:
typescript: 5.4.5
ts-jest@29.1.2(@babel/core@7.23.9)(@jest/types@29.6.3)(babel-jest@29.7.0(@babel/core@7.23.9))(jest@29.7.0(@types/node@20.12.7))(typescript@5.4.5):
ts-jest@29.1.4(@babel/core@7.23.9)(@jest/transform@29.7.0)(@jest/types@29.6.3)(babel-jest@29.7.0(@babel/core@7.23.9))(jest@29.7.0(@types/node@20.12.4))(typescript@5.4.5):
dependencies:
bs-logger: 0.2.6
fast-json-stable-stringify: 2.1.0
jest: 29.7.0(@types/node@20.12.7)
jest: 29.7.0(@types/node@20.12.4)
jest-util: 29.7.0
json5: 2.2.3
lodash.memoize: 4.1.2
make-error: 1.3.6
semver: 7.5.4
semver: 7.6.2
typescript: 5.4.5
yargs-parser: 21.1.1
optionalDependencies:
'@babel/core': 7.23.9
'@jest/transform': 29.7.0
'@jest/types': 29.6.3
babel-jest: 29.7.0(@babel/core@7.23.9)

api_tests/prepare-drone-federation-test.sh

@ -3,19 +3,19 @@
# it is expected that this script is called by run-federation-test.sh script.
set -e
if [ -n "$LEMMY_LOG_LEVEL" ];
if [ -z "$LEMMY_LOG_LEVEL" ];
then
LEMMY_LOG_LEVEL=warn
LEMMY_LOG_LEVEL=info
fi
export RUST_BACKTRACE=1
#export RUST_LOG="warn,lemmy_server=$LEMMY_LOG_LEVEL,lemmy_federate=$LEMMY_LOG_LEVEL,lemmy_api=$LEMMY_LOG_LEVEL,lemmy_api_common=$LEMMY_LOG_LEVEL,lemmy_api_crud=$LEMMY_LOG_LEVEL,lemmy_apub=$LEMMY_LOG_LEVEL,lemmy_db_schema=$LEMMY_LOG_LEVEL,lemmy_db_views=$LEMMY_LOG_LEVEL,lemmy_db_views_actor=$LEMMY_LOG_LEVEL,lemmy_db_views_moderator=$LEMMY_LOG_LEVEL,lemmy_routes=$LEMMY_LOG_LEVEL,lemmy_utils=$LEMMY_LOG_LEVEL,lemmy_websocket=$LEMMY_LOG_LEVEL"
export RUST_LOG="warn,lemmy_server=$LEMMY_LOG_LEVEL,lemmy_federate=$LEMMY_LOG_LEVEL,lemmy_api=$LEMMY_LOG_LEVEL,lemmy_api_common=$LEMMY_LOG_LEVEL,lemmy_api_crud=$LEMMY_LOG_LEVEL,lemmy_apub=$LEMMY_LOG_LEVEL,lemmy_db_schema=$LEMMY_LOG_LEVEL,lemmy_db_views=$LEMMY_LOG_LEVEL,lemmy_db_views_actor=$LEMMY_LOG_LEVEL,lemmy_db_views_moderator=$LEMMY_LOG_LEVEL,lemmy_routes=$LEMMY_LOG_LEVEL,lemmy_utils=$LEMMY_LOG_LEVEL,lemmy_websocket=$LEMMY_LOG_LEVEL"
export LEMMY_TEST_FAST_FEDERATION=1 # by default, the persistent federation queue has delays in the scale of 30s-5min
# pictrs setup
if [ ! -f "api_tests/pict-rs" ]; then
curl "https://git.asonix.dog/asonix/pict-rs/releases/download/v0.5.0-beta.2/pict-rs-linux-amd64" -o api_tests/pict-rs
curl "https://git.asonix.dog/asonix/pict-rs/releases/download/v0.5.13/pict-rs-linux-amd64" -o api_tests/pict-rs
chmod +x api_tests/pict-rs
fi
./api_tests/pict-rs \

api_tests/src/comment.spec.ts

@ -37,15 +37,15 @@ import {
followCommunity,
blockCommunity,
delay,
saveUserSettings,
} from "./shared";
import { CommentView, CommunityView } from "lemmy-js-client";
import { CommentView, CommunityView, SaveUserSettings } from "lemmy-js-client";
let betaCommunity: CommunityView | undefined;
let postOnAlphaRes: PostResponse;
beforeAll(async () => {
await setupLogins();
await unfollows();
await Promise.all([followBeta(alpha), followBeta(gamma)]);
betaCommunity = (await resolveBetaCommunity(alpha)).community;
if (betaCommunity) {
@ -444,6 +444,59 @@ test("Reply to a comment from another instance, get notification", async () => {
assertCommentFederation(alphaReply, replyRes.comment_view);
});
test("Bot reply notifications are filtered when bots are hidden", async () => {
const newAlphaBot = await registerUser(alpha, alphaUrl);
let form: SaveUserSettings = {
bot_account: true,
};
await saveUserSettings(newAlphaBot, form);
const alphaCommunity = (
await resolveCommunity(alpha, "!main@lemmy-alpha:8541")
).community;
if (!alphaCommunity) {
throw "Missing alpha community";
}
await alpha.markAllAsRead();
form = {
show_bot_accounts: false,
};
await saveUserSettings(alpha, form);
const postOnAlphaRes = await createPost(alpha, alphaCommunity.community.id);
// Bot reply to alpha's post
let commentRes = await createComment(
newAlphaBot,
postOnAlphaRes.post_view.post.id,
);
expect(commentRes).toBeDefined();
let alphaUnreadCountRes = await getUnreadCount(alpha);
expect(alphaUnreadCountRes.replies).toBe(0);
let alphaUnreadRepliesRes = await getReplies(alpha, true);
expect(alphaUnreadRepliesRes.replies.length).toBe(0);
// This both restores the original state that may be expected by other tests
// implicitly and is used by the next steps to ensure replies are still
// returned when a user later decides to show bot accounts again.
form = {
show_bot_accounts: true,
};
await saveUserSettings(alpha, form);
alphaUnreadCountRes = await getUnreadCount(alpha);
expect(alphaUnreadCountRes.replies).toBe(1);
alphaUnreadRepliesRes = await getReplies(alpha, true);
expect(alphaUnreadRepliesRes.replies.length).toBe(1);
expect(alphaUnreadRepliesRes.replies[0].comment.id).toBe(
commentRes.comment_view.comment.id,
);
});
test("Mention beta from alpha", async () => {
if (!betaCommunity) throw Error("no community");
const postOnAlphaRes = await createPost(alpha, betaCommunity.community.id);

api_tests/src/community.spec.ts

@ -380,8 +380,8 @@ test("User blocks instance, communities are hidden", async () => {
test("Community follower count is federated", async () => {
// Follow the beta community from alpha
let community = await createCommunity(beta);
let community_id = community.community_view.community.actor_id;
let resolved = await resolveCommunity(alpha, community_id);
let communityActorId = community.community_view.community.actor_id;
let resolved = await resolveCommunity(alpha, communityActorId);
if (!resolved.community) {
throw "Missing beta community";
}
@ -389,7 +389,7 @@ test("Community follower count is federated", async () => {
await followCommunity(alpha, true, resolved.community.community.id);
let followed = (
await waitUntil(
() => resolveCommunity(alpha, community_id),
() => resolveCommunity(alpha, communityActorId),
c => c.community?.subscribed === "Subscribed",
)
).community;
@ -398,7 +398,7 @@ test("Community follower count is federated", async () => {
expect(followed?.counts.subscribers).toBe(1);
// Follow the community from gamma
resolved = await resolveCommunity(gamma, community_id);
resolved = await resolveCommunity(gamma, communityActorId);
if (!resolved.community) {
throw "Missing beta community";
}
@ -406,7 +406,7 @@ test("Community follower count is federated", async () => {
await followCommunity(gamma, true, resolved.community.community.id);
followed = (
await waitUntil(
() => resolveCommunity(gamma, community_id),
() => resolveCommunity(gamma, communityActorId),
c => c.community?.subscribed === "Subscribed",
)
).community;
@ -415,7 +415,7 @@ test("Community follower count is federated", async () => {
expect(followed?.counts?.subscribers).toBe(2);
// Follow the community from delta
resolved = await resolveCommunity(delta, community_id);
resolved = await resolveCommunity(delta, communityActorId);
if (!resolved.community) {
throw "Missing beta community";
}
@ -423,7 +423,7 @@ test("Community follower count is federated", async () => {
await followCommunity(delta, true, resolved.community.community.id);
followed = (
await waitUntil(
() => resolveCommunity(delta, community_id),
() => resolveCommunity(delta, communityActorId),
c => c.community?.subscribed === "Subscribed",
)
).community;

api_tests/src/image.spec.ts

@ -29,14 +29,17 @@ import {
unfollows,
getPost,
waitUntil,
randomString,
createPostWithThumbnail,
sampleImage,
sampleSite,
} from "./shared";
const downloadFileSync = require("download-file-sync");
beforeAll(setupLogins);
afterAll(unfollows);
afterAll(async () => {
await Promise.all([unfollows(), deleteAllImages(alpha)]);
});
test("Upload image and delete it", async () => {
// Before running this test, you need to delete all previous images in the DB
@ -159,7 +162,6 @@ test("Purge post, linked image removed", async () => {
expect(post.post_view.post.url).toBe(upload.url);
// purge post
const purgeForm: PurgePost = {
post_id: post.post_view.post.id,
};
@ -171,48 +173,94 @@ test("Purge post, linked image removed", async () => {
expect(content2).toBe("");
});
test("Images in remote post are proxied if setting enabled", async () => {
let user = await registerUser(beta, betaUrl);
test("Images in remote image post are proxied if setting enabled", async () => {
let community = await createCommunity(gamma);
const upload_form: UploadImage = {
image: Buffer.from("test"),
};
const upload = await user.uploadImage(upload_form);
let post = await createPost(
let postRes = await createPost(
gamma,
community.community_view.community.id,
upload.url,
"![](http://example.com/image2.png)",
sampleImage,
`![](${sampleImage})`,
);
expect(post.post_view.post).toBeDefined();
const post = postRes.post_view.post;
expect(post).toBeDefined();
// remote image gets proxied after upload
expect(
post.post_view.post.url?.startsWith(
post.thumbnail_url?.startsWith(
"http://lemmy-gamma:8561/api/v3/image_proxy?url",
),
).toBeTruthy();
expect(
post.post_view.post.body?.startsWith(
"![](http://lemmy-gamma:8561/api/v3/image_proxy?url",
),
post.body?.startsWith("![](http://lemmy-gamma:8561/api/v3/image_proxy?url"),
).toBeTruthy();
let epsilonPost = await resolvePost(epsilon, post.post_view.post);
expect(epsilonPost.post).toBeDefined();
// Make sure that it ends with jpg, to be sure its an image
expect(post.thumbnail_url?.endsWith(".jpg")).toBeTruthy();
let epsilonPostRes = await resolvePost(epsilon, postRes.post_view.post);
expect(epsilonPostRes.post).toBeDefined();
// Fetch the post again, the metadata should be backgrounded now
// Wait for the metadata to get fetched, since this is backgrounded now
let epsilonPostRes2 = await waitUntil(
() => getPost(epsilon, epsilonPostRes.post!.post.id),
p => p.post_view.post.thumbnail_url != undefined,
);
const epsilonPost = epsilonPostRes2.post_view.post;
// remote image gets proxied after federation
expect(
epsilonPost.post!.post.url?.startsWith(
epsilonPost.thumbnail_url?.startsWith(
"http://lemmy-epsilon:8581/api/v3/image_proxy?url",
),
).toBeTruthy();
expect(
epsilonPost.post!.post.body?.startsWith(
epsilonPost.body?.startsWith(
"![](http://lemmy-epsilon:8581/api/v3/image_proxy?url",
),
).toBeTruthy();
// Make sure that it ends with jpg, to be sure its an image
expect(epsilonPost.thumbnail_url?.endsWith(".jpg")).toBeTruthy();
});
test("Thumbnail of remote image link is proxied if setting enabled", async () => {
let community = await createCommunity(gamma);
let postRes = await createPost(
gamma,
community.community_view.community.id,
// The sample site metadata thumbnail ends in png
sampleSite,
);
const post = postRes.post_view.post;
expect(post).toBeDefined();
// remote image gets proxied after upload
expect(
post.thumbnail_url?.startsWith(
"http://lemmy-gamma:8561/api/v3/image_proxy?url",
),
).toBeTruthy();
// Make sure that it ends with png, to be sure its an image
expect(post.thumbnail_url?.endsWith(".png")).toBeTruthy();
let epsilonPostRes = await resolvePost(epsilon, postRes.post_view.post);
expect(epsilonPostRes.post).toBeDefined();
let epsilonPostRes2 = await waitUntil(
() => getPost(epsilon, epsilonPostRes.post!.post.id),
p => p.post_view.post.thumbnail_url != undefined,
);
const epsilonPost = epsilonPostRes2.post_view.post;
expect(
epsilonPost.thumbnail_url?.startsWith(
"http://lemmy-epsilon:8581/api/v3/image_proxy?url",
),
).toBeTruthy();
// Make sure that it ends with png, to be sure its an image
expect(epsilonPost.thumbnail_url?.endsWith(".png")).toBeTruthy();
});
test("No image proxying if setting is disabled", async () => {
@ -232,7 +280,7 @@ test("No image proxying if setting is disabled", async () => {
alpha,
community.community_view.community.id,
upload.url,
"![](http://example.com/image2.png)",
`![](${sampleImage})`,
);
expect(post.post_view.post).toBeDefined();
@ -240,7 +288,7 @@ test("No image proxying if setting is disabled", async () => {
expect(
post.post_view.post.url?.startsWith("http://127.0.0.1:8551/pictrs/image/"),
).toBeTruthy();
expect(post.post_view.post.body).toBe("![](http://example.com/image2.png)");
expect(post.post_view.post.body).toBe(`![](${sampleImage})`);
let betaPost = await waitForPost(
beta,
@ -253,8 +301,7 @@ test("No image proxying if setting is disabled", async () => {
expect(
betaPost.post.url?.startsWith("http://127.0.0.1:8551/pictrs/image/"),
).toBeTruthy();
expect(betaPost.post.body).toBe("![](http://example.com/image2.png)");
expect(betaPost.post.body).toBe(`![](${sampleImage})`);
// Make sure the alt text got federated
expect(post.post_view.post.alt_text).toBe(betaPost.post.alt_text);
});

api_tests/src/post.spec.ts

@ -48,7 +48,6 @@ beforeAll(async () => {
await setupLogins();
betaCommunity = (await resolveBetaCommunity(alpha)).community;
expect(betaCommunity).toBeDefined();
await unfollows();
});
afterAll(unfollows);
@ -83,10 +82,7 @@ async function assertPostFederation(postOne: PostView, postTwo: PostView) {
test("Create a post", async () => {
// Setup some allowlists and blocklists
let editSiteForm: EditSite = {
allowed_instances: ["lemmy-beta"],
};
await delta.editSite(editSiteForm);
const editSiteForm: EditSite = {};
editSiteForm.allowed_instances = [];
editSiteForm.blocked_instances = ["lemmy-alpha"];
@ -661,40 +657,60 @@ test("A and G subscribe to B (center) A posts, it gets announced to G", async ()
});
test("Report a post", async () => {
// Note, this is a different one from the setup
let betaCommunity = (await resolveBetaCommunity(beta)).community;
if (!betaCommunity) {
throw "Missing beta community";
}
// Create post from alpha
let alphaCommunity = (await resolveBetaCommunity(alpha)).community!;
await followBeta(alpha);
let postRes = await createPost(beta, betaCommunity.community.id);
let postRes = await createPost(alpha, alphaCommunity.community.id);
expect(postRes.post_view.post).toBeDefined();
let alphaPost = (await resolvePost(alpha, postRes.post_view.post)).post;
if (!alphaPost) {
throw "Missing alpha post";
}
let alphaReport = (
await reportPost(alpha, alphaPost.post.id, randomString(10))
).post_report_view.post_report;
// Send report from gamma
let gammaPost = (await resolvePost(gamma, alphaPost.post)).post!;
let gammaReport = (
await reportPost(gamma, gammaPost.post.id, randomString(10))
).post_report_view.post_report;
expect(gammaReport).toBeDefined();
// Report was federated to community instance
let betaReport = (await waitUntil(
() =>
listPostReports(beta).then(p =>
p.post_reports.find(
r =>
r.post_report.original_post_name === alphaReport.original_post_name,
r.post_report.original_post_name === gammaReport.original_post_name,
),
),
res => !!res,
))!.post_report;
expect(betaReport).toBeDefined();
expect(betaReport.resolved).toBe(false);
expect(betaReport.original_post_name).toBe(alphaReport.original_post_name);
expect(betaReport.original_post_url).toBe(alphaReport.original_post_url);
expect(betaReport.original_post_body).toBe(alphaReport.original_post_body);
expect(betaReport.reason).toBe(alphaReport.reason);
expect(betaReport.original_post_name).toBe(gammaReport.original_post_name);
//expect(betaReport.original_post_url).toBe(gammaReport.original_post_url);
expect(betaReport.original_post_body).toBe(gammaReport.original_post_body);
expect(betaReport.reason).toBe(gammaReport.reason);
await unfollowRemotes(alpha);
// Report was federated to poster's instance
let alphaReport = (await waitUntil(
() =>
listPostReports(alpha).then(p =>
p.post_reports.find(
r =>
r.post_report.original_post_name === gammaReport.original_post_name,
),
),
res => !!res,
))!.post_report;
expect(alphaReport).toBeDefined();
expect(alphaReport.resolved).toBe(false);
expect(alphaReport.original_post_name).toBe(gammaReport.original_post_name);
//expect(alphaReport.original_post_url).toBe(gammaReport.original_post_url);
expect(alphaReport.original_post_body).toBe(gammaReport.original_post_body);
expect(alphaReport.reason).toBe(gammaReport.reason);
});
test("Fetch post via redirect", async () => {
@ -729,7 +745,7 @@ test("Block post that contains banned URL", async () => {
await epsilon.editSite(editSiteForm);
await delay(500);
await delay();
if (!betaCommunity) {
throw "Missing beta community";

api_tests/src/shared.ts

@ -81,21 +81,24 @@ import { ListingType } from "lemmy-js-client/dist/types/ListingType";
export const fetchFunction = fetch;
export const imageFetchLimit = 50;
export const sampleImage =
"https://i.pinimg.com/originals/df/5f/5b/df5f5b1b174a2b4b6026cc6c8f9395c1.jpg";
export const sampleSite = "https://yahoo.com";
export let alphaUrl = "http://127.0.0.1:8541";
export let betaUrl = "http://127.0.0.1:8551";
export let gammaUrl = "http://127.0.0.1:8561";
export let deltaUrl = "http://127.0.0.1:8571";
export let epsilonUrl = "http://127.0.0.1:8581";
export const alphaUrl = "http://127.0.0.1:8541";
export const betaUrl = "http://127.0.0.1:8551";
export const gammaUrl = "http://127.0.0.1:8561";
export const deltaUrl = "http://127.0.0.1:8571";
export const epsilonUrl = "http://127.0.0.1:8581";
export let alpha = new LemmyHttp(alphaUrl, { fetchFunction });
export let alphaImage = new LemmyHttp(alphaUrl);
export let beta = new LemmyHttp(betaUrl, { fetchFunction });
export let gamma = new LemmyHttp(gammaUrl, { fetchFunction });
export let delta = new LemmyHttp(deltaUrl, { fetchFunction });
export let epsilon = new LemmyHttp(epsilonUrl, { fetchFunction });
export const alpha = new LemmyHttp(alphaUrl, { fetchFunction });
export const alphaImage = new LemmyHttp(alphaUrl);
export const beta = new LemmyHttp(betaUrl, { fetchFunction });
export const gamma = new LemmyHttp(gammaUrl, { fetchFunction });
export const delta = new LemmyHttp(deltaUrl, { fetchFunction });
export const epsilon = new LemmyHttp(epsilonUrl, { fetchFunction });
export let betaAllowedInstances = [
export const betaAllowedInstances = [
"lemmy-alpha",
"lemmy-gamma",
"lemmy-delta",
@ -180,6 +183,10 @@ export async function setupLogins() {
];
await gamma.editSite(editSiteForm);
// Setup delta allowed instance
editSiteForm.allowed_instances = ["lemmy-beta"];
await delta.editSite(editSiteForm);
// Create the main alpha/beta communities
// Ignore thrown errors of duplicates
try {
@ -357,10 +364,13 @@ export async function getUnreadCount(
return api.getUnreadCount();
}
export async function getReplies(api: LemmyHttp): Promise<GetRepliesResponse> {
export async function getReplies(
api: LemmyHttp,
unread_only: boolean = false,
): Promise<GetRepliesResponse> {
let form: GetReplies = {
sort: "New",
unread_only: false,
unread_only,
};
return api.getReplies(form);
}
@ -693,8 +703,8 @@ export async function saveUserSettingsBio(
export async function saveUserSettingsFederated(
api: LemmyHttp,
): Promise<SuccessResponse> {
let avatar = "https://image.flaticon.com/icons/png/512/35/35896.png";
let banner = "https://image.flaticon.com/icons/png/512/36/35896.png";
let avatar = sampleImage;
let banner = sampleImage;
let bio = "a changed bio";
let form: SaveUserSettings = {
show_nsfw: false,
@ -760,6 +770,7 @@ export async function unfollowRemotes(
await Promise.all(
remoteFollowed.map(cu => followCommunity(api, false, cu.community.id)),
);
let siteRes = await getSite(api);
return siteRes;
}
@ -889,14 +900,17 @@ export async function deleteAllImages(api: LemmyHttp) {
limit: imageFetchLimit,
});
imagesRes.images;
for (const image of imagesRes.images) {
const form: DeleteImage = {
token: image.local_image.pictrs_delete_token,
filename: image.local_image.pictrs_alias,
};
await api.deleteImage(form);
}
Promise.all(
imagesRes.images
.map(image => {
const form: DeleteImage = {
token: image.local_image.pictrs_delete_token,
filename: image.local_image.pictrs_alias,
};
return form;
})
.map(form => api.deleteImage(form)),
);
}
export async function unfollows() {
@ -907,6 +921,24 @@ export async function unfollows() {
unfollowRemotes(delta),
unfollowRemotes(epsilon),
]);
await Promise.all([
purgeAllPosts(alpha),
purgeAllPosts(beta),
purgeAllPosts(gamma),
purgeAllPosts(delta),
purgeAllPosts(epsilon),
]);
}
export async function purgeAllPosts(api: LemmyHttp) {
// The best way to get all federated items, is to find the posts
let res = await api.getPosts({ type_: "All", limit: 50 });
await Promise.all(
Array.from(new Set(res.posts.map(p => p.post.id)))
.map(post_id => api.purgePost({ post_id }))
// Ignore errors
.map(p => p.catch(e => e)),
);
}
export function getCommentParentId(comment: Comment): number | undefined {

config/defaults.hjson

@ -47,7 +47,8 @@
#
# To be removed in 0.20
cache_external_link_previews: true
# Specifies how to handle remote images, so that users don't have to connect directly to remote servers.
# Specifies how to handle remote images, so that users don't have to connect directly to remote
# servers.
image_mode:
# Leave images unchanged, don't generate any local thumbnails for post urls. Instead the
# Opengraph image is directly returned as thumbnail
@ -64,10 +65,11 @@
# or
# If enabled, all images from remote domains are rewritten to pass through `/api/v3/image_proxy`,
# including embedded images in markdown. Images are stored temporarily in pict-rs for caching.
# This improves privacy as users don't expose their IP to untrusted servers, and decreases load
# on other servers. However it increases bandwidth use for the local server.
# If enabled, all images from remote domains are rewritten to pass through
# `/api/v3/image_proxy`, including embedded images in markdown. Images are stored temporarily
# in pict-rs for caching. This improves privacy as users don't expose their IP to untrusted
# servers, and decreases load on other servers. However it increases bandwidth use for the
# local server.
#
# Requires pict-rs 0.5
"ProxyAllImages"

crates/api/Cargo.toml

@ -33,7 +33,7 @@ anyhow = { workspace = true }
tracing = { workspace = true }
chrono = { workspace = true }
url = { workspace = true }
wav = "1.0.0"
wav = "1.0.1"
sitemap-rs = "0.2.1"
totp-rs = { version = "5.5.1", features = ["gen_secret", "otpauth"] }
actix-web-httpauth = "0.8.1"

crates/api/src/lib.rs

@ -44,6 +44,7 @@ pub mod site;
pub mod sitemap;
/// Converts the captcha to a base64 encoded wav audio file
#[allow(deprecated)]
pub(crate) fn captcha_as_wav_base64(captcha: &Captcha) -> LemmyResult<String> {
let letters = captcha.as_wav();

crates/api/src/local_user/add_admin.rs

@ -29,7 +29,7 @@ pub async fn add_admin(
.await?
.ok_or(LemmyErrorType::ObjectNotLocal)?;
let added_admin = LocalUser::update(
LocalUser::update(
&mut context.pool(),
added_local_user.local_user.id,
&LocalUserUpdateForm {
@ -43,7 +43,7 @@ pub async fn add_admin(
// Mod tables
let form = ModAddForm {
mod_person_id: local_user_view.person.id,
other_person_id: added_admin.person_id,
other_person_id: added_local_user.person.id,
removed: Some(!data.added),
};


@ -19,7 +19,7 @@ pub async fn change_password_after_reset(
) -> LemmyResult<Json<SuccessResponse>> {
// Fetch the user_id from the token
let token = data.token.clone();
let local_user_id = PasswordResetRequest::read_from_token(&mut context.pool(), &token)
let local_user_id = PasswordResetRequest::read_and_delete(&mut context.pool(), &token)
.await?
.ok_or(LemmyErrorType::TokenNotFound)?
.local_user_id;


@ -1,11 +1,7 @@
use crate::{build_totp_2fa, generate_totp_2fa_secret};
use activitypub_federation::config::Data;
use actix_web::web::Json;
use lemmy_api_common::{
context::LemmyContext,
person::GenerateTotpSecretResponse,
sensitive::Sensitive,
};
use lemmy_api_common::{context::LemmyContext, person::GenerateTotpSecretResponse};
use lemmy_db_schema::source::local_user::{LocalUser, LocalUserUpdateForm};
use lemmy_db_views::structs::{LocalUserView, SiteView};
use lemmy_utils::error::{LemmyErrorType, LemmyResult};
@ -41,6 +37,6 @@ pub async fn generate_totp_secret(
.await?;
Ok(Json(GenerateTotpSecretResponse {
totp_secret_url: Sensitive::new(secret_url),
totp_secret_url: secret_url.into(),
}))
}


@ -11,9 +11,12 @@ pub async fn unread_count(
) -> LemmyResult<Json<GetUnreadCountResponse>> {
let person_id = local_user_view.person.id;
let replies = CommentReplyView::get_unread_replies(&mut context.pool(), person_id).await?;
let replies =
CommentReplyView::get_unread_replies(&mut context.pool(), &local_user_view.local_user).await?;
let mentions = PersonMentionView::get_unread_mentions(&mut context.pool(), person_id).await?;
let mentions =
PersonMentionView::get_unread_mentions(&mut context.pool(), &local_user_view.local_user)
.await?;
let private_messages =
PrivateMessageView::get_unread_messages(&mut context.pool(), person_id).await?;


@ -6,7 +6,6 @@ use lemmy_api_common::{
utils::send_password_reset_email,
SuccessResponse,
};
use lemmy_db_schema::source::password_reset_request::PasswordResetRequest;
use lemmy_db_views::structs::{LocalUserView, SiteView};
use lemmy_utils::error::{LemmyErrorType, LemmyResult};
@ -21,15 +20,6 @@ pub async fn reset_password(
.await?
.ok_or(LemmyErrorType::IncorrectLogin)?;
// Check for too many attempts (to limit potential abuse)
let recent_resets_count = PasswordResetRequest::get_recent_password_resets_count(
&mut context.pool(),
local_user_view.local_user.id,
)
.await?;
if recent_resets_count >= 3 {
Err(LemmyErrorType::PasswordResetLimitReached)?
}
let site_view = SiteView::read_local(&mut context.pool())
.await?
.ok_or(LemmyErrorType::LocalSiteNotSetup)?;


@ -28,6 +28,7 @@ use lemmy_utils::{
error::{LemmyErrorType, LemmyResult},
utils::validation::{is_valid_bio_field, is_valid_display_name, is_valid_matrix_id},
};
use std::ops::Deref;
#[tracing::instrument(skip(context))]
pub async fn save_user_settings(
@ -57,7 +58,7 @@ pub async fn save_user_settings(
if let Some(Some(email)) = &email {
let previous_email = local_user_view.local_user.email.clone().unwrap_or_default();
// if email was changed, check that it is not taken and send verification mail
if &previous_email != email {
if previous_email.deref() != email {
if LocalUser::is_email_taken(&mut context.pool(), email).await? {
return Err(LemmyErrorType::EmailAlreadyExists)?;
}
@ -71,7 +72,8 @@ pub async fn save_user_settings(
}
}
// When the site requires email, make sure email is not Some(None). IE, an overwrite to a None value
// When the site requires email, make sure email is not Some(None). IE, an overwrite to a None
// value
if let Some(email) = &email {
if email.is_none() && site_view.local_site.require_email_verification {
Err(LemmyErrorType::EmailRequired)?
@ -141,11 +143,7 @@ pub async fn save_user_settings(
..Default::default()
};
// Ignore errors, because 'no fields updated' will return an error.
// https://github.com/LemmyNet/lemmy/issues/4076
LocalUser::update(&mut context.pool(), local_user_id, &local_user_form)
.await
.ok();
LocalUser::update(&mut context.pool(), local_user_id, &local_user_form).await?;
// Update the vote display modes
let vote_display_modes_form = LocalUserVoteDisplayModeUpdateForm {


@ -9,12 +9,10 @@ use lemmy_db_schema::{
source::{
email_verification::EmailVerification,
local_user::{LocalUser, LocalUserUpdateForm},
person::Person,
},
traits::Crud,
RegistrationMode,
};
use lemmy_db_views::structs::SiteView;
use lemmy_db_views::structs::{LocalUserView, SiteView};
use lemmy_utils::error::{LemmyErrorType, LemmyResult};
pub async fn verify_email(
@ -38,7 +36,7 @@ pub async fn verify_email(
};
let local_user_id = verification.local_user_id;
let local_user = LocalUser::update(&mut context.pool(), local_user_id, &form).await?;
LocalUser::update(&mut context.pool(), local_user_id, &form).await?;
EmailVerification::delete_old_tokens_for_local_user(&mut context.pool(), local_user_id).await?;
@ -46,12 +44,16 @@ pub async fn verify_email(
if site_view.local_site.registration_mode == RegistrationMode::RequireApplication
&& site_view.local_site.application_email_admins
{
let person = Person::read(&mut context.pool(), local_user.person_id)
let local_user = LocalUserView::read(&mut context.pool(), local_user_id)
.await?
.ok_or(LemmyErrorType::CouldntFindPerson)?;
send_new_applicant_email_to_admins(&person.name, &mut context.pool(), context.settings())
.await?;
send_new_applicant_email_to_admins(
&local_user.person.name,
&mut context.pool(),
context.settings(),
)
.await?;
}
Ok(Json(SuccessResponse::default()))


@ -68,7 +68,6 @@ pub async fn like_post(
.with_lemmy_type(LemmyErrorType::CouldntLikePost)?;
}
// Mark the post as read
mark_post_as_read(person_id, post_id, &mut context.pool()).await?;
let community = Community::read(&mut context.pool(), post.community_id)


@ -38,7 +38,6 @@ pub async fn save_post(
.await?
.ok_or(LemmyErrorType::CouldntFindPost)?;
// Mark the post as read
mark_post_as_read(person_id, post_id, &mut context.pool()).await?;
Ok(Json(PostResponse { post_view }))


@ -25,7 +25,7 @@ full = [
"lemmy_db_views_moderator/full",
"lemmy_utils/full",
"activitypub_federation",
"encoding",
"encoding_rs",
"reqwest-middleware",
"webpage",
"ts-rs",
@ -66,13 +66,13 @@ actix-web = { workspace = true, optional = true }
enum-map = { workspace = true }
urlencoding = { workspace = true }
mime = { version = "0.3.17", optional = true }
webpage = { version = "1.6", default-features = false, features = [
webpage = { version = "2.0", default-features = false, features = [
"serde",
], optional = true }
encoding = { version = "0.2.33", optional = true }
jsonwebtoken = { version = "8.3.0", optional = true }
encoding_rs = { version = "0.8.34", optional = true }
jsonwebtoken = { version = "9.3.0", optional = true }
# necessary for wasmt compilation
getrandom = { version = "0.2.14", features = ["js"] }
getrandom = { version = "0.2.15", features = ["js"] }
[package.metadata.cargo-machete]
ignored = ["getrandom"]


@ -121,7 +121,8 @@ pub async fn send_local_notifs(
if let Ok(Some(mention_user_view)) = user_view {
// TODO
// At some point, make it so you can't tag the parent creator either
// Potential duplication of notifications, one for reply and the other for mention, is handled below by checking recipient ids
// Potential duplication of notifications, one for reply and the other for mention, is handled
// below by checking recipient ids
recipient_ids.push(mention_user_view.local_user.id);
let user_mention_form = PersonMentionInsertForm {


@ -1,9 +1,10 @@
use crate::{context::LemmyContext, sensitive::Sensitive};
use crate::context::LemmyContext;
use actix_web::{http::header::USER_AGENT, HttpRequest};
use chrono::Utc;
use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
use lemmy_db_schema::{
newtypes::LocalUserId,
sensitive::SensitiveString,
source::login_token::{LoginToken, LoginTokenCreateForm},
};
use lemmy_utils::error::{LemmyErrorExt, LemmyErrorType, LemmyResult};
@ -40,7 +41,7 @@ impl Claims {
user_id: LocalUserId,
req: HttpRequest,
context: &LemmyContext,
) -> LemmyResult<Sensitive<String>> {
) -> LemmyResult<SensitiveString> {
let hostname = context.settings().hostname.clone();
let my_claims = Claims {
sub: user_id.0.to_string(),
@ -50,7 +51,7 @@ impl Claims {
let secret = &context.secret().jwt_secret;
let key = EncodingKey::from_secret(secret.as_ref());
let token = encode(&Header::default(), &my_claims, &key)?;
let token: SensitiveString = encode(&Header::default(), &my_claims, &key)?.into();
let ip = req
.connection_info()
.realip_remote_addr()
@ -67,7 +68,7 @@ impl Claims {
user_agent,
};
LoginToken::create(&mut context.pool(), form).await?;
Ok(Sensitive::new(token))
Ok(token)
}
}
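Not part of the diff: a minimal, self-contained sketch of the jsonwebtoken 9 encode/decode round-trip that the code above relies on. The claim fields below are illustrative stand-ins, not Lemmy's actual Claims struct.

use jsonwebtoken::{decode, encode, DecodingKey, EncodingKey, Header, Validation};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct DemoClaims {
  sub: String, // user id, serialized to a string as in the code above
  iss: String, // issuing hostname
  iat: i64,    // issued-at timestamp
}

fn roundtrip() -> Result<(), jsonwebtoken::errors::Error> {
  let secret = b"demo-secret"; // illustrative; Lemmy reads the real secret from its Secret struct
  let claims = DemoClaims {
    sub: "42".into(),
    iss: "example.com".into(),
    iat: 0,
  };
  let token = encode(&Header::default(), &claims, &EncodingKey::from_secret(secret))?;
  // These demo claims carry no `exp`, so disable expiry validation for the example.
  let mut validation = Validation::default();
  validation.required_spec_claims.clear();
  validation.validate_exp = false;
  let data = decode::<DemoClaims>(&token, &DecodingKey::from_secret(secret), &validation)?;
  assert_eq!(data.claims.sub, "42");
  Ok(())
}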


@ -64,7 +64,7 @@ impl LemmyContext {
let client = ClientBuilder::new(client).build();
let secret = Secret {
id: 0,
jwt_secret: String::new(),
jwt_secret: String::new().into(),
};
let rate_limit_cell = RateLimitCell::with_test_config();


@ -14,7 +14,6 @@ pub mod private_message;
pub mod request;
#[cfg(feature = "full")]
pub mod send_activity;
pub mod sensitive;
pub mod site;
#[cfg(feature = "full")]
pub mod utils;


@ -1,6 +1,6 @@
use crate::sensitive::Sensitive;
use lemmy_db_schema::{
newtypes::{CommentReplyId, CommunityId, LanguageId, PersonId, PersonMentionId},
sensitive::SensitiveString,
source::site::Site,
CommentSortType,
ListingType,
@ -25,8 +25,8 @@ use ts_rs::TS;
#[cfg_attr(feature = "full", ts(export))]
/// Logging into lemmy.
pub struct Login {
pub username_or_email: Sensitive<String>,
pub password: Sensitive<String>,
pub username_or_email: SensitiveString,
pub password: SensitiveString,
/// May be required, if totp is enabled for their account.
pub totp_2fa_token: Option<String>,
}
@ -38,11 +38,11 @@ pub struct Login {
/// Register / Sign up to lemmy.
pub struct Register {
pub username: String,
pub password: Sensitive<String>,
pub password_verify: Sensitive<String>,
pub show_nsfw: bool,
pub password: SensitiveString,
pub password_verify: SensitiveString,
pub show_nsfw: Option<bool>,
/// email is mandatory if email verification is enabled on the server
pub email: Option<Sensitive<String>>,
pub email: Option<SensitiveString>,
/// The UUID of the captcha item.
pub captcha_uuid: Option<String>,
/// Your captcha answer.
@ -99,7 +99,7 @@ pub struct SaveUserSettings {
/// Your display name, which can contain strange characters, and does not need to be unique.
pub display_name: Option<String>,
/// Your email.
pub email: Option<Sensitive<String>>,
pub email: Option<SensitiveString>,
/// Your bio / info, in markdown.
pub bio: Option<String>,
/// Your matrix user id. Ex: @my_user:matrix.org
@ -124,7 +124,8 @@ pub struct SaveUserSettings {
pub post_listing_mode: Option<PostListingMode>,
/// Whether to allow keyboard navigation (for browsing and interacting with posts and comments).
pub enable_keyboard_navigation: Option<bool>,
/// Whether user avatars or inline images in the UI that are gifs should be allowed to play or should be paused
/// Whether user avatars or inline images in the UI that are gifs should be allowed to play or
/// should be paused
pub enable_animated_images: Option<bool>,
/// Whether to auto-collapse bot comments.
pub collapse_bot_comments: Option<bool>,
@ -140,9 +141,9 @@ pub struct SaveUserSettings {
#[cfg_attr(feature = "full", ts(export))]
/// Changes your account password.
pub struct ChangePassword {
pub new_password: Sensitive<String>,
pub new_password_verify: Sensitive<String>,
pub old_password: Sensitive<String>,
pub new_password: SensitiveString,
pub new_password_verify: SensitiveString,
pub old_password: SensitiveString,
}
#[skip_serializing_none]
@ -151,8 +152,9 @@ pub struct ChangePassword {
#[cfg_attr(feature = "full", ts(export))]
/// A response for your login.
pub struct LoginResponse {
/// This is None in response to `Register` if email verification is enabled, or the server requires registration applications.
pub jwt: Option<Sensitive<String>>,
/// This is None in response to `Register` if email verification is enabled, or the server
/// requires registration applications.
pub jwt: Option<SensitiveString>,
/// If registration applications are required, this will return true for a signup response.
pub registration_created: bool,
/// If email verifications are required, this will return true for a signup response.
@ -340,7 +342,7 @@ pub struct CommentReplyResponse {
#[cfg_attr(feature = "full", ts(export))]
/// Delete your account.
pub struct DeleteAccount {
pub password: Sensitive<String>,
pub password: SensitiveString,
pub delete_content: bool,
}
@ -349,7 +351,7 @@ pub struct DeleteAccount {
#[cfg_attr(feature = "full", ts(export))]
/// Reset your password via email.
pub struct PasswordReset {
pub email: Sensitive<String>,
pub email: SensitiveString,
}
#[derive(Debug, Serialize, Deserialize, Clone, Default, PartialEq, Eq, Hash)]
@ -357,9 +359,9 @@ pub struct PasswordReset {
#[cfg_attr(feature = "full", ts(export))]
/// Change your password after receiving a reset request.
pub struct PasswordChangeAfterReset {
pub token: Sensitive<String>,
pub password: Sensitive<String>,
pub password_verify: Sensitive<String>,
pub token: SensitiveString,
pub password: SensitiveString,
pub password_verify: SensitiveString,
}
#[skip_serializing_none]
@ -405,7 +407,7 @@ pub struct VerifyEmail {
#[cfg_attr(feature = "full", derive(TS))]
#[cfg_attr(feature = "full", ts(export))]
pub struct GenerateTotpSecretResponse {
pub totp_secret_url: Sensitive<String>,
pub totp_secret_url: SensitiveString,
}
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, Hash)]


@ -3,10 +3,11 @@ use crate::{
lemmy_db_schema::traits::Crud,
post::{LinkMetadata, OpenGraphData},
send_activity::{ActivityChannel, SendActivityData},
utils::{local_site_opt_to_sensitive, proxy_image_link, proxy_image_link_opt_apub},
utils::{local_site_opt_to_sensitive, proxy_image_link},
};
use activitypub_federation::config::Data;
use encoding::{all::encodings, DecoderTrap};
use chrono::{DateTime, Utc};
use encoding_rs::{Encoding, UTF_8};
use lemmy_db_schema::{
newtypes::DbUrl,
source::{
@ -18,14 +19,13 @@ use lemmy_db_schema::{
use lemmy_utils::{
error::{LemmyError, LemmyErrorType, LemmyResult},
settings::structs::{PictrsImageMode, Settings},
spawn_try_task,
REQWEST_TIMEOUT,
VERSION,
};
use mime::Mime;
use reqwest::{header::CONTENT_TYPE, Client, ClientBuilder};
use reqwest_middleware::ClientWithMiddleware;
use serde::Deserialize;
use serde::{Deserialize, Serialize};
use tracing::info;
use url::Url;
use urlencoding::encode;
@ -65,106 +65,79 @@ pub async fn fetch_link_metadata(url: &Url, context: &LemmyContext) -> LemmyResu
})
}
/// Generate post thumbnail in background task, because some sites can be very slow to respond.
/// Generates and saves a post thumbnail and metadata.
///
/// Takes a callback to generate a send activity task, so that the post can be federated with metadata.
///
/// TODO: `federated_thumbnail` param can be removed once we federate full metadata and can
/// write it to db directly, without calling this function.
/// https://github.com/LemmyNet/lemmy/issues/4598
pub fn generate_post_link_metadata(
pub async fn generate_post_link_metadata(
post: Post,
custom_thumbnail: Option<Url>,
federated_thumbnail: Option<Url>,
send_activity: impl FnOnce(Post) -> Option<SendActivityData> + Send + 'static,
local_site: Option<LocalSite>,
context: Data<LemmyContext>,
) {
spawn_try_task(async move {
let metadata = match &post.url {
Some(url) => fetch_link_metadata(url, &context).await.unwrap_or_default(),
_ => Default::default(),
};
) -> LemmyResult<()> {
let metadata = match &post.url {
Some(url) => fetch_link_metadata(url, &context).await.unwrap_or_default(),
_ => Default::default(),
};
let is_image_post = metadata
.content_type
.as_ref()
.is_some_and(|content_type| content_type.starts_with("image"));
let is_image_post = metadata
.content_type
.as_ref()
.is_some_and(|content_type| content_type.starts_with("image"));
// Decide if we are allowed to generate local thumbnail
let allow_sensitive = local_site_opt_to_sensitive(&local_site);
let allow_generate_thumbnail = allow_sensitive || !post.nsfw;
// Decide if we are allowed to generate local thumbnail
let allow_sensitive = local_site_opt_to_sensitive(&local_site);
let allow_generate_thumbnail = allow_sensitive || !post.nsfw;
// Use custom thumbnail if available and its not an image post
let thumbnail_url = if !is_image_post && custom_thumbnail.is_some() {
custom_thumbnail
}
// Use federated thumbnail if available
else if federated_thumbnail.is_some() {
federated_thumbnail
}
// Generate local thumbnail if allowed
else if allow_generate_thumbnail {
match post
.url
.filter(|_| is_image_post)
.or(metadata.opengraph_data.image)
{
Some(url) => generate_pictrs_thumbnail(&url, &context).await.ok(),
None => None,
}
}
// Otherwise use opengraph preview image directly
else {
metadata.opengraph_data.image.map(Into::into)
};
let image_url = if is_image_post {
post.url
} else {
metadata.opengraph_data.image.clone()
};
// Proxy the image fetch if necessary
let proxied_thumbnail_url = proxy_image_link_opt_apub(thumbnail_url, &context).await?;
let thumbnail_url = if let (false, Some(url)) = (is_image_post, custom_thumbnail) {
proxy_image_link(url, &context).await.ok()
} else if let (true, Some(url)) = (allow_generate_thumbnail, image_url) {
generate_pictrs_thumbnail(&url, &context)
.await
.ok()
.map(Into::into)
} else {
metadata.opengraph_data.image.clone()
};
let form = PostUpdateForm {
embed_title: Some(metadata.opengraph_data.title),
embed_description: Some(metadata.opengraph_data.description),
embed_video_url: Some(metadata.opengraph_data.embed_video_url),
thumbnail_url: Some(proxied_thumbnail_url),
url_content_type: Some(metadata.content_type),
..Default::default()
};
let updated_post = Post::update(&mut context.pool(), post.id, &form).await?;
if let Some(send_activity) = send_activity(updated_post) {
ActivityChannel::submit_activity(send_activity, &context).await?;
}
Ok(())
});
let form = PostUpdateForm {
embed_title: Some(metadata.opengraph_data.title),
embed_description: Some(metadata.opengraph_data.description),
embed_video_url: Some(metadata.opengraph_data.embed_video_url),
thumbnail_url: Some(thumbnail_url),
url_content_type: Some(metadata.content_type),
..Default::default()
};
let updated_post = Post::update(&mut context.pool(), post.id, &form).await?;
if let Some(send_activity) = send_activity(updated_post) {
ActivityChannel::submit_activity(send_activity, &context).await?;
}
Ok(())
}
/// Extract site metadata from HTML Opengraph attributes.
fn extract_opengraph_data(html_bytes: &[u8], url: &Url) -> LemmyResult<OpenGraphData> {
let html = String::from_utf8_lossy(html_bytes);
// Make sure the first line is doctype html
let first_line = html
.trim_start()
.lines()
.next()
.ok_or(LemmyErrorType::NoLinesInHtml)?
.to_lowercase();
if !first_line.starts_with("<!doctype html") {
Err(LemmyErrorType::SiteMetadataPageIsNotDoctypeHtml)?
}
let mut page = HTML::from_string(html.to_string(), None)?;
// If the web page specifies that it isn't actually UTF-8, re-decode the received bytes with the
// proper encoding. If the specified encoding cannot be found, fall back to the original UTF-8
// version.
if let Some(charset) = page.meta.get("charset") {
if charset.to_lowercase() != "utf-8" {
if let Some(encoding_ref) = encodings().iter().find(|e| e.name() == charset) {
if let Ok(html_with_encoding) = encoding_ref.decode(html_bytes, DecoderTrap::Replace) {
page = HTML::from_string(html_with_encoding, None)?;
}
if charset != UTF_8.name() {
if let Some(encoding) = Encoding::for_label(charset.as_bytes()) {
page = HTML::from_string(encoding.decode(html_bytes).0.into(), None)?;
}
}
}
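Not part of the diff: a standalone illustration of the encoding_rs fallback above, using an assumed ISO-8859-1 input.

use encoding_rs::Encoding;

fn demo() {
  let bytes = b"caf\xe9"; // "café" encoded as ISO-8859-1; not valid UTF-8
  if let Some(encoding) = Encoding::for_label(b"iso-8859-1") {
    // decode() substitutes malformed sequences rather than failing, much like the
    // old DecoderTrap::Replace behaviour this code is migrating away from.
    let (decoded, _encoding_used, had_errors) = encoding.decode(bytes);
    assert!(!had_errors);
    assert_eq!(decoded, "café");
  }
}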
@ -203,19 +176,40 @@ fn extract_opengraph_data(html_bytes: &[u8], url: &Url) -> LemmyResult<OpenGraph
})
}
#[derive(Deserialize, Debug)]
struct PictrsResponse {
files: Vec<PictrsFile>,
msg: String,
#[derive(Deserialize, Serialize, Debug)]
pub struct PictrsResponse {
pub files: Option<Vec<PictrsFile>>,
pub msg: String,
}
#[derive(Deserialize, Debug)]
struct PictrsFile {
file: String,
delete_token: String,
#[derive(Deserialize, Serialize, Debug)]
pub struct PictrsFile {
pub file: String,
pub delete_token: String,
pub details: PictrsFileDetails,
}
#[derive(Deserialize, Debug)]
impl PictrsFile {
pub fn thumbnail_url(&self, protocol_and_hostname: &str) -> Result<Url, url::ParseError> {
Url::parse(&format!(
"{protocol_and_hostname}/pictrs/image/{}",
self.file
))
}
}
/// Stores extra details about a Pictrs image.
#[derive(Deserialize, Serialize, Debug)]
pub struct PictrsFileDetails {
/// In pixels
pub width: u16,
/// In pixels
pub height: u16,
pub content_type: String,
pub created_at: DateTime<Utc>,
}
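Not part of the diff: a sketch of deserializing a pict-rs upload response into the structs above. The field values are invented for illustration, and serde_json is assumed to be available.

fn demo() {
  let json = r#"{
    "msg": "ok",
    "files": [{
      "file": "a1b2c3.webp",
      "delete_token": "tok123",
      "details": {
        "width": 640,
        "height": 480,
        "content_type": "image/webp",
        "created_at": "2024-05-01T12:00:00Z"
      }
    }]
  }"#;
  let res: PictrsResponse = serde_json::from_str(json).expect("valid response");
  let files = res.files.unwrap_or_default();
  assert_eq!(files[0].details.width, 640);
  assert_eq!(res.msg, "ok");
}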
#[derive(Deserialize, Serialize, Debug)]
struct PictrsPurgeResponse {
msg: String,
}
@ -297,33 +291,34 @@ async fn generate_pictrs_thumbnail(image_url: &Url, context: &LemmyContext) -> L
encode(image_url.as_str())
);
let response = context
let res = context
.client()
.get(&fetch_url)
.timeout(REQWEST_TIMEOUT)
.send()
.await?
.json::<PictrsResponse>()
.await?;
let response: PictrsResponse = response.json().await?;
let files = res.files.unwrap_or_default();
if response.msg == "ok" {
let thumbnail_url = Url::parse(&format!(
"{}/pictrs/image/{}",
context.settings().get_protocol_and_hostname(),
response.files.first().expect("missing pictrs file").file
))?;
for uploaded_image in response.files {
let form = LocalImageForm {
local_user_id: None,
pictrs_alias: uploaded_image.file.to_string(),
pictrs_delete_token: uploaded_image.delete_token.to_string(),
};
LocalImage::create(&mut context.pool(), &form).await?;
}
Ok(thumbnail_url)
} else {
Err(LemmyErrorType::PictrsResponseError(response.msg))?
}
let image = files
.first()
.ok_or(LemmyErrorType::PictrsResponseError(res.msg))?;
let form = LocalImageForm {
// This is None because it's an internal request.
// IE, a local user shouldn't get to delete the thumbnails for their link posts.
local_user_id: None,
pictrs_alias: image.file.clone(),
pictrs_delete_token: image.delete_token.clone(),
};
let protocol_and_hostname = context.settings().get_protocol_and_hostname();
let thumbnail_url = image.thumbnail_url(&protocol_and_hostname)?;
LocalImage::create(&mut context.pool(), &form).await?;
Ok(thumbnail_url)
}
// TODO: get rid of this by reading content type from db


@ -1,116 +0,0 @@
use serde::{Deserialize, Serialize};
use std::{
borrow::Borrow,
ops::{Deref, DerefMut},
};
#[cfg(feature = "full")]
use ts_rs::TS;
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Deserialize, Serialize, Default)]
#[serde(transparent)]
pub struct Sensitive<T>(T);
impl<T> Sensitive<T> {
pub fn new(item: T) -> Self {
Sensitive(item)
}
pub fn into_inner(self) -> T {
self.0
}
}
impl<T> std::fmt::Debug for Sensitive<T> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("Sensitive").finish()
}
}
impl<T> AsRef<T> for Sensitive<T> {
fn as_ref(&self) -> &T {
&self.0
}
}
impl AsRef<str> for Sensitive<String> {
fn as_ref(&self) -> &str {
&self.0
}
}
impl AsRef<[u8]> for Sensitive<String> {
fn as_ref(&self) -> &[u8] {
self.0.as_ref()
}
}
impl AsRef<[u8]> for Sensitive<Vec<u8>> {
fn as_ref(&self) -> &[u8] {
self.0.as_ref()
}
}
impl<T> AsMut<T> for Sensitive<T> {
fn as_mut(&mut self) -> &mut T {
&mut self.0
}
}
impl AsMut<str> for Sensitive<String> {
fn as_mut(&mut self) -> &mut str {
&mut self.0
}
}
impl Deref for Sensitive<String> {
type Target = str;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for Sensitive<String> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl<T> From<T> for Sensitive<T> {
fn from(t: T) -> Self {
Sensitive(t)
}
}
impl From<&str> for Sensitive<String> {
fn from(s: &str) -> Self {
Sensitive(s.into())
}
}
impl<T> Borrow<T> for Sensitive<T> {
fn borrow(&self) -> &T {
&self.0
}
}
impl Borrow<str> for Sensitive<String> {
fn borrow(&self) -> &str {
&self.0
}
}
#[cfg(feature = "full")]
impl TS for Sensitive<String> {
fn name() -> String {
"string".to_string()
}
fn name_with_type_args(_args: Vec<String>) -> String {
"string".to_string()
}
fn dependencies() -> Vec<ts_rs::Dependency> {
Vec::new()
}
fn transparent() -> bool {
true
}
}
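The replacement type `SensitiveString` lives in `lemmy_db_schema` and is not shown in this diff. A minimal sketch of what such a newtype might look like (an assumption about its shape, not the actual implementation):

use std::{fmt, ops::Deref};

#[derive(Clone, PartialEq, Eq, Hash, serde::Serialize, serde::Deserialize)]
#[serde(transparent)]
pub struct SensitiveString(String);

impl fmt::Debug for SensitiveString {
  // Redact the inner value so passwords and tokens never end up in logs.
  fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
    f.write_str("SensitiveString(***)")
  }
}

impl From<String> for SensitiveString {
  fn from(s: String) -> Self {
    SensitiveString(s)
  }
}

impl Deref for SensitiveString {
  type Target = str;
  fn deref(&self) -> &str {
    &self.0
  }
}

This is consistent with how the rest of the changeset uses the type: the `.into()` conversions from `String` and the `use std::ops::Deref` import added for the email comparison.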


@ -375,7 +375,8 @@ impl From<FederationQueueState> for ReadableFederationState {
pub struct InstanceWithFederationState {
#[serde(flatten)]
pub instance: Instance,
/// if federation to this instance is or was active, show state of outgoing federation to this instance
/// if federation to this instance is or was active, show state of outgoing federation to this
/// instance
pub federation_state: Option<ReadableFederationState>,
}


@ -6,6 +6,7 @@ use crate::{
use chrono::{DateTime, Days, Local, TimeZone, Utc};
use enum_map::{enum_map, EnumMap};
use lemmy_db_schema::{
aggregates::structs::{PersonPostAggregates, PersonPostAggregatesForm},
newtypes::{CommunityId, DbUrl, InstanceId, PersonId, PostId},
source::{
comment::{Comment, CommentUpdateForm},
@ -139,13 +140,7 @@ pub fn is_top_mod(
}
}
#[tracing::instrument(skip_all)]
pub async fn get_post(post_id: PostId, pool: &mut DbPool<'_>) -> LemmyResult<Post> {
Post::read(pool, post_id)
.await?
.ok_or(LemmyErrorType::CouldntFindPost.into())
}
/// Marks a post as read for a given person.
#[tracing::instrument(skip_all)]
pub async fn mark_post_as_read(
person_id: PersonId,
@ -158,6 +153,28 @@ pub async fn mark_post_as_read(
Ok(())
}
/// Updates the read comment count for a post. Usually done when reading or creating a new comment.
#[tracing::instrument(skip_all)]
pub async fn update_read_comments(
person_id: PersonId,
post_id: PostId,
read_comments: i64,
pool: &mut DbPool<'_>,
) -> LemmyResult<()> {
let person_post_agg_form = PersonPostAggregatesForm {
person_id,
post_id,
read_comments,
..PersonPostAggregatesForm::default()
};
PersonPostAggregates::upsert(pool, &person_post_agg_form)
.await
.with_lemmy_type(LemmyErrorType::CouldntFindPost)?;
Ok(())
}
pub fn check_user_valid(person: &Person) -> LemmyResult<()> {
// Check for a site ban
if person.banned {
@ -356,7 +373,8 @@ pub async fn build_federated_instances(
federation_state: federation_state.map(std::convert::Into::into),
};
if is_blocked {
// blocked instances will only have an entry here if they had been federated with in the past.
// blocked instances will only have an entry here if they had been federated with in the
// past.
blocked.push(i);
} else if is_allowed {
allowed.push(i.clone());
@ -440,7 +458,7 @@ pub async fn send_password_reset_email(
// Insert the row after successful send, to avoid using daily reset limit while
// email sending is broken.
let local_user_id = user.local_user.id;
PasswordResetRequest::create_token(pool, local_user_id, token.clone()).await?;
PasswordResetRequest::create(pool, local_user_id, token.clone()).await?;
Ok(())
}
@ -954,8 +972,8 @@ pub async fn process_markdown_opt(
/// A wrapper for `proxy_image_link` for use in tests.
///
/// The parameter `force_image_proxy` is the config value of `pictrs.image_proxy`. Its necessary to pass
/// as separate parameter so it can be changed in tests.
/// The parameter `force_image_proxy` is the config value of `pictrs.image_proxy`. It's necessary to
/// pass it as a separate parameter so it can be changed in tests.
async fn proxy_image_link_internal(
link: Url,
image_mode: PictrsImageMode,
@ -965,13 +983,10 @@ async fn proxy_image_link_internal(
if link.domain() == Some(&context.settings().hostname) {
Ok(link.into())
} else if image_mode == PictrsImageMode::ProxyAllImages {
let proxied = format!(
"{}/api/v3/image_proxy?url={}",
context.settings().get_protocol_and_hostname(),
encode(link.as_str())
);
let proxied = build_proxied_image_url(&link, &context.settings().get_protocol_and_hostname())?;
RemoteImage::create(&mut context.pool(), vec![link]).await?;
Ok(Url::parse(&proxied)?.into())
Ok(proxied.into())
} else {
Ok(link.into())
}
@ -1025,6 +1040,17 @@ pub async fn proxy_image_link_opt_apub(
}
}
fn build_proxied_image_url(
link: &Url,
protocol_and_hostname: &str,
) -> Result<Url, url::ParseError> {
Url::parse(&format!(
"{}/api/v3/image_proxy?url={}",
protocol_and_hostname,
encode(link.as_str())
))
}
#[cfg(test)]
#[allow(clippy::unwrap_used)]
#[allow(clippy::indexing_slicing)]


@ -9,11 +9,11 @@ use lemmy_api_common::{
check_community_user_action,
check_post_deleted_or_removed,
generate_local_apub_endpoint,
get_post,
get_url_blocklist,
is_mod_or_admin,
local_site_to_slur_regex,
process_markdown,
update_read_comments,
EndpointType,
},
};
@ -28,7 +28,7 @@ use lemmy_db_schema::{
},
traits::{Crud, Likeable},
};
use lemmy_db_views::structs::LocalUserView;
use lemmy_db_views::structs::{LocalUserView, PostView};
use lemmy_utils::{
error::{LemmyErrorExt, LemmyErrorType, LemmyResult},
utils::{mention::scrape_text_for_mentions, validation::is_valid_body_field},
@ -51,8 +51,19 @@ pub async fn create_comment(
// Check for a community ban
let post_id = data.post_id;
let post = get_post(post_id, &mut context.pool()).await?;
let community_id = post.community_id;
// Read the full post view in order to get the comments count.
let post_view = PostView::read(
&mut context.pool(),
post_id,
Some(local_user_view.person.id),
true,
)
.await?
.ok_or(LemmyErrorType::CouldntFindPost)?;
let post = post_view.post;
let community_id = post_view.community.id;
check_community_user_action(&local_user_view.person, community_id, &mut context.pool()).await?;
check_post_deleted_or_removed(&post)?;
@ -164,6 +175,15 @@ pub async fn create_comment(
)
.await?;
// Update the read comments, so your own new comment doesn't appear as a +1 unread
update_read_comments(
local_user_view.person.id,
post_id,
post_view.counts.comments + 1,
&mut context.pool(),
)
.await?;
// If we're responding to a comment where we're the recipient,
// (ie we're the grandparent, or the recipient of the parent comment_reply),
// then mark the parent as read.


@ -37,6 +37,12 @@ pub async fn remove_comment(
)
.await?;
// Don't allow removing or restoring a comment which was deleted by the user, as it would reveal
// the comment text in the mod log.
if orig_comment.comment.deleted {
return Err(LemmyErrorType::CouldntUpdateComment.into());
}
// Do the remove
let removed = data.removed;
let updated_comment = Comment::update(


@ -14,7 +14,6 @@ use lemmy_api_common::{
local_site_to_slur_regex,
mark_post_as_read,
process_markdown_opt,
proxy_image_link_opt_apub,
EndpointType,
},
};
@ -75,7 +74,6 @@ pub async fn create_post(
is_url_blocked(&url, &url_blocklist)?;
check_url_scheme(&url)?;
check_url_scheme(&custom_thumbnail)?;
let url = proxy_image_link_opt_apub(url, &context).await?;
check_community_user_action(
&local_user_view.person,
@ -125,7 +123,7 @@ pub async fn create_post(
let post_form = PostInsertForm::builder()
.name(data.name.trim().to_string())
.url(url)
.url(url.map(Into::into))
.body(body)
.alt_text(data.alt_text.clone())
.community_id(data.community_id)
@ -159,11 +157,11 @@ pub async fn create_post(
generate_post_link_metadata(
updated_post.clone(),
custom_thumbnail,
None,
|post| Some(SendActivityData::CreatePost(post)),
Some(local_site),
context.reset_request_count(),
);
)
.await?;
// They like their own post by default
let person_id = local_user_view.person.id;
@ -178,7 +176,6 @@ pub async fn create_post(
.await
.with_lemmy_type(LemmyErrorType::CouldntLikePost)?;
// Mark the post as read
mark_post_as_read(person_id, post_id, &mut context.pool()).await?;
if let Some(url) = updated_post.url.clone() {


@ -2,10 +2,9 @@ use actix_web::web::{Data, Json, Query};
use lemmy_api_common::{
context::LemmyContext,
post::{GetPost, GetPostResponse},
utils::{check_private_instance, is_mod_or_admin_opt, mark_post_as_read},
utils::{check_private_instance, is_mod_or_admin_opt, mark_post_as_read, update_read_comments},
};
use lemmy_db_schema::{
aggregates::structs::{PersonPostAggregates, PersonPostAggregatesForm},
source::{comment::Comment, post::Post},
traits::Crud,
};
@ -14,7 +13,7 @@ use lemmy_db_views::{
structs::{LocalUserView, PostView, SiteView},
};
use lemmy_db_views_actor::structs::{CommunityModeratorView, CommunityView};
use lemmy_utils::error::{LemmyErrorExt, LemmyErrorType, LemmyResult};
use lemmy_utils::error::{LemmyErrorType, LemmyResult};
#[tracing::instrument(skip(context))]
pub async fn get_post(
@ -60,10 +59,17 @@ pub async fn get_post(
.await?
.ok_or(LemmyErrorType::CouldntFindPost)?;
// Mark the post as read
let post_id = post_view.post.id;
if let Some(person_id) = person_id {
mark_post_as_read(person_id, post_id, &mut context.pool()).await?;
update_read_comments(
person_id,
post_id,
post_view.counts.comments,
&mut context.pool(),
)
.await?;
}
// Necessary for the sidebar subscribed
@ -76,21 +82,6 @@ pub async fn get_post(
.await?
.ok_or(LemmyErrorType::CouldntFindCommunity)?;
// Insert into PersonPostAggregates
// to update the read_comments count
if let Some(person_id) = person_id {
let read_comments = post_view.counts.comments;
let person_post_agg_form = PersonPostAggregatesForm {
person_id,
post_id,
read_comments,
..PersonPostAggregatesForm::default()
};
PersonPostAggregates::upsert(&mut context.pool(), &person_post_agg_form)
.await
.with_lemmy_type(LemmyErrorType::CouldntFindPost)?;
}
let moderators = CommunityModeratorView::for_community(&mut context.pool(), community_id).await?;
// Fetch the cross_posts


@ -11,7 +11,6 @@ use lemmy_api_common::{
get_url_blocklist,
local_site_to_slur_regex,
process_markdown_opt,
proxy_image_link_opt_apub,
},
};
use lemmy_db_schema::{
@ -86,11 +85,6 @@ pub async fn update_post(
Err(LemmyErrorType::NoPostEditAllowed)?
}
let url = match url {
Some(url) => Some(proxy_image_link_opt_apub(Some(url), &context).await?),
_ => Default::default(),
};
let language_id = data.language_id;
CommunityLanguage::is_allowed_community_language(
&mut context.pool(),
@ -101,7 +95,7 @@ pub async fn update_post(
let post_form = PostUpdateForm {
name: data.name.clone(),
url,
url: Some(url.map(Into::into)),
body: diesel_option_overwrite(body),
alt_text: diesel_option_overwrite(data.alt_text.clone()),
nsfw: data.nsfw,
@ -118,11 +112,11 @@ pub async fn update_post(
generate_post_link_metadata(
updated_post.clone(),
custom_thumbnail,
None,
|post| Some(SendActivityData::UpdatePost(post)),
Some(local_site),
context.reset_request_count(),
);
)
.await?;
build_post_response(
context.deref(),


@ -156,7 +156,8 @@ pub async fn update_site(
// TODO can't think of a better way to do this.
// If the server suddenly requires email verification, or required applications, no old users
// will be able to log in. It really only wants this to be a requirement for NEW signups.
// So if it was set from false, to true, you need to update all current users columns to be verified.
// So if it was changed from false to true, you need to update all current users' columns to be
// verified.
let old_require_application =
local_site.registration_mode == RegistrationMode::RequireApplication;


@ -142,12 +142,17 @@ pub async fn register(
.map(|lang_str| lang_str.split('-').next().unwrap_or_default().to_string())
.collect();
// Show nsfw content if param is true, or if content_warning exists
let show_nsfw = data
.show_nsfw
.unwrap_or(site_view.site.content_warning.is_some());
// Create the local user
let local_user_form = LocalUserInsertForm::builder()
.person_id(inserted_person.id)
.email(data.email.as_deref().map(str::to_lowercase))
.password_encrypted(data.password.to_string())
.show_nsfw(Some(data.show_nsfw))
.show_nsfw(Some(show_nsfw))
.accepted_application(accepted_application)
.default_listing_type(Some(local_site.default_post_listing_type))
.post_listing_mode(Some(local_site.default_post_listing_mode))
@ -192,7 +197,8 @@ pub async fn register(
verify_email_sent: false,
};
// Log the user in directly if the site is not setup, or email verification and application aren't required
// Log the user in directly if the site is not setup, or email verification and application aren't
// required
if !local_site.site_setup
|| (!require_registration_application && !local_site.require_email_verification)
{


@ -44,7 +44,7 @@ once_cell = { workspace = true }
moka.workspace = true
serde_with.workspace = true
html2md = "0.2.14"
html2text = "0.6.0"
html2text = "0.12.5"
stringreader = "0.1.1"
enum_delegate = "0.2.0"


@ -0,0 +1,22 @@
{
"id": "https://socialhub.activitypub.rocks/ap/actor/797217cf18c0e819dfafc52425590146",
"type": "Group",
"updated": "2024-04-05T12:49:51Z",
"url": "https://socialhub.activitypub.rocks/c/meeting/threadiverse-wg/88",
"name": "Threadiverse Working Group (SocialHub)",
"inbox": "https://socialhub.activitypub.rocks/ap/actor/797217cf18c0e819dfafc52425590146/inbox",
"outbox": "https://socialhub.activitypub.rocks/ap/actor/797217cf18c0e819dfafc52425590146/outbox",
"followers": "https://socialhub.activitypub.rocks/ap/actor/797217cf18c0e819dfafc52425590146/followers",
"preferredUsername": "threadiverse-wg",
"publicKey": {
"id": "https://socialhub.activitypub.rocks/ap/actor/797217cf18c0e819dfafc52425590146#main-key",
"owner": "https://socialhub.activitypub.rocks/ap/actor/797217cf18c0e819dfafc52425590146",
"publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApJi4iAcW6bPiHVCxT9p0\n8DVnrDDO4QtLNy7bpRFdMFifmmmXprsuAi9D2MSwbhH49V54HtIkxBpKd2IR/UD8\nmhMDY4CNI9FHpjqLw0wtkzxcqF9urSqhn0/vWX+9oxyhIgQS5KMiIkYDMJiAc691\niEcZ8LCran23xIGl6Dk54Nr3TqTMLcjDhzQYUJbxMrLq5/knWqOKG3IF5OxK+9ZZ\n1wxDF872eJTxJLkmpag+WYNtHzvB2SGTp8j5IF1/pZ9J1c3cpYfaeolTch/B/GQn\najCB4l27U52rIIObxJqFXSY8wHyd0aAmNmxzPZ7cduRlBDhmI40cAmnCV1YQPvpk\nDwIDAQAB\n-----END PUBLIC KEY-----\n"
},
"icon": {
"type": "Image",
"mediaType": "image/png",
"url": "https://socialhub.activitypub.rocks/uploads/default/original/1X/8faac84234dc73d074dadaa2bcf24dc746b8647f.png"
},
"@context": "https://www.w3.org/ns/activitystreams"
}


@ -0,0 +1,13 @@
{
"id": "https://socialhub.activitypub.rocks/ap/object/1899f65c062200daec50a4c89ed76dc9",
"type": "Note",
"audience": "https://socialhub.activitypub.rocks/ap/actor/797217cf18c0e819dfafc52425590146",
"published": "2024-04-13T14:36:19Z",
"updated": "2024-04-13T14:36:19Z",
"url": "https://socialhub.activitypub.rocks/t/our-next-meeting/4079/1",
"attributedTo": "https://socialhub.activitypub.rocks/ap/actor/495843076e9e469fbd35ccf467ae9fb1",
"name": "Our next meeting",
"context": "https://socialhub.activitypub.rocks/ap/collection/8850f6e85b57c490da915a5dfbbd5045",
"content": "<h3>Last Meeting</h3>\n<h4>Recording</h4>\n<a href=\"https://us06web.zoom.us/rec/share/4hGBTvgXJPlu8UkjkkxVARypNg5DH0eeaKlIBv71D4G3lokYyrCrg7cqBCJmL109.FsHYTZDlVvZXrgcn?startTime=1712254114000\">https://us06web.zoom.us/rec/share/4hGBTvgXJPlu8UkjkkxVARypNg5DH0eeaKlIBv71D4G3lokYyrCrg7cqBCJmL109.FsHYTZDlVvZXrgcn?startTime=1712254114000</a>\nPasscode: z+1*4pUB\n<h4>Minutes</h4>\nTo refresh your memory, you can read the minutes of last week's meeting <a href=\"https://community.nodebb.org/topic/17949/minutes&hellip;",
"@context": "https://www.w3.org/ns/activitystreams"
}


@ -0,0 +1,23 @@
{
"id": "https://socialhub.activitypub.rocks/ap/actor/495843076e9e469fbd35ccf467ae9fb1",
"type": "Person",
"updated": "2024-01-15T12:27:03Z",
"url": "https://socialhub.activitypub.rocks/u/angus",
"name": "Angus McLeod",
"inbox": "https://socialhub.activitypub.rocks/ap/actor/495843076e9e469fbd35ccf467ae9fb1/inbox",
"outbox": "https://socialhub.activitypub.rocks/ap/actor/495843076e9e469fbd35ccf467ae9fb1/outbox",
"sharedInbox": "https://socialhub.activitypub.rocks/ap/users/inbox",
"followers": "https://socialhub.activitypub.rocks/ap/actor/495843076e9e469fbd35ccf467ae9fb1/followers",
"preferredUsername": "angus",
"publicKey": {
"id": "https://socialhub.activitypub.rocks/ap/actor/495843076e9e469fbd35ccf467ae9fb1#main-key",
"owner": "https://socialhub.activitypub.rocks/ap/actor/495843076e9e469fbd35ccf467ae9fb1",
"publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3RpuFDuwXZzOeHO5fO2O\nHmP7Flc5JDXJ8OOEJYq5T/dzUKqREOF1ZT0WMww8/E3P6w+gfFsjzThraJb8nHuW\nP6798SUD35CWBclfhw9DapjVn99JyFcAWcH3b9fr6LYshc4y1BoeJagk1kcro2Dc\n+pX0vVXgNjwWnGfyucAgGIUWrNUjcvIvXmyVCBSQfXG3nCALV1JbI4KSgf/5KyBn\nza/QefaetxYiFV8wAisPKLsz3XQAaITsQmbSi+8gmwXt/9U808PK1KphCiClDOWg\noi0HPzJn0rn+mwFCfgNWenvribfeG40AHLG33OkWKvslufjifdWDCOcBYYzyCEV6\n+wIDAQAB\n-----END PUBLIC KEY-----\n"
},
"icon": {
"type": "Image",
"mediaType": "image/png",
"url": "https://socialhub.activitypub.rocks/user_avatar/socialhub.activitypub.rocks/angus/96/2295_2.png"
},
"@context": "https://www.w3.org/ns/activitystreams"
}


@ -23,7 +23,6 @@
"href": "https://lemmy.ml/pictrs/image/xl8W7FZfk9.jpg"
}
],
"commentsEnabled": true,
"sensitive": false,
"language": {
"identifier": "ko",


@ -23,7 +23,6 @@
"href": "https://lemmy.ml/pictrs/image/xl8W7FZfk9.jpg"
}
],
"commentsEnabled": true,
"sensitive": false,
"published": "2021-10-29T15:10:51.557399Z",
"updated": "2021-10-29T15:11:35.976374Z"


@ -15,7 +15,6 @@
"cc": [],
"mediaType": "text/html",
"attachment": [],
"commentsEnabled": true,
"sensitive": false,
"published": "2023-02-06T06:42:41.939437Z",
"language": {
@ -36,7 +35,6 @@
"cc": [],
"mediaType": "text/html",
"attachment": [],
"commentsEnabled": true,
"sensitive": false,
"published": "2023-02-06T06:42:37.119567Z",
"language": {


@ -22,7 +22,6 @@
],
"name": "another outbox test",
"mediaType": "text/html",
"commentsEnabled": true,
"sensitive": false,
"stickied": false,
"published": "2021-11-18T17:19:45.895163Z"
@ -51,7 +50,6 @@
],
"name": "outbox test",
"mediaType": "text/html",
"commentsEnabled": true,
"sensitive": false,
"stickied": false,
"published": "2021-11-18T17:19:05.763109Z"


@ -25,7 +25,6 @@
"url": "https://enterprise.lemmy.ml/pictrs/image/eOtYb9iEiB.png"
},
"sensitive": false,
"commentsEnabled": true,
"language": {
"identifier": "fr",
"name": "Français"


@ -0,0 +1,49 @@
{
"@context": ["https://www.w3.org/ns/activitystreams"],
"id": "https://pfefferle.org/lemmy-part-4/#activity#activity",
"type": "Announce",
"audience": "https://pfefferle.org/@pfefferle.org",
"published": "2024-05-03T12:32:29Z",
"updated": "2024-05-06T08:20:33Z",
"to": [
"https://www.w3.org/ns/activitystreams#Public",
"https://pfefferle.org/wp-json/activitypub/1.0/actors/1/followers"
],
"cc": [],
"object": {
"id": "https://pfefferle.org/lemmy-part-4/#activity",
"type": "Update",
"audience": "https://pfefferle.org/@pfefferle.org",
"published": "2024-05-03T12:32:29Z",
"updated": "2024-05-06T08:20:33Z",
"to": [
"https://www.w3.org/ns/activitystreams#Public",
"https://pfefferle.org/wp-json/activitypub/1.0/actors/1/followers"
],
"cc": [],
"object": {
"id": "https://pfefferle.org/lemmy-part-4/",
"type": "Article",
"attachment": [],
"attributedTo": "https://pfefferle.org/author/pfefferle/",
"audience": "https://pfefferle.org/@pfefferle.org",
"content": "\u003Cp\u003EIdentifies one or more entities that represent the total population of entities for which the object can considered to be relevant. Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant. \u003C/p\u003E",
"contentMap": {
"en": "\u003Cp\u003EIdentifies one or more entities that represent the total population of entities for which the object can considered to be relevant. Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant. \u003C/p\u003E"
},
"name": "Lemmy (Part 4)",
"published": "2024-05-03T12:32:29Z",
"summary": "Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant. Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object can considered to be relevant.Identifies one or more entities that represent the total population of entities for which the object [...]",
"tag": [],
"updated": "2024-05-06T08:20:33Z",
"url": "https://pfefferle.org/lemmy-part-4/",
"to": [
"https://www.w3.org/ns/activitystreams#Public",
"https://pfefferle.org/wp-json/activitypub/1.0/actors/1/followers"
],
"cc": []
},
"actor": "https://pfefferle.org/author/pfefferle/"
},
"actor": "https://pfefferle.org/@pfefferle.org"
}


@ -0,0 +1,66 @@
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://w3id.org/security/v1",
"https://purl.archive.org/socialweb/webfinger",
{
"schema": "http://schema.org#",
"toot": "http://joinmastodon.org/ns#",
"webfinger": "https://webfinger.net/#",
"lemmy": "https://join-lemmy.org/ns#",
"manuallyApprovesFollowers": "as:manuallyApprovesFollowers",
"PropertyValue": "schema:PropertyValue",
"value": "schema:value",
"Hashtag": "as:Hashtag",
"featured": {
"@id": "toot:featured",
"@type": "@id"
},
"featuredTags": {
"@id": "toot:featuredTags",
"@type": "@id"
},
"moderators": {
"@id": "lemmy:moderators",
"@type": "@id"
},
"postingRestrictedToMods": "lemmy:postingRestrictedToMods",
"discoverable": "toot:discoverable",
"indexable": "toot:indexable",
"resource": "webfinger:resource"
}
],
"id": "https://pfefferle.org/@pfefferle.org",
"type": "Group",
"attachment": [],
"attributedTo": "https://pfefferle.org/wp-json/activitypub/1.0/collections/moderators",
"name": "Matthias Pfefferle",
"icon": {
"type": "Image",
"url": "https://pfefferle.org/wp-content/uploads/2023/06/cropped-BeLItBV-_400x400.jpg"
},
"published": "2024-04-03T16:58:22Z",
"summary": "<p>Webworker, blogger und podcaster</p>\n",
"tag": [],
"url": "https://pfefferle.org/@pfefferle.org",
"inbox": "https://pfefferle.org/wp-json/activitypub/1.0/users/0/inbox",
"outbox": "https://pfefferle.org/wp-json/activitypub/1.0/users/0/outbox",
"following": "https://pfefferle.org/wp-json/activitypub/1.0/users/0/following",
"followers": "https://pfefferle.org/wp-json/activitypub/1.0/users/0/followers",
"preferredUsername": "pfefferle.org",
"endpoints": {
"sharedInbox": "https://pfefferle.org/wp-json/activitypub/1.0/inbox"
},
"publicKey": {
"id": "https://pfefferle.org/@pfefferle.org#main-key",
"owner": "https://pfefferle.org/@pfefferle.org",
"publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuq8xeLMFcaCwPFBhgMRE\n/dDh2XKoNXFXnixctmK8BXSuuLMxucm3I/8NyhIvb3LqU+uP1fO8F0ecUbk2sN+x\nKag5vIV6yKXzJ8ILMWQ9AaELpXDmMZqL0zal0LUJRAOkDgPDovDAoq6tx++yDoV0\njdVbf9CoZKit1cz2ZrEuE5dswq3J/z9+c6POkhCkWEX5TPJzkOrmnjkvrXxGHUJ2\nA3+P+VaZhd5cmvqYosSpYNJshxCdev12pIF78OnYLiYiyXlgGHU+7uQR0M4tTcij\n6cUdLkms9m+b6H3ctXntPn410e5YLFPldjAYzQB5wHVdFZsWtyrbqfYdCa+KkKpA\nvwIDAQAB\n-----END PUBLIC KEY-----\n"
},
"manuallyApprovesFollowers": false,
"featured": "https://pfefferle.org/wp-json/activitypub/1.0/users/0/collections/featured",
"moderators": "https://pfefferle.org/wp-json/activitypub/1.0/collections/moderators",
"discoverable": true,
"indexable": true,
"webfinger": "pfefferle.org@pfefferle.org",
"postingRestrictedToMods": true
}


@ -0,0 +1,24 @@
{
"@context": [
"https://www.w3.org/ns/activitystreams",
{
"Hashtag": "as:Hashtag"
}
],
"id": "https://pfefferle.org?c=148",
"type": "Note",
"attributedTo": "https://pfefferle.org/author/pfefferle/",
"content": "<p>Nice! Hello from WordPress!</p>",
"contentMap": {
"en": "<p>Nice! Hello from WordPress!</p>"
},
"inReplyTo": "https://socialhub.activitypub.rocks/ap/object/ce040f1ead95964f6dbbf1084b81432d",
"published": "2024-04-30T15:21:13Z",
"tag": [],
"url": "https://pfefferle.org?c=148",
"to": [
"https://www.w3.org/ns/activitystreams#Public",
"https://pfefferle.org/wp-json/activitypub/1.0/users/0/followers"
],
"cc": []
}


@ -0,0 +1,26 @@
{
"@context": [
"https://www.w3.org/ns/activitystreams",
{
"Hashtag": "as:Hashtag"
}
],
"id": "https://pfefferle.org/this-is-a-test-federation/",
"type": "Article",
"attachment": [],
"attributedTo": "https://pfefferle.org/author/pfefferle/",
"content": "<p>with Discource!</p>",
"contentMap": {
"en": "<p>with Discource!</p>"
},
"name": "This is a test-federation",
"published": "2024-04-30T15:16:41Z",
"summary": "with Discource! [...]",
"tag": [],
"url": "https://pfefferle.org/this-is-a-test-federation/",
"to": [
"https://www.w3.org/ns/activitystreams#Public",
"https://pfefferle.org/wp-json/activitypub/1.0/users/1/followers"
],
"cc": []
}


@ -0,0 +1,74 @@
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://w3id.org/security/v1",
"https://purl.archive.org/socialweb/webfinger",
{
"schema": "http://schema.org#",
"toot": "http://joinmastodon.org/ns#",
"webfinger": "https://webfinger.net/#",
"lemmy": "https://join-lemmy.org/ns#",
"manuallyApprovesFollowers": "as:manuallyApprovesFollowers",
"PropertyValue": "schema:PropertyValue",
"value": "schema:value",
"Hashtag": "as:Hashtag",
"featured": {
"@id": "toot:featured",
"@type": "@id"
},
"featuredTags": {
"@id": "toot:featuredTags",
"@type": "@id"
},
"moderators": {
"@id": "lemmy:moderators",
"@type": "@id"
},
"postingRestrictedToMods": "lemmy:postingRestrictedToMods",
"discoverable": "toot:discoverable",
"indexable": "toot:indexable",
"resource": "webfinger:resource"
}
],
"id": "https://pfefferle.org/author/pfefferle/",
"type": "Person",
"attachment": [
{
"type": "PropertyValue",
"name": "Blog",
"value": "<a rel=\"me\" title=\"https://pfefferle.org/\" target=\"_blank\" href=\"https://pfefferle.org/\">pfefferle.org</a>"
},
{
"type": "PropertyValue",
"name": "Profile",
"value": "<a rel=\"me\" title=\"https://pfefferle.org/author/pfefferle/\" target=\"_blank\" href=\"https://pfefferle.org/author/pfefferle/\">pfefferle.org</a>"
}
],
"name": "Matthias Pfefferle",
"icon": {
"type": "Image",
"url": "https://secure.gravatar.com/avatar/a2bdca7870e859658cece96c044b3be5?s=120&#038;d=mm&#038;r=g"
},
"published": "2014-02-10T15:23:08Z",
"summary": "<p>Ich arbeite als Open Web Lead für Automattic.</p>\n",
"tag": [],
"url": "https://pfefferle.org/author/pfefferle/",
"inbox": "https://pfefferle.org/wp-json/activitypub/1.0/users/1/inbox",
"outbox": "https://pfefferle.org/wp-json/activitypub/1.0/users/1/outbox",
"following": "https://pfefferle.org/wp-json/activitypub/1.0/users/1/following",
"followers": "https://pfefferle.org/wp-json/activitypub/1.0/users/1/followers",
"preferredUsername": "matthias",
"endpoints": {
"sharedInbox": "https://pfefferle.org/wp-json/activitypub/1.0/inbox"
},
"publicKey": {
"id": "https://pfefferle.org/author/pfefferle/#main-key",
"owner": "https://pfefferle.org/author/pfefferle/",
"publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvTA5RA40nOsso04RSwyX\nHXTojRPUMlIlArDcSy3M5GUJp9/xbxSUOdBjqd31KKB1GIi3vrLmD1Qi/ZqS95Qy\nw2Zd3xOsCg+o9bsyOG+O6Y8Lu+HEB5JKLUbNHdiSviakJ8wGadH9Wm4WIiN20y+q\n/u6lgxgiWfZ2CFCN6SOc28fUKi9NmKvXK+M12BhFfy1tC5KWXKDm0UbfI1+dmqhR\n3Ffe6vEsCI/YIVVdWxQ9kouOd0XSHOGdslktkepRO7IP9i9TdwyeCa0WWRoeO5Wa\ntVpc1Y0WuNbTM2ksIXTg0G+rO1/6KO/hrHnGu3RCfb/ZIHK5L/aWYb9B3PG3LyKV\n+wIDAQAB\n-----END PUBLIC KEY-----\n"
},
"manuallyApprovesFollowers": false,
"featured": "https://pfefferle.org/wp-json/activitypub/1.0/users/1/collections/featured",
"discoverable": true,
"indexable": true,
"webfinger": "matthias@pfefferle.org"
}


@ -138,8 +138,8 @@ impl ActivityHandler for CollectionAdd {
.dereference(context)
.await?;
// If we had to refetch the community while parsing the activity, then the new mod has already
// been added. Skip it here as it would result in a duplicate key error.
// If we had to refetch the community while parsing the activity, then the new mod has
// already been added. Skip it here as it would result in a duplicate key error.
let new_mod_id = new_mod.id;
let moderated_communities =
CommunityModerator::get_person_moderated_communities(&mut context.pool(), new_mod_id)


@ -26,6 +26,7 @@ use lemmy_db_schema::{
source::{
activity::ActivitySendTargets,
community::Community,
moderator::{ModLockPost, ModLockPostForm},
person::Person,
post::{Post, PostUpdateForm},
},
@ -60,12 +61,22 @@ impl ActivityHandler for LockPage {
}
async fn receive(self, context: &Data<Self::DataType>) -> Result<(), Self::Error> {
insert_received_activity(&self.id, context).await?;
let locked = Some(true);
let form = PostUpdateForm {
locked: Some(true),
locked,
..Default::default()
};
let post = self.object.dereference(context).await?;
Post::update(&mut context.pool(), post.id, &form).await?;
let form = ModLockPostForm {
mod_person_id: self.actor.dereference(context).await?.id,
post_id: post.id,
locked,
};
ModLockPost::create(&mut context.pool(), &form).await?;
Ok(())
}
}
@ -94,12 +105,21 @@ impl ActivityHandler for UndoLockPage {
async fn receive(self, context: &Data<Self::DataType>) -> Result<(), Self::Error> {
insert_received_activity(&self.id, context).await?;
let locked = Some(false);
let form = PostUpdateForm {
locked: Some(false),
locked,
..Default::default()
};
let post = self.object.object.dereference(context).await?;
Post::update(&mut context.pool(), post.id, &form).await?;
let form = ModLockPostForm {
mod_person_id: self.actor.dereference(context).await?.id,
post_id: post.id,
locked,
};
ModLockPost::create(&mut context.pool(), &form).await?;
Ok(())
}
}


@ -24,13 +24,14 @@ pub mod update;
///
/// Activities are sent to the community itself if it lives on another instance. If the community
/// is local, the activity is directly wrapped into Announce and sent to community followers.
/// Activities are also sent to those who follow the actor (with exception of moderation activities).
/// Activities are also sent to those who follow the actor (with exception of moderation
/// activities).
///
/// * `activity` - The activity which is being sent
/// * `actor` - The user who is sending the activity
/// * `community` - Community inside which the activity is sent
/// * `inboxes` - Any additional inboxes the activity should be sent to (for example,
/// to the user who is being promoted to moderator)
/// * `inboxes` - Any additional inboxes the activity should be sent to (for example, to the user
/// who is being promoted to moderator)
/// * `is_mod_activity` - True for things like Add/Mod, these are not sent to user followers
pub(crate) async fn send_activity_in_community(
activity: AnnouncableActivities,


@ -176,7 +176,8 @@ impl ActivityHandler for CreateOrUpdateNote {
// Although mentions could be gotten from the post tags (they are included there), or the ccs,
// It's much easier to scrape them from the comment body, since the API has to do that
// anyway.
// TODO: for compatibility with other projects, it would be much better to read this from cc or tags
// TODO: for compatibility with other projects, it would be much better to read this from cc or
// tags
let mentions = scrape_text_for_mentions(&comment.content);
send_local_notifs(mentions, comment.id, &actor, do_send_email, context).await?;
Ok(())


@ -4,7 +4,6 @@ use crate::{
community::send_activity_in_community,
generate_activity_id,
verify_is_public,
verify_mod_action,
verify_person_in_community,
},
activity_lists::AnnouncableActivities,
@ -78,14 +77,13 @@ impl CreateOrUpdatePage {
let create_or_update =
CreateOrUpdatePage::new(post.into(), &person, &community, kind, &context).await?;
let is_mod_action = create_or_update.object.is_mod_action(&context).await?;
let activity = AnnouncableActivities::CreateOrUpdatePost(create_or_update);
send_activity_in_community(
activity,
&person,
&community,
ActivitySendTargets::empty(),
is_mod_action,
false,
&context,
)
.await?;
@ -112,30 +110,8 @@ impl ActivityHandler for CreateOrUpdatePage {
let community = self.community(context).await?;
verify_person_in_community(&self.actor, &community, context).await?;
check_community_deleted_or_removed(&community)?;
match self.kind {
CreateOrUpdateType::Create => {
verify_domains_match(self.actor.inner(), self.object.id.inner())?;
verify_urls_match(self.actor.inner(), self.object.creator()?.inner())?;
// Check that the post isnt locked, as that isnt possible for newly created posts.
// However, when fetching a remote post we generate a new create activity with the current
// locked value, so this check may fail. So only check if its a local community,
// because then we will definitely receive all create and update activities separately.
let is_locked = self.object.comments_enabled == Some(false);
if community.local && is_locked {
Err(LemmyErrorType::NewPostCannotBeLocked)?
}
}
CreateOrUpdateType::Update => {
let is_mod_action = self.object.is_mod_action(context).await?;
if is_mod_action {
verify_mod_action(&self.actor, &community, context).await?;
} else {
verify_domains_match(self.actor.inner(), self.object.id.inner())?;
verify_urls_match(self.actor.inner(), self.object.creator()?.inner())?;
}
}
}
verify_domains_match(self.actor.inner(), self.object.id.inner())?;
verify_urls_match(self.actor.inner(), self.object.creator()?.inner())?;
ApubPost::verify(&self.object, self.actor.inner(), context).await?;
Ok(())
}

View File

@ -4,9 +4,10 @@ use crate::objects::{
person::ApubPerson,
post::ApubPost,
};
use activitypub_federation::{config::Data, fetch::object_id::ObjectId};
use activitypub_federation::{config::Data, fetch::object_id::ObjectId, traits::Object};
use actix_web::web::Json;
use futures::{future::try_join_all, StreamExt};
use itertools::Itertools;
use lemmy_api_common::{context::LemmyContext, SuccessResponse};
use lemmy_db_schema::{
newtypes::DbUrl,
@ -30,8 +31,11 @@ use lemmy_utils::{
spawn_try_task,
};
use serde::{Deserialize, Serialize};
use std::future::Future;
use tracing::info;
const PARALLELISM: usize = 10;
/// Backup of user data. This struct should never be changed so that the data can be used as a
/// long-term backup in case the instance goes down unexpectedly. All fields are optional to allow
/// importing partial backups.
@ -40,7 +44,7 @@ use tracing::info;
///
/// Be careful with any changes to this struct, to avoid breaking changes which could prevent
/// importing older backups.
#[derive(Debug, Serialize, Deserialize, Clone)]
#[derive(Debug, Serialize, Deserialize, Clone, Default)]
pub struct UserSettingsBackup {
pub display_name: Option<String>,
pub bio: Option<String>,
@ -167,141 +171,91 @@ pub async fn import_settings(
}
spawn_try_task(async move {
const PARALLELISM: usize = 10;
let person_id = local_user_view.person.id;
// These tasks fetch objects from remote instances which might be down.
// TODO: It would be nice if we could send a list of failed items with the API response, but then
// the request would likely time out.
let mut failed_items = vec![];
info!(
"Starting settings backup for {}",
"Starting settings import for {}",
local_user_view.person.name
);
futures::stream::iter(
data
.followed_communities
.clone()
.into_iter()
// reset_request_count works like clone, and is necessary to avoid running into request limit
.map(|f| (f, context.reset_request_count()))
.map(|(followed, context)| async move {
// need to reset outgoing request count to avoid running into limit
let community = followed.dereference(&context).await?;
let form = CommunityFollowerForm {
person_id,
community_id: community.id,
pending: true,
};
CommunityFollower::follow(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
}),
let failed_followed_communities = fetch_and_import(
data.followed_communities.clone(),
&context,
|(followed, context)| async move {
let community = followed.dereference(&context).await?;
let form = CommunityFollowerForm {
person_id,
community_id: community.id,
pending: true,
};
CommunityFollower::follow(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
},
)
.buffer_unordered(PARALLELISM)
.collect::<Vec<_>>()
.await
.into_iter()
.enumerate()
.for_each(|(i, r)| {
if let Err(e) = r {
failed_items.push(data.followed_communities.get(i).map(|u| u.inner().clone()));
info!("Failed to import followed community: {e}");
}
});
futures::stream::iter(
data
.saved_posts
.clone()
.into_iter()
.map(|s| (s, context.reset_request_count()))
.map(|(saved, context)| async move {
let post = saved.dereference(&context).await?;
let form = PostSavedForm {
person_id,
post_id: post.id,
};
PostSaved::save(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
}),
)
.buffer_unordered(PARALLELISM)
.collect::<Vec<_>>()
.await
.into_iter()
.enumerate()
.for_each(|(i, r)| {
if let Err(e) = r {
failed_items.push(data.followed_communities.get(i).map(|u| u.inner().clone()));
info!("Failed to import saved post community: {e}");
}
});
futures::stream::iter(
data
.saved_comments
.clone()
.into_iter()
.map(|s| (s, context.reset_request_count()))
.map(|(saved, context)| async move {
let comment = saved.dereference(&context).await?;
let form = CommentSavedForm {
person_id,
comment_id: comment.id,
};
CommentSaved::save(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
}),
)
.buffer_unordered(PARALLELISM)
.collect::<Vec<_>>()
.await
.into_iter()
.enumerate()
.for_each(|(i, r)| {
if let Err(e) = r {
failed_items.push(data.followed_communities.get(i).map(|u| u.inner().clone()));
info!("Failed to import saved comment community: {e}");
}
});
let failed_items: Vec<_> = failed_items.into_iter().flatten().collect();
info!(
"Finished settings backup for {}, failed items: {:#?}",
local_user_view.person.name, failed_items
);
// These tasks don't connect to any remote instances but only insert directly in the database.
// That means the only error conditions are db connection failures, so no extra error handling is
// needed.
try_join_all(data.blocked_communities.iter().map(|blocked| async {
// dont fetch unknown blocked objects from home server
let community = blocked.dereference_local(&context).await?;
let form = CommunityBlockForm {
person_id,
community_id: community.id,
};
CommunityBlock::block(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
}))
.await?;
try_join_all(data.blocked_users.iter().map(|blocked| async {
// dont fetch unknown blocked objects from home server
let target = blocked.dereference_local(&context).await?;
let form = PersonBlockForm {
person_id,
target_id: target.id,
};
PersonBlock::block(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
}))
let failed_saved_posts = fetch_and_import(
data.saved_posts.clone(),
&context,
|(saved, context)| async move {
let post = saved.dereference(&context).await?;
let form = PostSavedForm {
person_id,
post_id: post.id,
};
PostSaved::save(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
},
)
.await?;
let failed_saved_comments = fetch_and_import(
data.saved_comments.clone(),
&context,
|(saved, context)| async move {
let comment = saved.dereference(&context).await?;
let form = CommentSavedForm {
person_id,
comment_id: comment.id,
};
CommentSaved::save(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
},
)
.await?;
let failed_community_blocks = fetch_and_import(
data.blocked_communities.clone(),
&context,
|(blocked, context)| async move {
let community = blocked.dereference(&context).await?;
let form = CommunityBlockForm {
person_id,
community_id: community.id,
};
CommunityBlock::block(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
},
)
.await?;
let failed_user_blocks = fetch_and_import(
data.blocked_users.clone(),
&context,
|(blocked, context)| async move {
let context = context.reset_request_count();
let target = blocked.dereference(&context).await?;
let form = PersonBlockForm {
person_id,
target_id: target.id,
};
PersonBlock::block(&mut context.pool(), &form).await?;
LemmyResult::Ok(())
},
)
.await?;
try_join_all(data.blocked_instances.iter().map(|domain| async {
// don't fetch unknown blocked objects from the home server
let instance = Instance::read_or_create(&mut context.pool(), domain.clone()).await?;
let form = InstanceBlockForm {
person_id,
@ -312,17 +266,53 @@ pub async fn import_settings(
}))
.await?;
info!("Settings import completed for {}, the following items failed: {failed_followed_communities}, {failed_saved_posts}, {failed_saved_comments}, {failed_community_blocks}, {failed_user_blocks}",
local_user_view.person.name);
Ok(())
});
Ok(Json(Default::default()))
}
async fn fetch_and_import<Kind, Fut>(
objects: Vec<ObjectId<Kind>>,
context: &Data<LemmyContext>,
import_fn: impl FnMut((ObjectId<Kind>, Data<LemmyContext>)) -> Fut,
) -> LemmyResult<String>
where
Kind: Object + Send + 'static,
for<'de2> <Kind as Object>::Kind: Deserialize<'de2>,
Fut: Future<Output = LemmyResult<()>>,
{
let mut failed_items = vec![];
futures::stream::iter(
objects
.clone()
.into_iter()
// need to reset the outgoing request count to avoid running into the request limit
.map(|s| (s, context.reset_request_count()))
.map(import_fn),
)
.buffer_unordered(PARALLELISM)
.collect::<Vec<_>>()
.await
.into_iter()
.enumerate()
.for_each(|(i, r): (usize, LemmyResult<()>)| {
if r.is_err() {
if let Some(object) = objects.get(i) {
failed_items.push(object.inner().clone());
}
}
});
Ok(failed_items.into_iter().join(","))
}
#[cfg(test)]
#[allow(clippy::indexing_slicing)]
mod tests {
use crate::api::user_settings_backup::{export_settings, import_settings};
use crate::api::user_settings_backup::{export_settings, import_settings, UserSettingsBackup};
use activitypub_federation::config::Data;
use lemmy_api_common::context::LemmyContext;
use lemmy_db_schema::{
@ -420,6 +410,44 @@ mod tests {
Ok(())
}
#[tokio::test]
#[serial]
async fn test_settings_partial_import() -> LemmyResult<()> {
let context = LemmyContext::init_test_context().await;
let export_user =
create_user("hanna".to_string(), Some("my bio".to_string()), &context).await?;
let community_form = CommunityInsertForm::builder()
.name("testcom".to_string())
.title("testcom".to_string())
.instance_id(export_user.person.instance_id)
.build();
let community = Community::create(&mut context.pool(), &community_form).await?;
let follower_form = CommunityFollowerForm {
community_id: community.id,
person_id: export_user.person.id,
pending: false,
};
CommunityFollower::follow(&mut context.pool(), &follower_form).await?;
let backup = export_settings(export_user.clone(), context.reset_request_count()).await?;
let import_user = create_user("charles".to_string(), None, &context).await?;
let backup2 = UserSettingsBackup {
followed_communities: backup.followed_communities.clone(),
..Default::default()
};
import_settings(
actix_web::web::Json(backup2),
import_user.clone(),
context.reset_request_count(),
)
.await?;
Ok(())
}
#[tokio::test]
#[serial]
async fn disallow_large_backup() -> LemmyResult<()> {

View File

@ -72,7 +72,8 @@ impl Collection for ApubCommunityFeatured {
.to_vec();
}
// process items in parallel, to avoid long delay from fetch_site_metadata() and other processing
// process items in parallel, to avoid long delay from fetch_site_metadata() and other
// processing
let stickied_posts: Vec<Post> = join_all(pages.into_iter().map(|page| {
async {
// use separate request counter for each item, otherwise there will be problems with

View File

@ -102,7 +102,8 @@ impl Collection for ApubCommunityOutbox {
// We intentionally ignore errors here. This is because the outbox might contain posts from old
// Lemmy versions, or from other software which we can't parse. In that case, we simply skip the
// item and only parse the ones that work.
// process items in parallel, to avoid long delay from fetch_site_metadata() and other processing
// process items in parallel, to avoid long delay from fetch_site_metadata() and other
// processing
join_all(outbox_activities.into_iter().map(|activity| {
async {
// Receiving announce requires at least one local community follower for anti-spam purposes.

View File

@ -20,7 +20,8 @@ use lemmy_db_schema::{
};
use lemmy_utils::error::{LemmyErrorType, LemmyResult};
use serde::{Deserialize, Serialize};
use std::ops::Deref;
use std::{ops::Deref, time::Duration};
use tokio::time::timeout;
use url::Url;
mod comment;
@ -30,13 +31,22 @@ mod post;
pub mod routes;
pub mod site;
const INCOMING_ACTIVITY_TIMEOUT: Duration = Duration::from_secs(9);
pub async fn shared_inbox(
request: HttpRequest,
body: Bytes,
data: Data<LemmyContext>,
) -> LemmyResult<HttpResponse> {
receive_activity::<SharedInboxActivities, UserOrCommunity, LemmyContext>(request, body, &data)
let receive_fut =
receive_activity::<SharedInboxActivities, UserOrCommunity, LemmyContext>(request, body, &data);
// Set a timeout shorter than `REQWEST_TIMEOUT` for processing incoming activities. This is to
// avoid taking a long time to process an incoming activity when a required data fetch times out.
// In this case our own instance would time out and be marked as dead by the sender. Better to
// consider the activity broken and move on.
timeout(INCOMING_ACTIVITY_TIMEOUT, receive_fut)
.await
.map_err(|_| LemmyErrorType::InboxTimeout)?
}
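The comment above captures the design choice: fail fast on our side rather than let the sending instance hit its own `REQWEST_TIMEOUT` and mark us dead. The mechanism is plain `tokio::time::timeout`; a self-contained sketch, with the 9-second figure taken from `INCOMING_ACTIVITY_TIMEOUT` above and a slow task standing in for activity processing:

```rust
use std::time::Duration;
use tokio::time::{sleep, timeout};

#[tokio::main]
async fn main() {
    // Stand-in for receive_activity: processing that stalls on a remote fetch.
    let processing = async {
        sleep(Duration::from_secs(30)).await;
        "processed"
    };

    // Give up after 9 seconds so the caller never waits long enough to
    // consider this instance dead.
    match timeout(Duration::from_secs(9), processing).await {
        Ok(result) => println!("{result}"),
        Err(_elapsed) => println!("activity considered broken, moving on"),
    }
}
```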
/// Convert the data to json and turn it into an HTTP Response with the correct ActivityPub

View File

@ -29,7 +29,9 @@ pub(crate) mod mentions;
pub mod objects;
pub mod protocol;
pub const FEDERATION_HTTP_FETCH_LIMIT: u32 = 50;
/// Maximum number of outgoing HTTP requests to fetch a single object. Needs to be high enough
/// to fetch a new community with posts, moderators and featured posts.
pub const FEDERATION_HTTP_FETCH_LIMIT: u32 = 100;
/// Only include a basic context to save space and bandwidth. The main context is hosted statically
/// on join-lemmy.org. Include activitystreams explicitly for better compat, but this could

View File

@ -28,6 +28,7 @@ use lemmy_api_common::{
},
};
use lemmy_db_schema::{
sensitive::SensitiveString,
source::{
activity::ActorType,
actor_language::CommunityLanguage,
@ -213,7 +214,7 @@ impl Actor for ApubCommunity {
}
fn private_key_pem(&self) -> Option<String> {
self.private_key.clone()
self.private_key.clone().map(SensitiveString::into_inner)
}
fn inbox(&self) -> Url {

View File

@ -29,6 +29,7 @@ use lemmy_api_common::{
};
use lemmy_db_schema::{
newtypes::InstanceId,
sensitive::SensitiveString,
source::{
activity::ActorType,
actor_language::SiteLanguage,
@ -100,7 +101,7 @@ impl Object for ApubSite {
kind: ApplicationType::Application,
id: self.id().into(),
name: self.name.clone(),
preferred_username: data.domain().to_string(),
preferred_username: Some(data.domain().to_string()),
content: self.sidebar.as_ref().map(|d| markdown_to_html(d)),
source: self.sidebar.clone().map(Source::new),
summary: self.description.clone(),
@ -187,7 +188,7 @@ impl Actor for ApubSite {
}
fn private_key_pem(&self) -> Option<String> {
self.private_key.clone()
self.private_key.clone().map(SensitiveString::into_inner)
}
fn inbox(&self) -> Url {

View File

@ -30,6 +30,7 @@ use lemmy_api_common::{
},
};
use lemmy_db_schema::{
sensitive::SensitiveString,
source::{
activity::ActorType,
local_site::LocalSite,
@ -200,7 +201,7 @@ impl Actor for ApubPerson {
}
fn private_key_pem(&self) -> Option<String> {
self.private_key.clone()
self.private_key.clone().map(SensitiveString::into_inner)
}
fn inbox(&self) -> Url {

View File

@ -25,18 +25,12 @@ use html2text::{from_read_with_decorator, render::text_renderer::TrivialDecorato
use lemmy_api_common::{
context::LemmyContext,
request::generate_post_link_metadata,
utils::{
get_url_blocklist,
local_site_opt_to_slur_regex,
process_markdown_opt,
proxy_image_link_opt_apub,
},
utils::{get_url_blocklist, local_site_opt_to_slur_regex, process_markdown_opt},
};
use lemmy_db_schema::{
source::{
community::Community,
local_site::LocalSite,
moderator::{ModLockPost, ModLockPostForm},
person::Person,
post::{Post, PostInsertForm, PostUpdateForm},
},
@ -46,6 +40,7 @@ use lemmy_db_schema::{
use lemmy_db_views_actor::structs::CommunityModeratorView;
use lemmy_utils::{
error::{LemmyError, LemmyErrorType, LemmyResult},
spawn_try_task,
utils::{markdown::markdown_to_html, slurs::check_slurs_opt, validation::check_url_scheme},
};
use std::ops::Deref;
@ -147,7 +142,6 @@ impl Object for ApubPost {
source: self.body.clone().map(Source::new),
attachment,
image: self.thumbnail_url.clone().map(ImageObject::new),
comments_enabled: Some(!self.locked),
sensitive: Some(self.nsfw),
language,
published: Some(self.published),
@ -165,12 +159,8 @@ impl Object for ApubPost {
expected_domain: &Url,
context: &Data<Self::DataType>,
) -> LemmyResult<()> {
// We can't verify the domain in case of mod action, because the mod may be on a different
// instance from the post author.
if !page.is_mod_action(context).await? {
verify_domains_match(page.id.inner(), expected_domain)?;
verify_is_remote_object(&page.id, context)?;
};
verify_domains_match(page.id.inner(), expected_domain)?;
verify_is_remote_object(&page.id, context)?;
let community = page.community(context).await?;
check_apub_id_valid_with_strictness(page.id.inner(), community.local, context).await?;
@ -218,84 +208,55 @@ impl Object for ApubPost {
name = name.chars().take(MAX_TITLE_LENGTH).collect();
}
// read existing, local post if any (for generating mod log)
let old_post = page.id.dereference_local(context).await;
let first_attachment = page.attachment.first();
let local_site = LocalSite::read(&mut context.pool()).await.ok();
let form = if !page.is_mod_action(context).await? {
let url = if let Some(attachment) = first_attachment.cloned() {
Some(attachment.url())
} else if page.kind == PageType::Video {
// we cant display videos directly, so insert a link to external video page
Some(page.id.inner().clone())
} else {
None
};
check_url_scheme(&url)?;
let alt_text = first_attachment.cloned().and_then(Attachment::alt_text);
let url = proxy_image_link_opt_apub(url, context).await?;
let slur_regex = &local_site_opt_to_slur_regex(&local_site);
let url_blocklist = get_url_blocklist(context).await?;
let body = read_from_string_or_source_opt(&page.content, &page.media_type, &page.source);
let body = process_markdown_opt(&body, slur_regex, &url_blocklist, context).await?;
let language_id =
LanguageTag::to_language_id_single(page.language, &mut context.pool()).await?;
PostInsertForm::builder()
.name(name)
.url(url.map(Into::into))
.body(body)
.alt_text(alt_text)
.creator_id(creator.id)
.community_id(community.id)
.locked(page.comments_enabled.map(|e| !e))
.published(page.published.map(Into::into))
.updated(page.updated.map(Into::into))
.deleted(Some(false))
.nsfw(page.sensitive)
.ap_id(Some(page.id.clone().into()))
.local(Some(false))
.language_id(language_id)
.build()
let url = if let Some(attachment) = first_attachment.cloned() {
Some(attachment.url())
} else if page.kind == PageType::Video {
// we can't display videos directly, so insert a link to the external video page
Some(page.id.inner().clone())
} else {
// if is mod action, only update locked/stickied fields, nothing else
PostInsertForm::builder()
.name(name)
.creator_id(creator.id)
.community_id(community.id)
.ap_id(Some(page.id.clone().into()))
.locked(page.comments_enabled.map(|e| !e))
.updated(page.updated.map(Into::into))
.build()
None
};
check_url_scheme(&url)?;
let alt_text = first_attachment.cloned().and_then(Attachment::alt_text);
let slur_regex = &local_site_opt_to_slur_regex(&local_site);
let url_blocklist = get_url_blocklist(context).await?;
let body = read_from_string_or_source_opt(&page.content, &page.media_type, &page.source);
let body = process_markdown_opt(&body, slur_regex, &url_blocklist, context).await?;
let language_id =
LanguageTag::to_language_id_single(page.language, &mut context.pool()).await?;
let form = PostInsertForm::builder()
.name(name)
.url(url.map(Into::into))
.body(body)
.alt_text(alt_text)
.creator_id(creator.id)
.community_id(community.id)
.published(page.published.map(Into::into))
.updated(page.updated.map(Into::into))
.deleted(Some(false))
.nsfw(page.sensitive)
.ap_id(Some(page.id.clone().into()))
.local(Some(false))
.language_id(language_id)
.build();
let timestamp = page.updated.or(page.published).unwrap_or_else(naive_now);
let post = Post::insert_apub(&mut context.pool(), timestamp, &form).await?;
let post_ = post.clone();
let context_ = context.reset_request_count();
generate_post_link_metadata(
post.clone(),
None,
page.image.map(|i| i.url),
|_| None,
local_site,
context.reset_request_count(),
);
// write mod log entry for lock
if Page::is_locked_changed(&old_post, &page.comments_enabled) {
let form = ModLockPostForm {
mod_person_id: creator.id,
post_id: post.id,
locked: Some(post.locked),
};
ModLockPost::create(&mut context.pool(), &form).await?;
}
// Generates a post thumbnail in a background task, because some sites can be very slow to
// respond.
spawn_try_task(async move {
generate_post_link_metadata(post_, None, |_| None, local_site, context_).await
});
Ok(post.into())
}

View File

@ -23,6 +23,7 @@ pub struct DeleteUser {
#[serde(deserialize_with = "deserialize_one_or_many", default)]
#[serde(skip_serializing_if = "Vec::is_empty")]
pub(crate) cc: Vec<Url>,
/// Nonstandard field. If present, all content from the user should be deleted along with the account
/// Nonstandard field. If present, all content from the user should be deleted along with the
/// account
pub(crate) remove_data: Option<bool>,
}

View File

@ -96,4 +96,10 @@ mod tests {
test_json::<Report>("assets/mbin/activities/flag.json")?;
Ok(())
}
#[test]
fn test_parse_wordpress_activities() -> LemmyResult<()> {
test_json::<AnnounceActivity>("assets/wordpress/activities/announce.json")?;
Ok(())
}
}

View File

@ -22,7 +22,7 @@ pub struct Instance {
/// site name
pub(crate) name: String,
/// instance domain, necessary for mastodon authorized fetch
pub(crate) preferred_username: String,
pub(crate) preferred_username: Option<String>,
pub(crate) inbox: Url,
/// mandatory field in activitypub, lemmy currently serves an empty outbox
pub(crate) outbox: Url,

View File

@ -191,6 +191,14 @@ mod tests {
Ok(())
}
#[test]
fn test_parse_object_discourse() -> LemmyResult<()> {
test_json::<Group>("assets/discourse/objects/group.json")?;
test_json::<Page>("assets/discourse/objects/page.json")?;
test_json::<Person>("assets/discourse/objects/person.json")?;
Ok(())
}
#[test]
fn test_parse_object_nodebb() -> LemmyResult<()> {
test_json::<Group>("assets/nodebb/objects/group.json")?;
@ -198,4 +206,13 @@ mod tests {
test_json::<Person>("assets/nodebb/objects/person.json")?;
Ok(())
}
#[test]
fn test_parse_object_wordpress() -> LemmyResult<()> {
test_json::<Group>("assets/wordpress/objects/group.json")?;
test_json::<Page>("assets/wordpress/objects/page.json")?;
test_json::<Person>("assets/wordpress/objects/person.json")?;
test_json::<Note>("assets/wordpress/objects/note.json")?;
Ok(())
}
}

View File

@ -42,7 +42,7 @@ pub struct Page {
pub(crate) kind: PageType,
pub(crate) id: ObjectId<ApubPost>,
pub(crate) attributed_to: AttributedTo,
#[serde(deserialize_with = "deserialize_one_or_many")]
#[serde(deserialize_with = "deserialize_one_or_many", default)]
pub(crate) to: Vec<Url>,
// If there is an inReplyTo field, this is actually a comment and must not be parsed
#[serde(deserialize_with = "deserialize_not_present", default)]
@ -60,7 +60,6 @@ pub struct Page {
#[serde(default)]
pub(crate) attachment: Vec<Attachment>,
pub(crate) image: Option<ImageObject>,
pub(crate) comments_enabled: Option<bool>,
pub(crate) sensitive: Option<bool>,
pub(crate) published: Option<DateTime<Utc>>,
pub(crate) updated: Option<DateTime<Utc>>,
@ -156,28 +155,6 @@ pub enum HashtagType {
}
impl Page {
/// Only mods can change the post's locked status. So if it is changed from the default value,
/// it is a mod action and needs to be verified as such.
///
/// Locked needs to be false on a newly created post (verified in [[CreatePost]].
pub(crate) async fn is_mod_action(&self, context: &Data<LemmyContext>) -> LemmyResult<bool> {
let old_post = self.id.clone().dereference_local(context).await;
Ok(Page::is_locked_changed(&old_post, &self.comments_enabled))
}
pub(crate) fn is_locked_changed<E>(
old_post: &Result<ApubPost, E>,
new_comments_enabled: &Option<bool>,
) -> bool {
if let Some(new_comments_enabled) = new_comments_enabled {
if let Ok(old_post) = old_post {
return new_comments_enabled != &!old_post.locked;
}
}
false
}
pub(crate) fn creator(&self) -> LemmyResult<ObjectId<ApubPerson>> {
match &self.attributed_to {
AttributedTo::Lemmy(l) => Ok(l.clone()),
@ -233,6 +210,10 @@ impl ActivityHandler for Page {
#[async_trait::async_trait]
impl InCommunity for Page {
async fn community(&self, context: &Data<LemmyContext>) -> LemmyResult<ApubCommunity> {
if let Some(audience) = &self.audience {
return audience.dereference(context).await;
}
let community = match &self.attributed_to {
AttributedTo::Lemmy(_) => {
let mut iter = self.to.iter().merge(self.cc.iter());
@ -243,7 +224,7 @@ impl InCommunity for Page {
break c;
}
} else {
Err(LemmyErrorType::NoCommunityFoundInCc)?
Err(LemmyErrorType::CouldntFindCommunity)?;
}
}
}
@ -251,11 +232,12 @@ impl InCommunity for Page {
p.iter()
.find(|a| a.kind == PersonOrGroupType::Group)
.map(|a| ObjectId::<ApubCommunity>::from(a.id.clone().into_inner()))
.ok_or(LemmyErrorType::PageDoesNotSpecifyGroup)?
.ok_or(LemmyErrorType::CouldntFindCommunity)?
.dereference(context)
.await?
}
};
if let Some(audience) = &self.audience {
verify_community_matches(audience, community.actor_id.clone())?;
}

View File

@ -1,5 +1,6 @@
[package]
name = "lemmy_db_perf"
publish = false
version.workspace = true
edition.workspace = true
description.workspace = true

View File

@ -132,7 +132,8 @@ async fn try_main() -> LemmyResult<()> {
// Make sure the println above shows the correct amount
assert_eq!(num_inserted_posts, num_posts as usize);
// Manually trigger and wait for a statistics update to ensure consistent and high amount of accuracy in the statistics used for query planning
// Manually trigger and wait for a statistics update to ensure consistently high accuracy in
// the statistics used for query planning
println!("🧮 updating database statistics");
conn.batch_execute("ANALYZE;").await?;

View File

@ -14,10 +14,12 @@ use diesel::{
/// Generates a series of rows for insertion.
///
/// An inclusive range is created from `start` and `stop`. A row for each number is generated using `selection`, which can be a tuple.
/// [`current_value`] is an expression that gets the current value.
/// An inclusive range is created from `start` and `stop`. A row for each number is generated using
/// `selection`, which can be a tuple. [`current_value`] is an expression that gets the current
/// value.
///
/// For example, if there's a `numbers` table with a `number` column, this inserts all numbers from 1 to 10 in a single statement:
/// For example, if there's a `numbers` table with a `number` column, this inserts all numbers from
/// 1 to 10 in a single statement:
///
/// ```
/// dsl::insert_into(numbers::table)

View File

@ -70,7 +70,7 @@ diesel_ltree = { workspace = true, optional = true }
typed-builder = { workspace = true }
async-trait = { workspace = true }
tracing = { workspace = true }
deadpool = { version = "0.10.0", features = ["rt_tokio_1"], optional = true }
deadpool = { version = "0.12.1", features = ["rt_tokio_1"], optional = true }
ts-rs = { workspace = true, optional = true }
futures-util = { workspace = true }
tokio = { workspace = true, optional = true }

View File

@ -5,6 +5,12 @@
-- (even if only other columns are updated) because triggers can run after the deletion of referenced rows and
-- before the automatic deletion of the row that references it. This is not a problem for insert or delete.
--
-- After a row update begins, a concurrent update on the same row can't begin until the whole
-- transaction that contains the first update is finished. To reduce this locking, statements in
-- triggers should be ordered based on the likelihood of concurrent writers. For example, updating
-- site_aggregates should be done last because the same row is updated for all local stuff. If
-- it were not last, then the locking period for concurrent writers would extend to include the
-- time consumed by statements that come after.
--
--
-- Create triggers for both post and comments
@ -38,16 +44,18 @@ BEGIN
(thing_like).thing_id, coalesce(sum(count_diff) FILTER (WHERE (thing_like).score = 1), 0) AS upvotes, coalesce(sum(count_diff) FILTER (WHERE (thing_like).score != 1), 0) AS downvotes FROM select_old_and_new_rows AS old_and_new_rows GROUP BY (thing_like).thing_id) AS diff
WHERE
a.thing_id = diff.thing_id
RETURNING
r.creator_id_from_thing_aggregates (a.*) AS creator_id, diff.upvotes - diff.downvotes AS score)
UPDATE
person_aggregates AS a
SET
thing_score = a.thing_score + diff.score FROM (
SELECT
creator_id, sum(score) AS score FROM thing_diff GROUP BY creator_id) AS diff
WHERE
a.person_id = diff.creator_id;
AND (diff.upvotes, diff.downvotes) != (0, 0)
RETURNING
r.creator_id_from_thing_aggregates (a.*) AS creator_id, diff.upvotes - diff.downvotes AS score)
UPDATE
person_aggregates AS a
SET
thing_score = a.thing_score + diff.score FROM (
SELECT
creator_id, sum(score) AS score FROM thing_diff GROUP BY creator_id) AS diff
WHERE
a.person_id = diff.creator_id
AND diff.score != 0;
RETURN NULL;
END;
$$);
@ -62,6 +70,21 @@ CALL r.post_or_comment ('post');
CALL r.post_or_comment ('comment');
-- Create triggers that update counts in parent aggregates
CREATE FUNCTION r.parent_comment_ids (path ltree)
RETURNS SETOF int
LANGUAGE sql
IMMUTABLE parallel safe
BEGIN
ATOMIC
SELECT
comment_id::int
FROM
string_to_table (ltree2text (path), '.') AS comment_id
-- Skip first and last
LIMIT (nlevel (path) - 2) OFFSET 1;
END;
CALL r.create_triggers ('comment', $$
BEGIN
UPDATE
@ -76,60 +99,84 @@ BEGIN
r.is_counted (comment)
GROUP BY (comment).creator_id) AS diff
WHERE
a.person_id = diff.creator_id;
a.person_id = diff.creator_id
AND diff.comment_count != 0;
UPDATE
site_aggregates AS a
comment_aggregates AS a
SET
comments = a.comments + diff.comments
child_count = a.child_count + diff.child_count
FROM (
SELECT
coalesce(sum(count_diff), 0) AS comments
FROM
select_old_and_new_rows AS old_and_new_rows
WHERE
r.is_counted (comment)
AND (comment).local) AS diff;
parent_id,
coalesce(sum(count_diff), 0) AS child_count
FROM (
-- For each inserted or deleted comment, this outputs 1 row for each parent comment.
-- For example, this:
--
-- count_diff | (comment).path
-- ------------+----------------
-- 1 | 0.5.6.7
-- 1 | 0.5.6.7.8
--
-- becomes this:
--
-- count_diff | parent_id
-- ------------+-----------
-- 1 | 5
-- 1 | 6
-- 1 | 5
-- 1 | 6
-- 1 | 7
SELECT
count_diff,
parent_id
FROM
select_old_and_new_rows AS old_and_new_rows,
LATERAL r.parent_comment_ids ((comment).path) AS parent_id) AS expanded_old_and_new_rows
GROUP BY
parent_id) AS diff
WHERE
a.comment_id = diff.parent_id
AND diff.child_count != 0;
WITH post_diff AS (
UPDATE
post_aggregates AS a
SET
comments = a.comments + diff.comments,
newest_comment_time = GREATEST (a.newest_comment_time, (
SELECT
published
FROM select_new_rows AS new_comment
WHERE
a.post_id = new_comment.post_id ORDER BY published DESC LIMIT 1)),
newest_comment_time_necro = GREATEST (a.newest_comment_time_necro, (
SELECT
published
FROM select_new_rows AS new_comment
WHERE
a.post_id = new_comment.post_id
-- Ignore comments from the post's creator
AND a.creator_id != new_comment.creator_id
-- Ignore comments on old posts
AND a.published > (new_comment.published - '2 days'::interval)
ORDER BY published DESC LIMIT 1))
newest_comment_time = GREATEST (a.newest_comment_time, diff.newest_comment_time),
newest_comment_time_necro = GREATEST (a.newest_comment_time_necro, diff.newest_comment_time_necro)
FROM (
SELECT
(comment).post_id,
coalesce(sum(count_diff), 0) AS comments
post.id AS post_id,
coalesce(sum(count_diff), 0) AS comments,
-- Old rows are excluded using `count_diff = 1`
max((comment).published) FILTER (WHERE count_diff = 1) AS newest_comment_time,
max((comment).published) FILTER (WHERE count_diff = 1
-- Ignore comments from the post's creator
AND post.creator_id != (comment).creator_id
-- Ignore comments on old posts
AND post.published > ((comment).published - '2 days'::interval)) AS newest_comment_time_necro,
r.is_counted (post.*) AS include_in_community_aggregates
FROM
select_old_and_new_rows AS old_and_new_rows
LEFT JOIN post ON post.id = (comment).post_id
WHERE
r.is_counted (comment)
GROUP BY
(comment).post_id) AS diff
LEFT JOIN post ON post.id = diff.post_id
post.id) AS diff
WHERE
a.post_id = diff.post_id
AND (diff.comments,
GREATEST (a.newest_comment_time, diff.newest_comment_time),
GREATEST (a.newest_comment_time_necro, diff.newest_comment_time_necro)) != (0,
a.newest_comment_time,
a.newest_comment_time_necro)
RETURNING
a.community_id,
diff.comments,
r.is_counted (post.*) AS include_in_community_aggregates)
diff.include_in_community_aggregates)
UPDATE
community_aggregates AS a
SET
@ -145,7 +192,23 @@ FROM (
GROUP BY
community_id) AS diff
WHERE
a.community_id = diff.community_id;
a.community_id = diff.community_id
AND diff.comments != 0;
UPDATE
site_aggregates AS a
SET
comments = a.comments + diff.comments
FROM (
SELECT
coalesce(sum(count_diff), 0) AS comments
FROM
select_old_and_new_rows AS old_and_new_rows
WHERE
r.is_counted (comment)
AND (comment).local) AS diff
WHERE
diff.comments != 0;
RETURN NULL;
@ -167,20 +230,8 @@ BEGIN
r.is_counted (post)
GROUP BY (post).creator_id) AS diff
WHERE
a.person_id = diff.creator_id;
UPDATE
site_aggregates AS a
SET
posts = a.posts + diff.posts
FROM (
SELECT
coalesce(sum(count_diff), 0) AS posts
FROM
select_old_and_new_rows AS old_and_new_rows
WHERE
r.is_counted (post)
AND (post).local) AS diff;
a.person_id = diff.creator_id
AND diff.post_count != 0;
UPDATE
community_aggregates AS a
@ -197,7 +248,23 @@ FROM (
GROUP BY
(post).community_id) AS diff
WHERE
a.community_id = diff.community_id;
a.community_id = diff.community_id
AND diff.posts != 0;
UPDATE
site_aggregates AS a
SET
posts = a.posts + diff.posts
FROM (
SELECT
coalesce(sum(count_diff), 0) AS posts
FROM
select_old_and_new_rows AS old_and_new_rows
WHERE
r.is_counted (post)
AND (post).local) AS diff
WHERE
diff.posts != 0;
RETURN NULL;
@ -217,7 +284,9 @@ BEGIN
FROM select_old_and_new_rows AS old_and_new_rows
WHERE
r.is_counted (community)
AND (community).local) AS diff;
AND (community).local) AS diff
WHERE
diff.communities != 0;
RETURN NULL;
@ -235,7 +304,9 @@ BEGIN
SELECT
coalesce(sum(count_diff), 0) AS users
FROM select_old_and_new_rows AS old_and_new_rows
WHERE (person).local) AS diff;
WHERE (person).local) AS diff
WHERE
diff.users != 0;
RETURN NULL;
@ -270,7 +341,8 @@ BEGIN
GROUP BY
old_post.community_id) AS diff
WHERE
a.community_id = diff.community_id;
a.community_id = diff.community_id
AND diff.comments != 0;
RETURN NULL;
END;
$$;
@ -296,7 +368,8 @@ BEGIN
LEFT JOIN community ON community.id = (community_follower).community_id
LEFT JOIN person ON person.id = (community_follower).person_id GROUP BY (community_follower).community_id) AS diff
WHERE
a.community_id = diff.community_id;
a.community_id = diff.community_id
AND (diff.subscribers, diff.subscribers_local) != (0, 0);
RETURN NULL;
@ -474,3 +547,24 @@ CREATE TRIGGER delete_follow
FOR EACH ROW
EXECUTE FUNCTION r.delete_follow_before_person ();
-- Triggers that change values before insert or update
CREATE FUNCTION r.comment_change_values ()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
DECLARE
id text = NEW.id::text;
BEGIN
-- Make `path` end with `id` if it doesn't already
IF NOT (NEW.path ~ ('*.' || id)::lquery) THEN
NEW.path = NEW.path || id;
END IF;
RETURN NEW;
END
$$;
CREATE TRIGGER change_values
BEFORE INSERT OR UPDATE ON comment
FOR EACH ROW
EXECUTE FUNCTION r.comment_change_values ();
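Taken together, `r.parent_comment_ids` and `r.comment_change_values` above replace the manual path and child-count bookkeeping that this diff deletes from `Comment::create` further down. A pure-Rust sketch of their semantics, as an illustration only; the real logic is the SQL above, and `ltree` label matching is simplified here to string operations:

```rust
// What r.comment_change_values does on insert: append the comment's own id
// to `path` unless the path already ends with it.
fn finalize_path(path: &str, id: i32) -> String {
    let suffix = format!(".{id}");
    if path.ends_with(&suffix) {
        path.to_string()
    } else {
        format!("{path}{suffix}")
    }
}

// What r.parent_comment_ids returns: every label except the first ("0")
// and the last (the comment's own id).
fn parent_comment_ids(path: &str) -> Vec<i32> {
    let labels: Vec<i32> = path.split('.').filter_map(|l| l.parse().ok()).collect();
    labels[1..labels.len() - 1].to_vec()
}

fn main() {
    assert_eq!(finalize_path("0.5.6", 7), "0.5.6.7");   // reply to comment 6
    assert_eq!(finalize_path("0", 8), "0.8");           // top-level comment
    assert_eq!(finalize_path("0.5.6.7", 7), "0.5.6.7"); // already finalized
    assert_eq!(parent_comment_ids("0.5.6.7"), vec![5, 6]);
    println!("ok");
}
```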

View File

@ -15,12 +15,7 @@ use crate::{
utils::{functions::coalesce, get_conn, naive_now, DbPool, DELETED_REPLACEMENT_TEXT},
};
use chrono::{DateTime, Utc};
use diesel::{
dsl::{insert_into, sql_query},
result::Error,
ExpressionMethods,
QueryDsl,
};
use diesel::{dsl::insert_into, result::Error, ExpressionMethods, QueryDsl};
use diesel_async::RunQueryDsl;
use diesel_ltree::Ltree;
use url::Url;
@ -72,81 +67,23 @@ impl Comment {
parent_path: Option<&Ltree>,
) -> Result<Comment, Error> {
let conn = &mut get_conn(pool).await?;
let comment_form = (comment_form, parent_path.map(|p| comment::path.eq(p)));
conn
.build_transaction()
.run(|conn| {
Box::pin(async move {
// Insert, to get the id
let inserted_comment = if let Some(timestamp) = timestamp {
insert_into(comment::table)
.values(comment_form)
.on_conflict(comment::ap_id)
.filter_target(coalesce(comment::updated, comment::published).lt(timestamp))
.do_update()
.set(comment_form)
.get_result::<Self>(conn)
.await?
} else {
insert_into(comment::table)
.values(comment_form)
.get_result::<Self>(conn)
.await?
};
let comment_id = inserted_comment.id;
// You need to update the ltree column
let ltree = Ltree(if let Some(parent_path) = parent_path {
// The previous parent will already have 0 in it
// Append this comment id
format!("{}.{}", parent_path.0, comment_id)
} else {
// '0' is always the first path, append to that
format!("{}.{}", 0, comment_id)
});
let updated_comment = diesel::update(comment::table.find(comment_id))
.set(comment::path.eq(ltree))
.get_result::<Self>(conn)
.await?;
// Update the child count for the parent comment_aggregates
// You could do this with a trigger, but since you have to do this manually anyway,
// you can just have it here
if let Some(parent_path) = parent_path {
// You have to update counts for all parents, not just the immediate one
// TODO if the performance of this is terrible, it might be better to do this as part of a
// scheduled query... although the counts would often be wrong.
//
// The child_count query for reference:
// select c.id, c.path, count(c2.id) as child_count from comment c
// left join comment c2 on c2.path <@ c.path and c2.path != c.path
// group by c.id
let parent_id = parent_path.0.split('.').nth(1);
if let Some(parent_id) = parent_id {
let top_parent = format!("0.{}", parent_id);
let update_child_count_stmt = format!(
"
update comment_aggregates ca set child_count = c.child_count
from (
select c.id, c.path, count(c2.id) as child_count from comment c
join comment c2 on c2.path <@ c.path and c2.path != c.path
and c.path <@ '{top_parent}'
group by c.id
) as c
where ca.comment_id = c.id"
);
sql_query(update_child_count_stmt).execute(conn).await?;
}
}
Ok(updated_comment)
}) as _
})
.await
if let Some(timestamp) = timestamp {
insert_into(comment::table)
.values(comment_form)
.on_conflict(comment::ap_id)
.filter_target(coalesce(comment::updated, comment::published).lt(timestamp))
.do_update()
.set(comment_form)
.get_result::<Self>(conn)
.await
} else {
insert_into(comment::table)
.values(comment_form)
.get_result::<Self>(conn)
.await
}
}
pub async fn read_from_apub_id(

View File

@ -87,117 +87,3 @@ impl CommentReply {
.optional()
}
}
#[cfg(test)]
#[allow(clippy::unwrap_used)]
#[allow(clippy::indexing_slicing)]
mod tests {
use crate::{
source::{
comment::{Comment, CommentInsertForm},
comment_reply::{CommentReply, CommentReplyInsertForm, CommentReplyUpdateForm},
community::{Community, CommunityInsertForm},
instance::Instance,
person::{Person, PersonInsertForm},
post::{Post, PostInsertForm},
},
traits::Crud,
utils::build_db_pool_for_tests,
};
use pretty_assertions::assert_eq;
use serial_test::serial;
#[tokio::test]
#[serial]
async fn test_crud() {
let pool = &build_db_pool_for_tests().await;
let pool = &mut pool.into();
let inserted_instance = Instance::read_or_create(pool, "my_domain.tld".to_string())
.await
.unwrap();
let new_person = PersonInsertForm::builder()
.name("terrylake".into())
.public_key("pubkey".to_string())
.instance_id(inserted_instance.id)
.build();
let inserted_person = Person::create(pool, &new_person).await.unwrap();
let recipient_form = PersonInsertForm::builder()
.name("terrylakes recipient".into())
.public_key("pubkey".to_string())
.instance_id(inserted_instance.id)
.build();
let inserted_recipient = Person::create(pool, &recipient_form).await.unwrap();
let new_community = CommunityInsertForm::builder()
.name("test community lake".to_string())
.title("nada".to_owned())
.public_key("pubkey".to_string())
.instance_id(inserted_instance.id)
.build();
let inserted_community = Community::create(pool, &new_community).await.unwrap();
let new_post = PostInsertForm::builder()
.name("A test post".into())
.creator_id(inserted_person.id)
.community_id(inserted_community.id)
.build();
let inserted_post = Post::create(pool, &new_post).await.unwrap();
let comment_form = CommentInsertForm::builder()
.content("A test comment".into())
.creator_id(inserted_person.id)
.post_id(inserted_post.id)
.build();
let inserted_comment = Comment::create(pool, &comment_form, None).await.unwrap();
let comment_reply_form = CommentReplyInsertForm {
recipient_id: inserted_recipient.id,
comment_id: inserted_comment.id,
read: None,
};
let inserted_reply = CommentReply::create(pool, &comment_reply_form)
.await
.unwrap();
let expected_reply = CommentReply {
id: inserted_reply.id,
recipient_id: inserted_reply.recipient_id,
comment_id: inserted_reply.comment_id,
read: false,
published: inserted_reply.published,
};
let read_reply = CommentReply::read(pool, inserted_reply.id)
.await
.unwrap()
.unwrap();
let comment_reply_update_form = CommentReplyUpdateForm { read: Some(false) };
let updated_reply = CommentReply::update(pool, inserted_reply.id, &comment_reply_update_form)
.await
.unwrap();
Comment::delete(pool, inserted_comment.id).await.unwrap();
Post::delete(pool, inserted_post.id).await.unwrap();
Community::delete(pool, inserted_community.id)
.await
.unwrap();
Person::delete(pool, inserted_person.id).await.unwrap();
Person::delete(pool, inserted_recipient.id).await.unwrap();
Instance::delete(pool, inserted_instance.id).await.unwrap();
assert_eq!(expected_reply, read_reply);
assert_eq!(expected_reply, inserted_reply);
assert_eq!(expected_reply, updated_reply);
}
}

View File

@ -141,7 +141,8 @@ impl Community {
Ok(community_)
}
/// Get the community which has a given moderators or featured url, also return the collection type
/// Get the community which has a given moderators or featured URL, and also return the
/// collection type
pub async fn get_by_collection_url(
pool: &mut DbPool<'_>,
url: &DbUrl,

View File

@ -94,11 +94,15 @@ impl Instance {
.await
}
#[cfg(test)]
/// Only for use in tests
pub async fn delete_all(pool: &mut DbPool<'_>) -> Result<usize, Error> {
let conn = &mut get_conn(pool).await?;
diesel::delete(federation_queue_state::table)
.execute(conn)
.await?;
diesel::delete(instance::table).execute(conn).await
}
pub async fn allowlist(pool: &mut DbPool<'_>) -> Result<Vec<Self>, Error> {
let conn = &mut get_conn(pool).await?;
instance::table
@ -117,15 +121,15 @@ impl Instance {
.await
}
/// returns a list of all instances, each with a flag of whether the instance is allowed or not and dead or not
/// ordered by id
/// returns a list of all instances, each with a flag of whether the instance is allowed or not
/// and dead or not, ordered by id
pub async fn read_federated_with_blocked_and_dead(
pool: &mut DbPool<'_>,
) -> Result<Vec<(Self, bool, bool)>, Error> {
let conn = &mut get_conn(pool).await?;
let is_dead_expr = coalesce(instance::updated, instance::published).lt(now() - 3.days());
// this needs to be done in two steps because the meaning of the "blocked" column depends on the existence
// of any value at all in the allowlist. (so a normal join wouldn't work)
// this needs to be done in two steps because the meaning of the "blocked" column depends on the
// existence of any value at all in the allowlist. (so a normal join wouldn't work)
let use_allowlist = federation_allowlist::table
.select(count_star().gt(0))
.get_result::<bool>(conn)

View File

@ -55,12 +55,17 @@ impl LocalUser {
pool: &mut DbPool<'_>,
local_user_id: LocalUserId,
form: &LocalUserUpdateForm,
) -> Result<LocalUser, Error> {
) -> Result<usize, Error> {
let conn = &mut get_conn(pool).await?;
diesel::update(local_user::table.find(local_user_id))
let res = diesel::update(local_user::table.find(local_user_id))
.set(form)
.get_result::<Self>(conn)
.await
.execute(conn)
.await;
// Diesel will throw an error if the query is all Nones (not updating anything); ignore this.
match res {
Err(Error::QueryBuilderError(_)) => Ok(0),
other => other,
}
}
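A hedged sketch of what this change buys callers: an all-`None` form no longer bubbles up Diesel's `QueryBuilderError` but reports zero affected rows. `LocalUserUpdateForm::default()` and the `show_bot_accounts` field are assumptions used for illustration:

```rust
// No fields set: Diesel would refuse to build the UPDATE; we now get Ok(0).
let noop = LocalUserUpdateForm::default(); // assumes a Default impl
let rows = LocalUser::update(pool, local_user_id, &noop).await?;
assert_eq!(rows, 0);

// One real change: the affected-row count (not the updated row) comes back.
let form = LocalUserUpdateForm {
    show_bot_accounts: Some(false), // field name assumed for illustration
    ..Default::default()
};
let rows = LocalUser::update(pool, local_user_id, &form).await?;
assert_eq!(rows, 1);
```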
pub async fn delete(pool: &mut DbPool<'_>, id: LocalUserId) -> Result<usize, Error> {

View File

@ -1,81 +1,45 @@
use crate::{
diesel::OptionalExtension,
newtypes::LocalUserId,
schema::password_reset_request::dsl::{local_user_id, password_reset_request, published, token},
schema::password_reset_request::dsl::{password_reset_request, published, token},
source::password_reset_request::{PasswordResetRequest, PasswordResetRequestForm},
traits::Crud,
utils::{get_conn, DbPool},
};
use diesel::{
delete,
dsl::{insert_into, now, IntervalDsl},
result::Error,
sql_types::Timestamptz,
ExpressionMethods,
IntoSql,
QueryDsl,
};
use diesel_async::RunQueryDsl;
#[async_trait]
impl Crud for PasswordResetRequest {
type InsertForm = PasswordResetRequestForm;
type UpdateForm = PasswordResetRequestForm;
type IdType = i32;
async fn create(pool: &mut DbPool<'_>, form: &PasswordResetRequestForm) -> Result<Self, Error> {
let conn = &mut get_conn(pool).await?;
insert_into(password_reset_request)
.values(form)
.get_result::<Self>(conn)
.await
}
async fn update(
pool: &mut DbPool<'_>,
password_reset_request_id: i32,
form: &PasswordResetRequestForm,
) -> Result<Self, Error> {
let conn = &mut get_conn(pool).await?;
diesel::update(password_reset_request.find(password_reset_request_id))
.set(form)
.get_result::<Self>(conn)
.await
}
}
impl PasswordResetRequest {
pub async fn create_token(
pub async fn create(
pool: &mut DbPool<'_>,
from_local_user_id: LocalUserId,
token_: String,
) -> Result<PasswordResetRequest, Error> {
let form = PasswordResetRequestForm {
local_user_id: from_local_user_id,
token: token_,
token: token_.into(),
};
Self::create(pool, &form).await
}
pub async fn read_from_token(pool: &mut DbPool<'_>, token_: &str) -> Result<Option<Self>, Error> {
let conn = &mut get_conn(pool).await?;
password_reset_request
insert_into(password_reset_request)
.values(form)
.get_result::<Self>(conn)
.await
}
pub async fn read_and_delete(pool: &mut DbPool<'_>, token_: &str) -> Result<Option<Self>, Error> {
let conn = &mut get_conn(pool).await?;
delete(password_reset_request)
.filter(token.eq(token_))
.filter(published.gt(now.into_sql::<Timestamptz>() - 1.days()))
.first(conn)
.await
.optional()
}
pub async fn get_recent_password_resets_count(
pool: &mut DbPool<'_>,
user_id: LocalUserId,
) -> Result<i64, Error> {
let conn = &mut get_conn(pool).await?;
password_reset_request
.filter(local_user_id.eq(user_id))
.filter(published.gt(now.into_sql::<Timestamptz>() - 1.days()))
.count()
.get_result(conn)
.await
.optional()
}
}
@ -94,62 +58,64 @@ mod tests {
traits::Crud,
utils::build_db_pool_for_tests,
};
use lemmy_utils::error::LemmyResult;
use pretty_assertions::assert_eq;
use serial_test::serial;
#[tokio::test]
#[serial]
async fn test_crud() {
async fn test_password_reset() -> LemmyResult<()> {
let pool = &build_db_pool_for_tests().await;
let pool = &mut pool.into();
let inserted_instance = Instance::read_or_create(pool, "my_domain.tld".to_string())
.await
.unwrap();
// Setup
let inserted_instance = Instance::read_or_create(pool, "my_domain.tld".to_string()).await?;
let new_person = PersonInsertForm::builder()
.name("thommy prw".into())
.public_key("pubkey".to_string())
.instance_id(inserted_instance.id)
.build();
let inserted_person = Person::create(pool, &new_person).await.unwrap();
let inserted_person = Person::create(pool, &new_person).await?;
let new_local_user = LocalUserInsertForm::builder()
.person_id(inserted_person.id)
.password_encrypted("pass".to_string())
.build();
let inserted_local_user = LocalUser::create(pool, &new_local_user, vec![]).await?;
let inserted_local_user = LocalUser::create(pool, &new_local_user, vec![])
.await
.unwrap();
// Create password reset token
let token = "nope";
let inserted_password_reset_request =
PasswordResetRequest::create_token(pool, inserted_local_user.id, token.to_string())
.await
.unwrap();
PasswordResetRequest::create(pool, inserted_local_user.id, token.to_string()).await?;
let expected_password_reset_request = PasswordResetRequest {
id: inserted_password_reset_request.id,
local_user_id: inserted_local_user.id,
token: token.to_string(),
published: inserted_password_reset_request.published,
};
let read_password_reset_request = PasswordResetRequest::read_from_token(pool, token)
.await
.unwrap()
// Read it and verify
let read_password_reset_request = PasswordResetRequest::read_and_delete(pool, token)
.await?
.unwrap();
let num_deleted = Person::delete(pool, inserted_person.id).await.unwrap();
Instance::delete(pool, inserted_instance.id).await.unwrap();
assert_eq!(expected_password_reset_request, read_password_reset_request);
assert_eq!(
expected_password_reset_request,
inserted_password_reset_request
inserted_password_reset_request.id,
read_password_reset_request.id
);
assert_eq!(
inserted_password_reset_request.local_user_id,
read_password_reset_request.local_user_id
);
assert_eq!(
inserted_password_reset_request.token,
read_password_reset_request.token
);
assert_eq!(
inserted_password_reset_request.published,
read_password_reset_request.published
);
// The same token cannot be reused
let read_password_reset_request = PasswordResetRequest::read_and_delete(pool, token).await?;
assert!(read_password_reset_request.is_none());
// Cleanup
let num_deleted = Person::delete(pool, inserted_person.id).await?;
Instance::delete(pool, inserted_instance.id).await?;
assert_eq!(1, num_deleted);
Ok(())
}
}

View File

@ -55,7 +55,8 @@ impl Crud for Person {
impl Person {
/// Update or insert the person.
///
/// This is necessary for federation, because Activitypub doesn't distinguish between these actions.
/// This is necessary for federation, because Activitypub doesn't distinguish between these
/// actions.
pub async fn upsert(pool: &mut DbPool<'_>, form: &PersonInsertForm) -> Result<Self, Error> {
let conn = &mut get_conn(pool).await?;
insert_into(person::table)

View File

@ -74,117 +74,3 @@ impl PersonMention {
.optional()
}
}
#[cfg(test)]
#[allow(clippy::unwrap_used)]
#[allow(clippy::indexing_slicing)]
mod tests {
use crate::{
source::{
comment::{Comment, CommentInsertForm},
community::{Community, CommunityInsertForm},
instance::Instance,
person::{Person, PersonInsertForm},
person_mention::{PersonMention, PersonMentionInsertForm, PersonMentionUpdateForm},
post::{Post, PostInsertForm},
},
traits::Crud,
utils::build_db_pool_for_tests,
};
use pretty_assertions::assert_eq;
use serial_test::serial;
#[tokio::test]
#[serial]
async fn test_crud() {
let pool = &build_db_pool_for_tests().await;
let pool = &mut pool.into();
let inserted_instance = Instance::read_or_create(pool, "my_domain.tld".to_string())
.await
.unwrap();
let new_person = PersonInsertForm::builder()
.name("terrylake".into())
.public_key("pubkey".to_string())
.instance_id(inserted_instance.id)
.build();
let inserted_person = Person::create(pool, &new_person).await.unwrap();
let recipient_form = PersonInsertForm::builder()
.name("terrylakes recipient".into())
.public_key("pubkey".to_string())
.instance_id(inserted_instance.id)
.build();
let inserted_recipient = Person::create(pool, &recipient_form).await.unwrap();
let new_community = CommunityInsertForm::builder()
.name("test community lake".to_string())
.title("nada".to_owned())
.public_key("pubkey".to_string())
.instance_id(inserted_instance.id)
.build();
let inserted_community = Community::create(pool, &new_community).await.unwrap();
let new_post = PostInsertForm::builder()
.name("A test post".into())
.creator_id(inserted_person.id)
.community_id(inserted_community.id)
.build();
let inserted_post = Post::create(pool, &new_post).await.unwrap();
let comment_form = CommentInsertForm::builder()
.content("A test comment".into())
.creator_id(inserted_person.id)
.post_id(inserted_post.id)
.build();
let inserted_comment = Comment::create(pool, &comment_form, None).await.unwrap();
let person_mention_form = PersonMentionInsertForm {
recipient_id: inserted_recipient.id,
comment_id: inserted_comment.id,
read: None,
};
let inserted_mention = PersonMention::create(pool, &person_mention_form)
.await
.unwrap();
let expected_mention = PersonMention {
id: inserted_mention.id,
recipient_id: inserted_mention.recipient_id,
comment_id: inserted_mention.comment_id,
read: false,
published: inserted_mention.published,
};
let read_mention = PersonMention::read(pool, inserted_mention.id)
.await
.unwrap()
.unwrap();
let person_mention_update_form = PersonMentionUpdateForm { read: Some(false) };
let updated_mention =
PersonMention::update(pool, inserted_mention.id, &person_mention_update_form)
.await
.unwrap();
Comment::delete(pool, inserted_comment.id).await.unwrap();
Post::delete(pool, inserted_post.id).await.unwrap();
Community::delete(pool, inserted_community.id)
.await
.unwrap();
Person::delete(pool, inserted_person.id).await.unwrap();
Person::delete(pool, inserted_recipient.id).await.unwrap();
Instance::delete(pool, inserted_instance.id).await.unwrap();
assert_eq!(expected_mention, read_mention);
assert_eq!(expected_mention, inserted_mention);
assert_eq!(expected_mention, updated_mention);
}
}

View File

@ -24,6 +24,7 @@ pub mod aggregates;
#[cfg(feature = "full")]
pub mod impls;
pub mod newtypes;
pub mod sensitive;
#[cfg(feature = "full")]
#[rustfmt::skip]
#[allow(clippy::wildcard_imports)]

View File

@ -127,11 +127,13 @@ pub struct LanguageId(pub i32);
/// The comment reply id.
pub struct CommentReplyId(i32);
#[derive(Debug, Copy, Clone, Hash, Eq, PartialEq, Serialize, Deserialize, Default)]
#[derive(
Debug, Copy, Clone, Hash, Eq, PartialEq, Serialize, Deserialize, Default, Ord, PartialOrd,
)]
#[cfg_attr(feature = "full", derive(DieselNewType, TS))]
#[cfg_attr(feature = "full", ts(export))]
/// The instance id.
pub struct InstanceId(i32);
pub struct InstanceId(pub i32);
#[derive(
Debug, Copy, Clone, Hash, Eq, PartialEq, Serialize, Deserialize, Default, PartialOrd, Ord,

View File

@ -6,18 +6,19 @@ use tracing::info;
const MIGRATIONS: EmbeddedMigrations = embed_migrations!();
/// This SQL code sets up the `r` schema, which contains things that can be safely dropped and replaced
/// instead of being changed using migrations. It may not create or modify things outside of the `r` schema
/// (indicated by `r.` before the name), unless a comment says otherwise.
/// This SQL code sets up the `r` schema, which contains things that can be safely dropped and
/// replaced instead of being changed using migrations. It may not create or modify things outside
/// of the `r` schema (indicated by `r.` before the name), unless a comment says otherwise.
///
/// Currently, this code is only run after the server starts and there's at least 1 pending migration
/// to run. This means every time you change something here, you must also create a migration (a blank
/// up.sql file works fine). This behavior will be removed when we implement a better way to avoid
/// useless schema updates and locks.
/// Currently, this code is only run after the server starts and there's at least 1 pending
/// migration to run. This means every time you change something here, you must also create a
/// migration (a blank up.sql file works fine). This behavior will be removed when we implement a
/// better way to avoid useless schema updates and locks.
///
/// If you add something that depends on something (such as a table) created in a new migration, then down.sql
/// must use `CASCADE` when dropping it. This doesn't need to be fixed in old migrations because the
/// "replaceable-schema" migration runs `DROP SCHEMA IF EXISTS r CASCADE` in down.sql.
/// If you add something that depends on something (such as a table) created in a new migration,
/// then down.sql must use `CASCADE` when dropping it. This doesn't need to be fixed in old
/// migrations because the "replaceable-schema" migration runs `DROP SCHEMA IF EXISTS r CASCADE` in
/// down.sql.
const REPLACEABLE_SCHEMA: &[&str] = &[
"DROP SCHEMA IF EXISTS r CASCADE;",
"CREATE SCHEMA r;",
@ -29,9 +30,10 @@ pub fn run(db_url: &str) -> Result<(), LemmyError> {
// Migrations don't support async connection
let mut conn = PgConnection::establish(db_url).with_context(|| "Error connecting to database")?;
// Run all pending migrations except for the newest one, then run the newest one in the same transaction
// as `REPLACEABLE_SCHEMA`. This code will become less hacky when the conditional setup of things in
// `REPLACEABLE_SCHEMA` is done without using the number of pending migrations.
// Run all pending migrations except for the newest one, then run the newest one in the same
// transaction as `REPLACEABLE_SCHEMA`. This code will become less hacky when the conditional
// setup of things in `REPLACEABLE_SCHEMA` is done without using the number of pending
// migrations.
info!("Running Database migrations (This may take a long time)...");
let migrations = conn
.pending_migrations(MIGRATIONS)
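
The comment reflowed above describes a specific ordering: every pending migration runs up front, except the newest, which runs in the same transaction as `REPLACEABLE_SCHEMA`. A hedged sketch of that pattern using `diesel_migrations` — not the project's actual implementation; `REPLACEABLE_SCHEMA_SQL` is a hypothetical stand-in for the joined statements in `REPLACEABLE_SCHEMA`:

```rust
use anyhow::Context;
use diesel::{connection::SimpleConnection, Connection, PgConnection};
use diesel_migrations::{embed_migrations, EmbeddedMigrations, MigrationHarness};

const MIGRATIONS: EmbeddedMigrations = embed_migrations!();
const REPLACEABLE_SCHEMA_SQL: &str = "DROP SCHEMA IF EXISTS r CASCADE; CREATE SCHEMA r;";

fn run(db_url: &str) -> anyhow::Result<()> {
    let mut conn =
        PgConnection::establish(db_url).with_context(|| "Error connecting to database")?;
    let pending = conn
        .pending_migrations(MIGRATIONS)
        .map_err(|e| anyhow::anyhow!(e))?;
    if let Some((newest, rest)) = pending.split_last() {
        // Run every pending migration except the newest one.
        for migration in rest {
            conn.run_migration(&**migration).map_err(|e| anyhow::anyhow!(e))?;
        }
        // Run the newest migration and the replaceable schema together, so a
        // failure leaves neither half applied.
        conn.transaction::<_, anyhow::Error, _>(|conn| {
            conn.run_migration(&**newest).map_err(|e| anyhow::anyhow!(e))?;
            conn.batch_execute(REPLACEABLE_SCHEMA_SQL)?;
            Ok(())
        })?;
    }
    Ok(())
}
```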

View File

@ -0,0 +1,57 @@
use serde::{Deserialize, Serialize};
use std::{fmt::Debug, ops::Deref};
#[cfg(feature = "full")]
use ts_rs::TS;
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Deserialize, Serialize, Default)]
#[cfg_attr(feature = "full", derive(DieselNewType))]
#[serde(transparent)]
pub struct SensitiveString(String);
impl SensitiveString {
pub fn into_inner(self) -> String {
self.0
}
}
impl Debug for SensitiveString {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("Sensitive").finish()
}
}
impl AsRef<[u8]> for SensitiveString {
fn as_ref(&self) -> &[u8] {
self.0.as_ref()
}
}
impl Deref for SensitiveString {
type Target = str;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl From<String> for SensitiveString {
fn from(t: String) -> Self {
SensitiveString(t)
}
}
#[cfg(feature = "full")]
impl TS for SensitiveString {
fn name() -> String {
"string".to_string()
}
fn name_with_type_args(_args: Vec<String>) -> String {
"string".to_string()
}
fn dependencies() -> Vec<ts_rs::Dependency> {
Vec::new()
}
fn transparent() -> bool {
true
}
}
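
Taken together, the new wrapper keeps secrets out of logs: `Debug` prints a redacted placeholder, while `#[serde(transparent)]` keeps the wire format a plain string. A small usage sketch against the API shown above, assuming the type is in scope:

```rust
// Usage sketch for the `SensitiveString` defined above.
fn main() {
    let token = SensitiveString::from("super-secret-jwt".to_string());

    // `Debug` is redacted, so accidental logging leaks nothing:
    assert_eq!(format!("{token:?}"), "Sensitive");

    // `Deref<Target = str>` still allows read access where it is needed:
    assert!(token.starts_with("super-"));

    // `into_inner` recovers the raw value explicitly:
    let raw: String = token.into_inner();
    assert_eq!(raw, "super-secret-jwt");
}
```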

View File

@ -41,7 +41,8 @@ pub struct Comment {
#[cfg(feature = "full")]
#[cfg_attr(feature = "full", serde(with = "LtreeDef"))]
#[cfg_attr(feature = "full", ts(type = "string"))]
/// The path / tree location of a comment, separated by dots, ending with the comment's id. Ex: 0.24.27
/// The path / tree location of a comment, separated by dots, ending with the comment's id. Ex:
/// 0.24.27
pub path: Ltree,
#[cfg(not(feature = "full"))]
pub path: String,
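
The reflowed doc comment describes `ltree` paths like `0.24.27`: a root marker followed by ancestor comment ids, ending with the comment's own id. An illustrative helper, not from the diff, showing how such a path splits into ids (the real code stores this as a Postgres `Ltree`):

```rust
// Split an ltree-style comment path ("0.24.27") into its numeric segments.
// The leading 0 is the root marker; the last segment is the comment's own id.
fn path_ids(path: &str) -> Vec<i32> {
    path.split('.')
        .filter_map(|seg| seg.parse().ok())
        .collect()
}

fn main() {
    let ids = path_ids("0.24.27");
    assert_eq!(ids, vec![0, 24, 27]);
    // The parent comment id is the next-to-last segment, if any.
    assert_eq!(ids.iter().rev().nth(1), Some(&24));
}
```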

View File

@ -2,6 +2,7 @@
use crate::schema::{community, community_follower, community_moderator, community_person_ban};
use crate::{
newtypes::{CommunityId, DbUrl, InstanceId, PersonId},
sensitive::SensitiveString,
source::placeholder_apub_url,
CommunityVisibility,
};
@ -39,7 +40,7 @@ pub struct Community {
/// Whether the community is local.
pub local: bool,
#[serde(skip)]
pub private_key: Option<String>,
pub private_key: Option<SensitiveString>,
#[serde(skip)]
pub public_key: String,
#[serde(skip)]

View File

@ -22,7 +22,7 @@ use typed_builder::TypedBuilder;
diesel(belongs_to(crate::source::local_user::LocalUser))
)]
#[cfg_attr(feature = "full", diesel(check_for_backend(diesel::pg::Pg)))]
#[cfg_attr(feature = "full", diesel(primary_key(local_user_id)))]
#[cfg_attr(feature = "full", diesel(primary_key(pictrs_alias)))]
pub struct LocalImage {
pub local_user_id: Option<LocalUserId>,
pub pictrs_alias: String,
@ -41,9 +41,11 @@ pub struct LocalImageForm {
/// Stores all images which are hosted on remote domains. When attempting to proxy an image, it
/// is checked against this table to avoid Lemmy being used as a general purpose proxy.
#[derive(PartialEq, Eq, Debug, Clone)]
#[cfg_attr(feature = "full", derive(Queryable, Identifiable))]
#[skip_serializing_none]
#[derive(Clone, PartialEq, Eq, Debug, Serialize, Deserialize)]
#[cfg_attr(feature = "full", derive(Queryable, Selectable, Identifiable))]
#[cfg_attr(feature = "full", diesel(table_name = remote_image))]
#[cfg_attr(feature = "full", diesel(check_for_backend(diesel::pg::Pg)))]
pub struct RemoteImage {
pub id: i32,
pub link: DbUrl,
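
The hunk above switches `LocalImage`'s Diesel primary key from `local_user_id` (now optional) to `pictrs_alias`. A hedged sketch of what that enables, with an assumed minimal subset of the real table declaration: Diesel's `find()` resolves against the declared primary key, so lookups now go by alias.

```rust
use diesel::prelude::*;

// Assumed minimal subset of the `local_image` table, declared with
// `pictrs_alias` as the primary key to match the diff above.
diesel::table! {
    local_image (pictrs_alias) {
        local_user_id -> Nullable<Int4>,
        pictrs_alias -> Text,
    }
}

// `find()` queries by the declared primary key, i.e. by alias.
fn load_by_alias(
    conn: &mut PgConnection,
    alias: &str,
) -> QueryResult<(Option<i32>, String)> {
    local_image::table.find(alias).first(conn)
}
```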

View File

@ -2,6 +2,7 @@
use crate::schema::local_user;
use crate::{
newtypes::{LocalUserId, PersonId},
sensitive::SensitiveString,
ListingType,
PostListingMode,
SortType,
@ -24,8 +25,8 @@ pub struct LocalUser {
/// The person_id for the local user.
pub person_id: PersonId,
#[serde(skip)]
pub password_encrypted: String,
pub email: Option<String>,
pub password_encrypted: SensitiveString,
pub email: Option<SensitiveString>,
/// Whether to show NSFW content.
pub show_nsfw: bool,
pub theme: String,
@ -47,7 +48,7 @@ pub struct LocalUser {
/// Whether their registration application has been accepted.
pub accepted_application: bool,
#[serde(skip)]
pub totp_2fa_secret: Option<String>,
pub totp_2fa_secret: Option<SensitiveString>,
/// Open links in a new tab.
pub open_links_in_new_tab: bool,
pub blur_nsfw: bool,
@ -61,7 +62,8 @@ pub struct LocalUser {
pub totp_2fa_enabled: bool,
/// Whether to allow keyboard navigation (for browsing and interacting with posts and comments).
pub enable_keyboard_navigation: bool,
/// Whether user avatars and inline images in the UI that are gifs should be allowed to play or should be paused
/// Whether user avatars and inline images in the UI that are gifs should be allowed to play or
/// should be paused
pub enable_animated_images: bool,
/// Whether to auto-collapse bot comments.
pub collapse_bot_comments: bool,

View File

@ -1,6 +1,6 @@
use crate::newtypes::LocalUserId;
#[cfg(feature = "full")]
use crate::schema::login_token;
use crate::{newtypes::LocalUserId, sensitive::SensitiveString};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_with::skip_serializing_none;
@ -18,7 +18,7 @@ use ts_rs::TS;
pub struct LoginToken {
/// Jwt token for this login
#[serde(skip)]
pub token: String,
pub token: SensitiveString,
pub user_id: LocalUserId,
/// Time of login
pub published: DateTime<Utc>,
@ -31,7 +31,7 @@ pub struct LoginToken {
#[cfg_attr(feature = "full", derive(Insertable, AsChangeset))]
#[cfg_attr(feature = "full", diesel(table_name = login_token))]
pub struct LoginTokenCreateForm {
pub token: String,
pub token: SensitiveString,
pub user_id: LocalUserId,
pub ip: Option<String>,
pub user_agent: Option<String>,

View File

@ -43,7 +43,8 @@ pub mod tagline;
/// Default value for columns like [community::Community.inbox_url] which are marked as serde(skip).
///
/// This is necessary so they can be successfully deserialized from API responses, even though the
/// value is not sent by Lemmy. Necessary for crates which rely on Rust API such as lemmy-stats-crawler.
/// value is not sent by Lemmy. Necessary for crates which rely on Rust API such as
/// lemmy-stats-crawler.
fn placeholder_apub_url() -> DbUrl {
DbUrl(Box::new(
Url::parse("http://example.com").expect("parse placeholder url"),
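
The doc comment reflowed above explains the need for a placeholder: a column hidden with `serde(skip)` still has to take some value when an API response is deserialized. A generic sketch of that serde pattern — `CommunityLike`, `placeholder_inbox_url`, and the JSON input are hypothetical, and `serde_json` is assumed available:

```rust
use serde::{Deserialize, Serialize};

fn placeholder_inbox_url() -> String {
    "http://example.com".to_string()
}

#[derive(Debug, Serialize, Deserialize)]
struct CommunityLike {
    name: String,
    // Never sent over the API, but deserialization still succeeds because
    // the `default` function fills the field in.
    #[serde(skip, default = "placeholder_inbox_url")]
    inbox_url: String,
}

fn main() {
    let parsed: CommunityLike = serde_json::from_str(r#"{ "name": "rust" }"#).unwrap();
    assert_eq!(parsed.inbox_url, "http://example.com");
}
```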

View File

@ -1,6 +1,6 @@
use crate::newtypes::LocalUserId;
#[cfg(feature = "full")]
use crate::schema::password_reset_request;
use crate::{newtypes::LocalUserId, sensitive::SensitiveString};
use chrono::{DateTime, Utc};
#[derive(PartialEq, Eq, Debug)]
@ -9,7 +9,7 @@ use chrono::{DateTime, Utc};
#[cfg_attr(feature = "full", diesel(check_for_backend(diesel::pg::Pg)))]
pub struct PasswordResetRequest {
pub id: i32,
pub token: String,
pub token: SensitiveString,
pub published: DateTime<Utc>,
pub local_user_id: LocalUserId,
}
@ -18,5 +18,5 @@ pub struct PasswordResetRequest {
#[cfg_attr(feature = "full", diesel(table_name = password_reset_request))]
pub struct PasswordResetRequestForm {
pub local_user_id: LocalUserId,
pub token: String,
pub token: SensitiveString,
}

View File

@ -2,6 +2,7 @@
use crate::schema::{person, person_follower};
use crate::{
newtypes::{DbUrl, InstanceId, PersonId},
sensitive::SensitiveString,
source::placeholder_apub_url,
};
use chrono::{DateTime, Utc};
@ -36,7 +37,7 @@ pub struct Person {
/// Whether the person is local to our site.
pub local: bool,
#[serde(skip)]
pub private_key: Option<String>,
pub private_key: Option<SensitiveString>,
#[serde(skip)]
pub public_key: String,
#[serde(skip)]

Some files were not shown because too many files have changed in this diff.