It turns out that the only thing we use the __dict__ of LoggingContext for is
`request`, and given we create lots of LoggingContexts and then copy them every
time we do a db transaction or log line, using the __dict__ seems a bit
redundant. Let's try to optimise things by making the request attribute
explicit.
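A minimal sketch of the idea, assuming the only interesting per-context state is
`request` (the real class carries more fields):

    class LoggingContext(object):
        # Using __slots__ means no per-instance __dict__ gets allocated, which
        # matters when contexts are created and copied this frequently.  Only
        # `request` is shown here; the real class tracks more state.
        __slots__ = ["request"]

        def __init__(self, request=None):
            self.request = request

        def copy_to(self, record):
            # Copy the explicit attribute instead of iterating over __dict__.
            record.request = self.request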
ObservableDeferred expects its callbacks to be called without any
logcontexts, whereas it turns out we were calling them with the logcontext of
the request which initiated the persistence loop.
It seems wrong that we are attributing work done in the persistence loop to the
request that happened to initiate it, so let's solve this by dropping the
logcontext for it.
(I'm not sure this actually causes any real problems other than messages in the
debug log, but let's clean it up anyway)
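A hedged sketch of the fix (the helper name is illustrative; it assumes
synapse.util.logcontext.PreserveLoggingContext resets to the sentinel context):

    from synapse.util.logcontext import PreserveLoggingContext

    def _resolve(deferred, result):
        # `deferred` is the deferred wrapped by the ObservableDeferred.
        # Drop back to the sentinel logcontext before firing it, so that the
        # observers' callbacks run without a logcontext and the persistence
        # work is not attributed to whichever request kicked it off.
        with PreserveLoggingContext():
            deferred.callback(result)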
To keep the number of duplicate foo:count metrics from increasing without
bound, it's time for a rearrangement (a sketch of the consolidation follows
the rename list below).
The following are all deprecated, and replaced with synapse_util_metrics_block_count:
synapse_util_metrics_block_timer:count
synapse_util_metrics_block_ru_utime:count
synapse_util_metrics_block_ru_stime:count
synapse_util_metrics_block_db_txn_count:count
synapse_util_metrics_block_db_txn_duration:count
The following are all deprecated, and replaced with synapse_http_server_response_count:
synapse_http_server_requests
synapse_http_server_response_time:count
synapse_http_server_response_ru_utime:count
synapse_http_server_response_ru_stime:count
synapse_http_server_response_db_txn_count:count
synapse_http_server_response_db_txn_duration:count
The following are renamed (the old metrics are kept for now, but deprecated):
synapse_util_metrics_block_timer:total ->
synapse_util_metrics_block_time_seconds
synapse_util_metrics_block_ru_utime:total ->
synapse_util_metrics_block_ru_utime_seconds
synapse_util_metrics_block_ru_stime:total ->
synapse_util_metrics_block_ru_stime_seconds
synapse_util_metrics_block_db_txn_count:total ->
synapse_util_metrics_block_db_txn_count
synapse_util_metrics_block_db_txn_duration:total ->
synapse_util_metrics_block_db_txn_duration_seconds
synapse_http_server_response_time:total ->
synapse_http_server_response_time_seconds
synapse_http_server_response_ru_utime:total ->
synapse_http_server_response_ru_utime_seconds
synapse_http_server_response_ru_stime:total ->
synapse_http_server_response_ru_stime_seconds
synapse_http_server_response_db_txn_count:total ->
synapse_http_server_response_db_txn_count
synapse_http_server_response_db_txn_duration:total ->
synapse_http_server_response_db_txn_duration_seconds
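Roughly, the duplicated *:count series collapse into a single counter with a
label, along these lines (prometheus_client is used here purely to illustrate
the shape; Synapse's own metrics module differs in detail, and recent
prometheus_client versions expose counters with a _total suffix):

    from prometheus_client import Counter

    # One count per measured block replaces the duplicated *:count series;
    # the per-block breakdown is still available via the label.
    block_count = Counter(
        "synapse_util_metrics_block_count",
        "Number of times each measured block has been entered",
        ["block_name"],
    )

    # Durations move to explicitly unit-suffixed *_seconds metrics.
    block_time_seconds = Counter(
        "synapse_util_metrics_block_time_seconds",
        "Cumulative wall-clock time spent in each block",
        ["block_name"],
    )

    def report_block(name, duration_seconds):
        block_count.labels(name).inc()
        block_time_seconds.labels(name).inc(duration_seconds)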
Prometheus handles all metrics as floats, and sometimes we store non-integer
values in them (notably, durations in seconds), so let's render them as floats
too.
(Note that the standard client libraries also treat Counters as floats.)
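A sketch of what the rendering change amounts to (the renderer shown is
illustrative, not Synapse's actual exposition code):

    def render_sample(name, labels, value):
        label_str = ",".join('%s="%s"' % kv for kv in sorted(labels.items()))
        if label_str:
            name = "%s{%s}" % (name, label_str)
        # Render the value as a float: durations in seconds are not integers,
        # and Prometheus treats every sample as a float anyway.
        return "%s %s" % (name, float(value))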
This was missing its federation client API; since there is no other API, it
might as well reuse the bulk one and unwrap the result.
Signed-off-by: Michael Telatynski <7t3chguy@gmail.com>
Turns out that there is a valid use case for retrieving an event by id (notably
after having received a push), but event ids should be scoped to their room, so
/event/{id} is wrong.
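For illustration, the room-scoped path looks like this (a pure-regex sketch
with made-up ids, not the actual servlet wiring):

    import re

    # Old (deprecated): GET /_matrix/client/r0/event/{id}
    # New:              GET /_matrix/client/r0/rooms/{roomId}/event/{eventId}
    ROOM_EVENT_RE = re.compile(
        r"^/_matrix/client/r0/rooms/(?P<room_id>[^/]*)/event/(?P<event_id>[^/]*)$"
    )

    m = ROOM_EVENT_RE.match(
        "/_matrix/client/r0/rooms/!abc:example.com/event/$1234:example.com"
    )
    assert m.group("room_id") == "!abc:example.com"
    assert m.group("event_id") == "$1234:example.com"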
... because these only really exist to confuse people nowadays.
Also bring log config more into line with the generated log config, by making `level_for_storage`
apply to the `synapse.storage.SQL` logger rather than `synapse.storage`.
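In Python dictConfig terms the change amounts to something like this (a sketch;
the real settings come from the YAML log config):

    import logging.config

    logging.config.dictConfig({
        "version": 1,
        # Incremental: only adjust logger levels, leave handlers alone.
        "incremental": True,
        "loggers": {
            # level_for_storage now targets the logger that emits the
            # per-query lines, rather than everything under synapse.storage.
            "synapse.storage.SQL": {"level": "INFO"},
        },
    })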
Add listen_tcp and listen_ssl, which wrap Twisted's reactor.listenTCP and
reactor.listenSSL so that we can listen on multiple addresses.
Signed-off-by: Silke Hofstra <silke@slxh.eu>
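Roughly, the helpers just loop the Twisted calls over each configured address
(a sketch; argument names are illustrative, the positional order follows
Twisted's signatures):

    from twisted.internet import reactor

    def listen_tcp(bind_addresses, port, factory, backlog=50):
        """Create TCP listeners on each of the given addresses."""
        for address in bind_addresses:
            reactor.listenTCP(port, factory, backlog, address)

    def listen_ssl(bind_addresses, port, factory, context_factory, backlog=50):
        """Create SSL/TLS listeners on each of the given addresses."""
        for address in bind_addresses:
            reactor.listenSSL(port, factory, context_factory, backlog, address)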
Binding on 0.0.0.0 when :: is specified in the bind_addresses is now allowed.
This causes a warning explaining the behaviour.
Configuration changed to match.
See #2232
Signed-off-by: Silke Hofstra <silke@slxh.eu>
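The special-casing is roughly this (a sketch; the function name is made up):

    import logging

    logger = logging.getLogger(__name__)

    def warn_on_redundant_wildcards(bind_addresses):
        # Listening on '::' normally accepts IPv4 connections too, so also
        # listing '0.0.0.0' is usually redundant - it is still allowed, we
        # just explain what is going on instead of refusing to bind.
        if "::" in bind_addresses and "0.0.0.0" in bind_addresses:
            logger.warning(
                "Both '::' and '0.0.0.0' are in bind_addresses; listening "
                "on '::' usually covers IPv4 as well."
            )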
Most deployments are on Linux (or Mac OS), so this would actually bind
on both IPv4 and IPv6.
Resolves #1886.
Signed-off-by: Willem Mulder <willemmaster@hotmail.com>
Initial commit; this doesn't work yet - the LIKE filtering seems too aggressive.
It also needs _do_initial_spam to be aware of prepopulating the whole user_directory_search table with all users...
...and it needs a handle_user_signup() or something to be added so that new signups get incrementally added to the table too.
Committing it here as a WIP
Make sure that we delete devices whenever a user is logged out due to any of
the following situations (see the sketch after the list):
* /logout
* /logout_all
* change password
* deactivate account (by the user or by an admin)
* invalidate access token from a dynamic module
Fixes #2672.
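A hedged sketch of the shared cleanup (the device handler method name is from
memory and should be treated as an assumption):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def delete_devices_for_invalidated_tokens(device_handler, user_id, tokens):
        # Whatever invalidated the access tokens (/logout, /logout_all, a
        # password change, deactivation, or a module), any devices tied to
        # those tokens should be deleted as well.
        device_ids = set(t.device_id for t in tokens if t.device_id)
        if device_ids:
            yield device_handler.delete_devices(user_id, list(device_ids))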
Non-functional refactoring to move set_password. This means that we'll be able
to properly deactivate devices and access tokens without introducing a
dependency loop.
Non-functional refactoring to move deactivate_account. This means that we'll be
able to properly deactivate devices and access tokens without introducing a
dependency loop.
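Schematically, the moved code ends up somewhere that can reach the device
handler without creating a cycle (class and method names here are illustrative,
not necessarily the real ones):

    from twisted.internet import defer

    class SetPasswordHandler(object):
        """Lives outside AuthHandler so that it can use the device handler
        without AuthHandler and DeviceHandler importing each other."""

        def __init__(self, hs):
            self._auth_handler = hs.get_auth_handler()
            self._device_handler = hs.get_device_handler()
            self.store = hs.get_datastore()

        @defer.inlineCallbacks
        def set_password(self, user_id, newpassword, requester=None):
            password_hash = yield self._auth_handler.hash(newpassword)
            yield self.store.user_set_password_hash(user_id, password_hash)
            # With the dependency loop gone, tokens and devices can be
            # cleaned up right here.
            yield self._auth_handler.delete_access_tokens_for_user(user_id)
            yield self._device_handler.delete_all_devices_for_user(user_id)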
matrix-dev has an event (`$/6ANj/9QWQyd71N6DpRQPf+SDUu11+HVMeKSpMzBCwM:zemos.net`)
which has no `hashes` member.
Check for missing `hashes` element in events.
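The check amounts to something like this (a sketch, treating the event as
dict-like):

    from synapse.api.errors import SynapseError

    def check_content_hash(event, algorithm="sha256"):
        # Some events in the wild (like the one above) have no `hashes`
        # member at all; treat that like any other missing hash instead of
        # hitting a KeyError deep in the signature checks.
        hashes = event.get("hashes", {})
        if not isinstance(hashes, dict):
            raise SynapseError(400, "Malformed 'hashes': %r" % (hashes,))
        if algorithm not in hashes:
            raise SynapseError(
                400, "Algorithm %s not in hashes %s" % (algorithm, list(hashes))
            )
        return hashes[algorithm]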
These processes take a long time compared to the request, so there is lots of
"Entering|Restoring dead context" in the logs. Let's try to shut it up a bit.