Compare commits

...

51 Commits

Author SHA1 Message Date
Travis Ralston
7e09a4d233
Merge e3430769b29a0767431b8daf488aa0cae3d6a1fb into 6ddbb0361222da276cf0f9b76dd2dff92862327e 2025-07-02 18:04:26 +02:00
V02460
6ddbb03612
Raise poetry-core version cap to 2.1.3 (#18575)
Request to raise the defensive version cap for poetry-core from 1.9.1 to
2.1.3.

My understanding is that the major version bump of poetry signals the
transition to standardized pyproject.toml metadata, but does not affect
backwards compatibility.

This is a subset of the changes in #18432

Fixes #18200

### Pull Request Checklist

* [x] Pull request is based on the develop branch
* [x] Pull request includes a [changelog file](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#changelog). The entry should:
  - Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
  - Use markdown where necessary, mostly for `code blocks`.
  - End with either a period (.) or an exclamation mark (!).
  - Start with a capital letter.
  - Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
* [x] [Code style](https://element-hq.github.io/synapse/latest/code_style.html) is correct (run the [linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
2025-07-02 15:57:30 +00:00
Erik Johnston
cc8da2c5ed
Log the room ID we're purging state for (#18625)
So we can see what we're deleting.
2025-07-02 15:02:12 +01:00
reivilibre
c17fd947f3
Fix documentation of the Delete Room Admin API's status field. (#18519)
Fixes: #18502

---------

Signed-off-by: Olivier 'reivilibre' <oliverw@matrix.org>
2025-07-01 17:55:38 +01:00
Quentin Gliech
24bcdb3f3c
Merge branch 'master' into develop 2025-07-01 17:37:49 +02:00
Quentin Gliech
e3ed93adf3
Add a note in the changelog about the manylinux wheels 2025-07-01 16:01:28 +02:00
Quentin Gliech
214ac2f005
1.133.0 2025-07-01 15:13:42 +02:00
Quentin Gliech
c471e84697
Bump cibuildwheel to 3.0.0 to fix the building of wheels (#18615)
Fixes https://github.com/element-hq/synapse/issues/18614

This upgrades CIBW to 3.0.0, which now builds using the manylinux_2_28
image, as the previous image is EOL and no longer supported by some of our
dependencies.

This also updates the job to use the `ubuntu-24.04` base image instead
of `ubuntu-22.04`.
2025-07-01 14:54:33 +02:00
Andrew Morgan
291880012f
Stop sending or processing the origin field in PDUs (#18418)
Co-authored-by: Quentin Gliech <quenting@element.io>
Co-authored-by: Eric Eastwood <erice@element.io>
2025-07-01 12:04:23 +01:00
Krishan
a2bee2f255
Add via param to hierarchy endpoint (#18070)
### Pull Request Checklist

Implementation of
[MSC4235](https://github.com/matrix-org/matrix-spec-proposals/pull/4235)
as per suggestion in [pull request
17750](https://github.com/element-hq/synapse/pull/17750#issuecomment-2411248598).

* [x] Pull request is based on the develop branch
* [x] Pull request includes a [changelog file](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#changelog). The entry should:
  - Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
  - Use markdown where necessary, mostly for `code blocks`.
  - End with either a period (.) or an exclamation mark (!).
  - Start with a capital letter.
  - Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
* [x] [Code style](https://element-hq.github.io/synapse/latest/code_style.html) is correct (run the [linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))

---------

Co-authored-by: Quentin Gliech <quenting@element.io>
2025-06-30 12:42:14 +00:00
Travis Ralston
e3430769b2 kick ci 2025-06-20 13:17:02 -06:00
turt2live
1bee9501d8 Attempt to fix linting 2025-06-20 19:15:56 +00:00
Travis Ralston
91595cf780 Fix list 2025-06-20 13:13:27 -06:00
Travis Ralston
a1a05b3c3a Fix condition 2025-06-20 13:08:37 -06:00
turt2live
dac68a3bdd Attempt to fix linting 2025-06-20 19:07:03 +00:00
Travis Ralston
bb9922f50f Add a client config flag too 2025-06-20 13:03:21 -06:00
Travis Ralston
0408d5ec36 More docs 2025-06-20 12:45:41 -06:00
Travis Ralston
2fb8ce71a6 changelog 2025-06-20 12:43:19 -06:00
Travis Ralston
6e79df0b9e docs 2025-06-20 12:40:55 -06:00
Travis Ralston
f7c089c93f Fix rust 2025-06-20 12:38:20 -06:00
Travis Ralston
35993c6110 Set unsigned flag 2025-06-20 12:12:31 -06:00
Travis Ralston
c3f2b94c42 Track internal metadata for policy server bool 2025-06-20 12:06:51 -06:00
Travis Ralston
881d521532 Merge branch 'travis/admin-soft-fail' into travis/flag-ps-events 2025-06-20 11:59:39 -06:00
Travis Ralston
61ba9018b1 one more time 2025-06-19 17:37:07 -06:00
Travis Ralston
5c4ba56e2b Maybe there's just a bunch of checks behind the scenes 2025-06-19 17:06:31 -06:00
Travis Ralston
bcf3a8925d ok, so pop breaks things 2025-06-19 16:54:52 -06:00
Travis Ralston
7e8e66e6d6 Flip if order to reduce db transactions in the general case 2025-06-19 16:22:53 -06:00
Travis Ralston
37a38f8a53 fix internal_metadata? 2025-06-19 16:22:07 -06:00
Travis Ralston
25ef8dd1e6 await 2025-06-19 16:21:18 -06:00
Travis Ralston
b38c208d5d More txn count bumps 2025-06-19 16:19:52 -06:00
Travis Ralston
280094382e Bump db txn count in tests 2025-06-19 16:06:45 -06:00
Travis Ralston
1c7e7f108a Create internal metadata properly in tests 2025-06-19 16:04:51 -06:00
Travis Ralston
88b4723ca7 kick ci 2025-06-19 15:43:52 -06:00
turt2live
b2e4a63439 Attempt to fix linting 2025-06-19 21:43:30 +00:00
Travis Ralston
f967bbb521 Appease linter 2025-06-19 15:41:35 -06:00
Travis Ralston
7344990e22 actually render new docs 2025-06-19 15:39:44 -06:00
Travis Ralston
1262b10b4c kick ci 2025-06-19 15:31:20 -06:00
turt2live
8908312875 Attempt to fix linting 2025-06-19 21:31:05 +00:00
Travis Ralston
99b9ee27e9 oops 2025-06-19 15:27:15 -06:00
Travis Ralston
043bd86d7d I guess the CI doesn't want us to do that 2025-06-19 15:26:29 -06:00
Travis Ralston
38a8937b27 kick ci 2025-06-19 15:23:55 -06:00
turt2live
f24386ad2c Attempt to fix linting 2025-06-19 21:21:22 +00:00
Travis Ralston
24c809f27b Add some untested tests 2025-06-19 15:17:11 -06:00
Travis Ralston
efa5ad9fa3 Reset aggregations counts 2025-06-19 15:00:39 -06:00
Travis Ralston
a12af4500f Switch to a general concept of "CS API extensions" on a per-user basis 2025-06-19 14:59:58 -06:00
Travis Ralston
8e823be4cb Merge branch 'develop' into travis/admin-soft-fail 2025-06-19 14:12:18 -06:00
Andrew Morgan
b453b1a7dd Bump db txn expected count in relations tests
As we're now performing another db txn to check if the user is an admin.
2025-03-14 09:58:41 +00:00
Travis Ralston
331bc7c0fd Empty commit to fix CI 2025-03-13 15:08:57 -06:00
turt2live
8f2fa30fcb Attempt to fix linting 2025-03-13 21:07:15 +00:00
Travis Ralston
a855b55c6b changelog 2025-03-13 15:05:19 -06:00
Travis Ralston
d1c73e71c7 Allow admins to see soft failed events 2025-03-13 15:02:53 -06:00
39 changed files with 441 additions and 55 deletions

@@ -111,7 +111,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [ubuntu-22.04, macos-13]
+        os: [ubuntu-24.04, macos-13]
         arch: [x86_64, aarch64]
         # is_pr is a flag used to exclude certain jobs from the matrix on PRs.
         # It is not read by the rest of the workflow.
@@ -139,7 +139,7 @@ jobs:
           python-version: "3.x"
       - name: Install cibuildwheel
-        run: python -m pip install cibuildwheel==2.23.0
+        run: python -m pip install cibuildwheel==3.0.0
       - name: Set up QEMU to emulate aarch64
         if: matrix.arch == 'aarch64'

@@ -1,3 +1,21 @@
+# Synapse 1.133.0 (2025-07-01)
+
+Pre-built wheels are now built using the [manylinux_2_28](https://github.com/pypa/manylinux#manylinux_2_28-almalinux-8-based) base, which is expected to be compatible with distros using glibc 2.28 or later, including:
+
+- Debian 10+
+- Ubuntu 18.10+
+- Fedora 29+
+- CentOS/RHEL 8+
+
+Previously, wheels were built using the [manylinux2014](https://github.com/pypa/manylinux#manylinux2014-centos-7-based-glibc-217) base, which was expected to be compatible with distros using glibc 2.17 or later.
+
+### Bugfixes
+
+- Bump `cibuildwheel` to 3.0.0 to fix the `manylinux` wheel builds. ([\#18615](https://github.com/element-hq/synapse/issues/18615))
+
 # Synapse 1.133.0rc1 (2025-06-24)

 ### Features
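For readers unsure whether the new manylinux_2_28 wheels will run on their host, a rough local glibc check can be sketched. This is an illustrative helper, not part of Synapse; the `wheels_compatible_with_host` name is hypothetical:

```python
import platform
from typing import Optional, Tuple


def wheels_compatible_with_host(min_glibc: Tuple[int, int] = (2, 28)) -> Optional[bool]:
    """Best-effort check whether manylinux_2_28-style wheels can run here.

    Returns None when the host is not glibc-based (e.g. musl/Alpine) or the
    version cannot be determined; in that case, consult your distro docs.
    """
    libc, version = platform.libc_ver()
    if libc != "glibc" or not version:
        return None
    # glibc versions look like "2.31"; compare the (major, minor) tuple.
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= min_glibc
```

On a Debian 10+ or Ubuntu 18.10+ host this should return `True`, matching the distro list above.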

@@ -0,0 +1 @@
Support for [MSC4235](https://github.com/matrix-org/matrix-spec-proposals/pull/4235): via query param for hierarchy endpoint. Contributed by Krishan (@kfiven).

@@ -0,0 +1 @@
If enabled by the user, server admins will see [soft failed](https://spec.matrix.org/v1.13/server-server-api/#soft-failure) events over the Client-Server API.

@@ -0,0 +1 @@
Stop adding the "origin" field to newly-created events (PDUs).

changelog.d/18519.doc (new file)

@@ -0,0 +1 @@
Fix documentation of the Delete Room Admin API's status field.

changelog.d/18575.misc (new file)

@@ -0,0 +1 @@
Raise poetry-core version cap to 2.1.3.

@@ -0,0 +1 @@
When admins enable themselves to see soft-failed events, they will also see if the cause is due to the policy server flagging them as spam via `unsigned`.

changelog.d/18625.misc (new file)

@@ -0,0 +1 @@
Log the room ID we're purging state for.

@@ -45,6 +45,10 @@ def make_graph(pdus: List[dict], filename_prefix: str) -> None:
     colors = {"red", "green", "blue", "yellow", "purple"}

     for pdu in pdus:
+        # TODO: The "origin" field has since been removed from events generated
+        # by Synapse. We should consider removing it here as well but since this
+        # is part of `contrib/`, it is left for the community to revise and ensure
+        # things still work correctly.
         origins.add(pdu.get("origin"))

     color_map = {color: color for color in colors if color in origins}

debian/changelog (vendored)

@@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.133.0) stable; urgency=medium
+
+  * New Synapse release 1.133.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 01 Jul 2025 13:13:24 +0000
+
 matrix-synapse-py3 (1.133.0~rc1) stable; urgency=medium

  * New Synapse release 1.133.0rc1.

@@ -74,6 +74,7 @@
 - [Users](admin_api/user_admin_api.md)
 - [Server Version](admin_api/version_api.md)
 - [Federation](usage/administration/admin_api/federation.md)
+- [Client-Server API Extensions](admin_api/client_server_api_extensions.md)
 - [Manhole](manhole.md)
 - [Monitoring](metrics-howto.md)
 - [Reporting Homeserver Usage Statistics](usage/administration/monitoring/reporting_homeserver_usage_statistics.md)

@@ -0,0 +1,51 @@
# Client-Server API Extensions

Server administrators can set special account data to change how the Client-Server API behaves for
their clients. Setting the account data, or having it already set, as a non-admin has no effect.

All configuration options can be set through the `io.element.synapse.admin_client_config` global
account data on the admin's user account.

Example:

```
PUT /_matrix/client/v3/user/{adminUserId}/account_data/io.element.synapse.admin_client_config
{
  "return_soft_failed_events": true
}
```

## See soft failed events

Learn more about soft failure from [the spec](https://spec.matrix.org/v1.14/server-server-api/#soft-failure).

To receive soft failed events in APIs like `/sync` and `/messages`, set `return_soft_failed_events`
to `true` in the admin client config. When `false`, the normal behaviour of these endpoints is to
exclude soft failed events.

**Note**: If the policy server flagged the event as spam and that caused soft failure, that will be indicated
in the event's `unsigned` content like so:

```json
{
  "type": "m.room.message",
  "other": "event_fields_go_here",
  "unsigned": {
    "io.element.synapse.soft_failed": true,
    "io.element.synapse.policy_server_spammy": true
  }
}
```

Default: `false`

## See events marked spammy by policy servers

Learn more about policy servers from [MSC4284](https://github.com/matrix-org/matrix-spec-proposals/pull/4284).

Similar to `return_soft_failed_events`, clients logged in with admin accounts can see events which were
flagged by the policy server as spammy (and thus soft failed) by setting `return_policy_server_spammy_events`
to `true`. If `return_soft_failed_events` is `true`, then `return_policy_server_spammy_events` is implied to be
`true`. When `false`, the normal behaviour of Client-Server API endpoints is retained (unless `return_soft_failed_events`
is `true`, per above).

Default: `false`
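The interaction between these two options can be sketched as a tiny standalone class. This is illustrative only, assuming the account-data shape shown above; `AdminClientConfigSketch` is a hypothetical name, not Synapse's implementation:

```python
from typing import Mapping, Optional


class AdminClientConfigSketch:
    """Minimal sketch of how the two admin-only flags interact."""

    def __init__(self, account_data: Optional[Mapping]) -> None:
        data = account_data or {}
        self.return_soft_failed_events: bool = bool(
            data.get("return_soft_failed_events", False)
        )
        # return_soft_failed_events implies return_policy_server_spammy_events,
        # since policy-server-spammy events are a subset of soft failed events.
        self.return_policy_server_spammy_events: bool = bool(
            data.get("return_policy_server_spammy_events", False)
            or self.return_soft_failed_events
        )
```

For example, setting only `return_soft_failed_events` to `true` yields both flags enabled, matching the "implied `true`" rule above.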

@@ -117,7 +117,6 @@ It returns a JSON body like the following:
     "hashes": {
       "sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
     },
-    "origin": "matrix.org",
     "origin_server_ts": 1592291711430,
     "prev_events": [
       "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"

@@ -806,7 +806,7 @@ A response body like the following is returned:
     }, {
         "delete_id": "delete_id2",
         "room_id": "!roomid:example.com",
-        "status": "purging",
+        "status": "active",
         "shutdown_room": {
             "kicked_users": [
                 "@foobar:example.com"
@@ -843,7 +843,7 @@ A response body like the following is returned:
 ```json
 {
-    "status": "purging",
+    "status": "active",
     "delete_id": "bHkCNQpHqOaFhPtK",
     "room_id": "!roomid:example.com",
     "shutdown_room": {
@@ -876,8 +876,8 @@ The following fields are returned in the JSON response body:
 - `delete_id` - The ID for this purge
 - `room_id` - The ID of the room being deleted
 - `status` - The status will be one of:
-  - `shutting_down` - The process is removing users from the room.
-  - `purging` - The process is purging the room and event data from database.
+  - `scheduled` - The deletion is waiting to be started
+  - `active` - The process is purging the room and event data from database.
   - `complete` - The process has completed successfully.
   - `failed` - The process is aborted, an error has occurred.
 - `error` - A string that shows an error message if `status` is `failed`.

@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"
 [tool.poetry]
 name = "matrix-synapse"
-version = "1.133.0rc1"
+version = "1.133.0"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
@@ -374,7 +374,7 @@ tomli = ">=1.2.3"
 # runtime errors caused by build system changes.
 # We are happy to raise these upper bounds upon request,
 # provided we check that it's safe to do so (i.e. that CI passes).
-requires = ["poetry-core>=1.1.0,<=1.9.1", "setuptools_rust>=1.3,<=1.10.2"]
+requires = ["poetry-core>=1.1.0,<=2.1.3", "setuptools_rust>=1.3,<=1.10.2"]
 build-backend = "poetry.core.masonry.api"

@@ -54,6 +54,7 @@ enum EventInternalMetadataData {
     RecheckRedaction(bool),
     SoftFailed(bool),
     ProactivelySend(bool),
+    PolicyServerSpammy(bool),
     Redacted(bool),
     TxnId(Box<str>),
     TokenId(i64),
@@ -96,6 +97,13 @@ impl EventInternalMetadataData {
                 .to_owned()
                 .into_any(),
             ),
+            EventInternalMetadataData::PolicyServerSpammy(o) => (
+                pyo3::intern!(py, "policy_server_spammy"),
+                o.into_pyobject(py)
+                    .unwrap_infallible()
+                    .to_owned()
+                    .into_any(),
+            ),
             EventInternalMetadataData::Redacted(o) => (
                 pyo3::intern!(py, "redacted"),
                 o.into_pyobject(py)
@@ -155,6 +163,11 @@ impl EventInternalMetadataData {
                     .extract()
                     .with_context(|| format!("'{key_str}' has invalid type"))?,
             ),
+            "policy_server_spammy" => EventInternalMetadataData::PolicyServerSpammy(
+                value
+                    .extract()
+                    .with_context(|| format!("'{key_str}' has invalid type"))?,
+            ),
             "redacted" => EventInternalMetadataData::Redacted(
                 value
                     .extract()
@@ -427,6 +440,17 @@ impl EventInternalMetadata {
         set_property!(self, ProactivelySend, obj);
     }

+    #[getter]
+    fn get_policy_server_spammy(&self) -> PyResult<bool> {
+        Ok(get_property_opt!(self, PolicyServerSpammy)
+            .copied()
+            .unwrap_or(false))
+    }
+
+    #[setter]
+    fn set_policy_server_spammy(&mut self, obj: bool) {
+        set_property!(self, PolicyServerSpammy, obj);
+    }
+
     #[getter]
     fn get_redacted(&self) -> PyResult<bool> {
         let bool = get_property!(self, Redacted)?;

@@ -290,6 +290,9 @@ class AccountDataTypes:
     MSC4155_INVITE_PERMISSION_CONFIG: Final = (
         "org.matrix.msc4155.invite_permission_config"
     )
+    # Synapse-specific behaviour. See "Client-Server API Extensions" documentation
+    # in Admin API for more information.
+    SYNAPSE_ADMIN_CLIENT_CONFIG: Final = "io.element.synapse.admin_client_config"


 class HistoryVisibility:

@@ -555,6 +555,9 @@ class ApplicationServiceApi(SimpleHttpClient):
                         )
                         and service.is_interested_in_user(e.state_key)
                     ),
+                    # Appservices are considered 'trusted' by the admin and should have
+                    # applicable metadata on their events.
+                    include_admin_metadata=True,
                 ),
             )
             for e in events

@@ -561,6 +561,9 @@ class ExperimentalConfig(Config):
         # MSC4076: Add `disable_badge_count` to pusher configuration
         self.msc4076_enabled: bool = experimental.get("msc4076_enabled", False)

+        # MSC4235: Add `via` param to hierarchy endpoint
+        self.msc4235_enabled: bool = experimental.get("msc4235_enabled", False)
+
         # MSC4263: Preventing MXID enumeration via key queries
         self.msc4263_limit_key_queries_to_users_who_share_rooms = experimental.get(
             "msc4263_limit_key_queries_to_users_who_share_rooms",

@@ -208,7 +208,6 @@ class EventBase(metaclass=abc.ABCMeta):
     depth: DictProperty[int] = DictProperty("depth")
     content: DictProperty[JsonDict] = DictProperty("content")
     hashes: DictProperty[Dict[str, str]] = DictProperty("hashes")
-    origin: DictProperty[str] = DictProperty("origin")
     origin_server_ts: DictProperty[int] = DictProperty("origin_server_ts")
     room_id: DictProperty[str] = DictProperty("room_id")
     sender: DictProperty[str] = DictProperty("sender")

@@ -302,8 +302,8 @@ def create_local_event_from_event_dict(
     event_dict: JsonDict,
     internal_metadata_dict: Optional[JsonDict] = None,
 ) -> EventBase:
-    """Takes a fully formed event dict, ensuring that fields like `origin`
-    and `origin_server_ts` have correct values for a locally produced event,
+    """Takes a fully formed event dict, ensuring that fields like
+    `origin_server_ts` have correct values for a locally produced event,
     then signs and hashes it.
     """
@@ -319,7 +319,6 @@
     if format_version == EventFormatVersions.ROOM_V1_V2:
         event_dict["event_id"] = _create_event_id(clock, hostname)

-    event_dict["origin"] = hostname
     event_dict.setdefault("origin_server_ts", time_now)

     event_dict.setdefault("unsigned", {})

@@ -421,11 +421,31 @@ class SerializeEventConfig:
     # False, that state will be removed from the event before it is returned.
     # Otherwise, it will be kept.
     include_stripped_room_state: bool = False

+    # When True, sets unsigned flags to help clients identify events which
+    # only server admins can see through other configuration. For example,
+    # whether an event was soft failed by the server.
+    include_admin_metadata: bool = False
+
+    # Developer note: when adding properties, update make_config_for_admin() below.


 _DEFAULT_SERIALIZE_EVENT_CONFIG = SerializeEventConfig()


+def make_config_for_admin(existing: SerializeEventConfig) -> SerializeEventConfig:
+    # Developer note: when adding properties, update
+    # test_make_serialize_config_for_admin_retains_other_fields
+    return SerializeEventConfig(
+        # Set the options which are only available to server admins
+        include_admin_metadata=True,
+        # And copy the rest
+        as_client_event=existing.as_client_event,
+        event_format=existing.event_format,
+        requester=existing.requester,
+        only_event_fields=existing.only_event_fields,
+        include_stripped_room_state=existing.include_stripped_room_state,
+    )
+
+
 def serialize_event(
     e: Union[JsonDict, EventBase],
     time_now_ms: int,
@@ -528,6 +548,12 @@ def serialize_event(
         d["content"] = dict(d["content"])
         d["content"]["redacts"] = e.redacts

+    if config.include_admin_metadata:
+        if e.internal_metadata.is_soft_failed():
+            d["unsigned"]["io.element.synapse.soft_failed"] = True
+        if e.internal_metadata.policy_server_spammy:
+            d["unsigned"]["io.element.synapse.policy_server_spammy"] = True
+
     only_event_fields = config.only_event_fields
     if only_event_fields:
         if not isinstance(only_event_fields, list) or not all(
@@ -576,6 +602,15 @@ class EventClientSerializer:
         if not isinstance(event, EventBase):
             return event

+        # Force-enable server admin metadata because the only time an event with
+        # relevant metadata will be when the admin requested it via their admin
+        # client config account data. Also, it's "just" some `unsigned` flags, so
+        # shouldn't cause much in terms of problems to downstream consumers.
+        if config.requester is not None and await self._store.is_server_admin(
+            config.requester.user
+        ):
+            config = make_config_for_admin(config)
+
         serialized_event = serialize_event(event, time_now, config=config)

         new_unsigned = {}
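The admin-metadata step in `serialize_event` above boils down to annotating the serialized event's `unsigned` section. A minimal standalone sketch, assuming the serialized event is a plain dict; the `add_admin_metadata` helper name is hypothetical:

```python
from typing import Any, Dict

# Flag names mirror the ones used in the diff above.
SOFT_FAILED_FLAG = "io.element.synapse.soft_failed"
POLICY_SERVER_SPAMMY_FLAG = "io.element.synapse.policy_server_spammy"


def add_admin_metadata(
    serialized: Dict[str, Any], soft_failed: bool, policy_server_spammy: bool
) -> Dict[str, Any]:
    """Annotate a serialized event's `unsigned` section with admin-only flags.

    Flags are only ever added (set to True), never set to False, so clients
    without the flags see no difference.
    """
    unsigned = serialized.setdefault("unsigned", {})
    if soft_failed:
        unsigned[SOFT_FAILED_FLAG] = True
    if policy_server_spammy:
        unsigned[POLICY_SERVER_SPAMMY_FLAG] = True
    return serialized
```

Keeping the flags in `unsigned` means downstream consumers that ignore unknown `unsigned` keys are unaffected, which is the rationale the diff's comment gives for force-enabling the config for admins.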

@@ -67,7 +67,6 @@ class EventValidator:
             "auth_events",
             "content",
             "hashes",
-            "origin",
             "prev_events",
             "sender",
             "type",
@@ -77,13 +76,6 @@
             if k not in event:
                 raise SynapseError(400, "Event does not have key %s" % (k,))

-        # Check that the following keys have string values
-        event_strings = ["origin"]
-        for s in event_strings:
-            if not isinstance(getattr(event, s), str):
-                raise SynapseError(400, "'%s' not a string type" % (s,))
-
         # Depending on the room version, ensure the data is spec compliant JSON.
         if event.room_version.strict_canonicaljson:
             validate_canonicaljson(event.get_pdu_json())

@@ -174,6 +174,7 @@ class FederationBase:
                 "Event not allowed by policy server, soft-failing %s", pdu.event_id
             )
             pdu.internal_metadata.soft_failed = True
+            pdu.internal_metadata.policy_server_spammy = True
             # Note: we don't redact the event so admins can inspect the event after the
             # fact. Other processes may redact the event, but that won't be applied to
             # the database copy of the event until the server's config requires it.
@@ -322,8 +323,7 @@ def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventBase:
         SynapseError: if the pdu is missing required fields or is otherwise
             not a valid matrix event
     """
-    # we could probably enforce a bunch of other fields here (room_id, sender,
-    # origin, etc etc)
+    # we could probably enforce a bunch of other fields here (room_id, sender, etc.)
     assert_params_in_dict(pdu_json, ("type", "depth"))

     # Strip any unauthorized values from "unsigned" if they exist

@@ -1111,6 +1111,9 @@ class EventCreationHandler:
             policy_allowed = await self._policy_handler.is_event_allowed(event)
             if not policy_allowed:
+                # We shouldn't need to set the metadata because the raise should
+                # cause the request to be denied, but just in case:
+                event.internal_metadata.policy_server_spammy = True
                 logger.warning(
                     "Event not allowed by policy server, rejecting %s",
                     event.event_id,

View File

@ -111,7 +111,15 @@ class RoomSummaryHandler:
# If a user tries to fetch the same page multiple times in quick succession, # If a user tries to fetch the same page multiple times in quick succession,
# only process the first attempt and return its result to subsequent requests. # only process the first attempt and return its result to subsequent requests.
self._pagination_response_cache: ResponseCache[ self._pagination_response_cache: ResponseCache[
Tuple[str, str, bool, Optional[int], Optional[int], Optional[str]] Tuple[
str,
str,
bool,
Optional[int],
Optional[int],
Optional[str],
Optional[Tuple[str, ...]],
]
] = ResponseCache( ] = ResponseCache(
hs.get_clock(), hs.get_clock(),
"get_room_hierarchy", "get_room_hierarchy",
@ -126,6 +134,7 @@ class RoomSummaryHandler:
max_depth: Optional[int] = None, max_depth: Optional[int] = None,
limit: Optional[int] = None, limit: Optional[int] = None,
from_token: Optional[str] = None, from_token: Optional[str] = None,
remote_room_hosts: Optional[Tuple[str, ...]] = None,
) -> JsonDict: ) -> JsonDict:
""" """
Implementation of the room hierarchy C-S API. Implementation of the room hierarchy C-S API.
@ -143,6 +152,9 @@ class RoomSummaryHandler:
limit: An optional limit on the number of rooms to return per limit: An optional limit on the number of rooms to return per
page. Must be a positive integer. page. Must be a positive integer.
from_token: An optional pagination token. from_token: An optional pagination token.
remote_room_hosts: An optional list of remote homeserver server names. If defined,
            each host will be used to try and fetch the room hierarchy. Must be a tuple so
            that it can be hashed by the `RoomSummaryHandler._pagination_response_cache`.

        Returns:
            The JSON hierarchy dictionary.
@@ -162,6 +174,7 @@ class RoomSummaryHandler:
                max_depth,
                limit,
                from_token,
+               remote_room_hosts,
            ),
            self._get_room_hierarchy,
            requester.user.to_string(),
@@ -170,6 +183,7 @@ class RoomSummaryHandler:
            max_depth,
            limit,
            from_token,
+           remote_room_hosts,
        )

    async def _get_room_hierarchy(
@@ -180,6 +194,7 @@ class RoomSummaryHandler:
        max_depth: Optional[int] = None,
        limit: Optional[int] = None,
        from_token: Optional[str] = None,
+       remote_room_hosts: Optional[Tuple[str, ...]] = None,
    ) -> JsonDict:
        """See docstring for SpaceSummaryHandler.get_room_hierarchy."""
@@ -199,7 +214,7 @@ class RoomSummaryHandler:
        if not local_room:
            room_hierarchy = await self._summarize_remote_room_hierarchy(
-               _RoomQueueEntry(requested_room_id, ()),
+               _RoomQueueEntry(requested_room_id, remote_room_hosts or ()),
                False,
            )
            root_room_entry = room_hierarchy[0]
@@ -240,7 +255,7 @@ class RoomSummaryHandler:
            processed_rooms = set(pagination_session["processed_rooms"])
        else:
            # The queue of rooms to process, the next room is last on the stack.
-           room_queue = [_RoomQueueEntry(requested_room_id, ())]
+           room_queue = [_RoomQueueEntry(requested_room_id, remote_room_hosts or ())]

            # Rooms we have already processed.
            processed_rooms = set()
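The docstring above requires the host list to be a tuple so that `RoomSummaryHandler._pagination_response_cache` can hash it. A quick plain-Python illustration of that constraint (no Synapse imports; the room ID and hostnames are made up):

```python
# Cache keys must be hashable: a tuple of host names is, a list is not.
hosts_tuple = ("one.example.org", "two.example.org")
hosts_list = ["one.example.org", "two.example.org"]

cache = {}
cache[("!room:example.org", hosts_tuple)] = "hierarchy response"

try:
    cache[("!room:example.org", hosts_list)] = "hierarchy response"
except TypeError:
    print("lists are unhashable, so they cannot be part of a cache key")
```

This is why the servlet converts the parsed `via` parameters to a tuple before handing them to the handler.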


@@ -1538,6 +1538,7 @@ class RoomHierarchyRestServlet(RestServlet):
        super().__init__()
        self._auth = hs.get_auth()
        self._room_summary_handler = hs.get_room_summary_handler()
+       self.msc4235_enabled = hs.config.experimental.msc4235_enabled

    async def on_GET(
        self, request: SynapseRequest, room_id: str
@@ -1547,6 +1548,15 @@ class RoomHierarchyRestServlet(RestServlet):
        max_depth = parse_integer(request, "max_depth")
        limit = parse_integer(request, "limit")

+       # twisted.web.server.Request.args is incorrectly defined as Optional[Any]
+       remote_room_hosts = None
+       if self.msc4235_enabled:
+           args: Dict[bytes, List[bytes]] = request.args  # type: ignore
+           via_param = parse_strings_from_args(
+               args, "org.matrix.msc4235.via", required=False
+           )
+           remote_room_hosts = tuple(via_param or [])
+
        return 200, await self._room_summary_handler.get_room_hierarchy(
            requester,
            room_id,
@@ -1554,6 +1564,7 @@ class RoomHierarchyRestServlet(RestServlet):
            max_depth=max_depth,
            limit=limit,
            from_token=parse_string(request, "from"),
+           remote_room_hosts=remote_room_hosts,
        )
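The servlet reads a repeatable `org.matrix.msc4235.via` query parameter. A standard-library sketch of the expected query-string shape (the hostnames are invented; Synapse's actual parsing goes through `parse_strings_from_args` as in the diff above):

```python
from urllib.parse import parse_qs, urlencode

# Two via hosts plus an unrelated parameter, as a client might send them.
params = [
    ("org.matrix.msc4235.via", "first.example.org"),
    ("org.matrix.msc4235.via", "second.example.org"),
    ("limit", "10"),
]
query = urlencode(params)

# Repeated keys collect into a list; a missing key falls back to ().
parsed = parse_qs(query)
remote_room_hosts = tuple(parsed.get("org.matrix.msc4235.via", ()))
print(remote_room_hosts)  # ('first.example.org', 'second.example.org')
```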


@@ -0,0 +1,22 @@
+import logging
+from typing import Optional
+
+from synapse.types import JsonMapping
+
+logger = logging.getLogger(__name__)
+
+
+class AdminClientConfig:
+    """Class to track various Synapse-specific admin-only client-impacting config options."""
+
+    def __init__(self, account_data: Optional[JsonMapping]):
+        self.return_soft_failed_events: bool = False
+        self.return_policy_server_spammy_events: bool = False
+
+        if account_data:
+            self.return_soft_failed_events = account_data.get(
+                "return_soft_failed_events", False
+            )
+            self.return_policy_server_spammy_events = account_data.get(
+                "return_policy_server_spammy_events", False
+            )
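As a sanity check on the semantics of the new class, here is a minimal stand-in with the same flag names (`JsonMapping` replaced by a plain `Mapping` so the sketch needs no Synapse imports):

```python
from typing import Any, Mapping, Optional


class AdminClientConfigSketch:
    """Stand-in mirroring the new AdminClientConfig; flag names match the diff."""

    def __init__(self, account_data: Optional[Mapping[str, Any]]):
        self.return_soft_failed_events: bool = False
        self.return_policy_server_spammy_events: bool = False

        if account_data:
            self.return_soft_failed_events = account_data.get(
                "return_soft_failed_events", False
            )
            self.return_policy_server_spammy_events = account_data.get(
                "return_policy_server_spammy_events", False
            )


# No account data at all: both flags default to off.
assert not AdminClientConfigSketch(None).return_soft_failed_events

# Partially populated account data: missing keys stay False.
cfg = AdminClientConfigSketch({"return_soft_failed_events": True})
assert cfg.return_soft_failed_events
assert not cfg.return_policy_server_spammy_events
```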


@@ -34,6 +34,7 @@ from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage.database import LoggingTransaction
from synapse.storage.databases import Databases
from synapse.types.storage import _BackgroundUpdates
+from synapse.util.stringutils import shortstr

if TYPE_CHECKING:
    from synapse.server import HomeServer
@@ -167,6 +168,12 @@ class PurgeEventsStorageController:
                break

            (room_id, groups_to_sequences) = next_to_delete

+           logger.info(
+               "[purge] deleting state groups for room %s: %s",
+               room_id,
+               shortstr(groups_to_sequences.keys(), maxitems=10),
+           )
+
            made_progress = await self._delete_state_groups(
                room_id, groups_to_sequences
            )


@@ -37,6 +37,7 @@ from synapse.api.constants import AccountDataTypes
from synapse.api.errors import Codes, SynapseError
from synapse.replication.tcp.streams import AccountDataStream
from synapse.storage._base import db_to_json
+from synapse.storage.admin_client_config import AdminClientConfig
from synapse.storage.database import (
    DatabasePool,
    LoggingDatabaseConnection,
@@ -578,6 +579,21 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore):
        )
        return InviteRulesConfig(data)

+   async def get_admin_client_config_for_user(self, user_id: str) -> AdminClientConfig:
+       """
+       Get the admin client configuration for the specified user.
+
+       The admin client config contains Synapse-specific settings that clients running
+       server admin accounts can use. They have no effect on non-admin users.
+
+       Args:
+           user_id: The user ID to get config for.
+       """
+       data = await self.get_global_account_data_by_type_for_user(
+           user_id, AccountDataTypes.SYNAPSE_ADMIN_CLIENT_CONFIG
+       )
+       return AdminClientConfig(data)
+
    def process_replication_rows(
        self,
        stream_name: str,


@@ -33,6 +33,9 @@ class EventInternalMetadata:
    proactively_send: bool
    redacted: bool

+   policy_server_spammy: bool
+   """whether the policy server indicated that this event is spammy"""
+
    txn_id: str
    """The transaction ID, if it was set when the event was created."""
    token_id: int


@@ -225,7 +225,7 @@ KNOWN_KEYS = {
    "depth",
    "event_id",
    "hashes",
-   "origin",
+   "origin",  # old events were created with an origin field.
    "origin_server_ts",
    "prev_events",
    "room_id",


@@ -48,7 +48,13 @@ from synapse.logging.opentracing import trace
from synapse.storage.controllers import StorageControllers
from synapse.storage.databases.main import DataStore
from synapse.synapse_rust.events import event_visible_to_server
-from synapse.types import RetentionPolicy, StateMap, StrCollection, get_domain_from_id
+from synapse.types import (
+    RetentionPolicy,
+    StateMap,
+    StrCollection,
+    UserID,
+    get_domain_from_id,
+)
from synapse.types.state import StateFilter
from synapse.util import Clock
@@ -106,9 +112,30 @@ async def filter_events_for_client(
        of `user_id` at each event.
    """
    # Filter out events that have been soft failed so that we don't relay them
-   # to clients.
+   # to clients, unless they're a server admin and want that to happen.
    events_before_filtering = events
-   events = [e for e in events if not e.internal_metadata.is_soft_failed()]
+   client_config = await storage.main.get_admin_client_config_for_user(user_id)
+   if (
+       filter_send_to_client
+       and (
+           client_config.return_soft_failed_events
+           or client_config.return_policy_server_spammy_events
+       )
+       and await storage.main.is_server_admin(UserID.from_string(user_id))
+   ):
+       # `return_soft_failed_events` implies `return_policy_server_spammy_events`, so
+       # we want to check when they've asked for *just* `return_policy_server_spammy_events`
+       if not client_config.return_soft_failed_events:
+           events = [
+               e
+               for e in events
+               if not e.internal_metadata.is_soft_failed()
+               or e.internal_metadata.policy_server_spammy
+           ]
+       else:
+           events = events_before_filtering
+   else:
+       events = [e for e in events if not e.internal_metadata.is_soft_failed()]

    if len(events_before_filtering) != len(events):
        if filtered_event_logger.isEnabledFor(logging.DEBUG):
            filtered_event_logger.debug(
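The branch above encodes a small decision table. A self-contained sketch of just that rule, with a hypothetical `Ev` class standing in for the event's internal metadata (the real code additionally gates on `filter_send_to_client` and an async server-admin lookup, omitted here):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Ev:
    # Hypothetical stand-in for EventInternalMetadata flags.
    soft_failed: bool = False
    policy_server_spammy: bool = False


def filter_soft_failed(
    events: List[Ev],
    is_admin: bool,
    return_soft_failed: bool,
    return_spammy: bool,
) -> List[Ev]:
    # Mirrors the diff: admins who opted in can see soft-failed events, and
    # `return_soft_failed` implies `return_spammy`.
    if is_admin and (return_soft_failed or return_spammy):
        if return_soft_failed:
            return list(events)
        # Opted in to *just* the policy-server-spammy subset.
        return [e for e in events if not e.soft_failed or e.policy_server_spammy]
    return [e for e in events if not e.soft_failed]


evs = [Ev(), Ev(soft_failed=True), Ev(soft_failed=True, policy_server_spammy=True)]
assert len(filter_soft_failed(evs, False, True, True)) == 1  # non-admins never opt in
assert len(filter_soft_failed(evs, True, False, True)) == 2  # spammy subset only
assert len(filter_soft_failed(evs, True, True, False)) == 3  # everything
```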


@@ -48,7 +48,6 @@ class EventSigningTestCase(unittest.TestCase):
    def test_sign_minimal(self) -> None:
        event_dict = {
            "event_id": "$0:domain",
-           "origin": "domain",
            "origin_server_ts": 1000000,
            "signatures": {},
            "type": "X",
@@ -64,7 +63,7 @@ class EventSigningTestCase(unittest.TestCase):
        self.assertTrue(hasattr(event, "hashes"))
        self.assertIn("sha256", event.hashes)
        self.assertEqual(
-           event.hashes["sha256"], "6tJjLpXtggfke8UxFhAKg82QVkJzvKOVOOSjUDK4ZSI"
+           event.hashes["sha256"], "A6Nco6sqoy18PPfPDVdYvoowfc0PVBk9g9OiyT3ncRM"
        )

        self.assertTrue(hasattr(event, "signatures"))
@@ -72,15 +71,14 @@ class EventSigningTestCase(unittest.TestCase):
        self.assertIn(KEY_NAME, event.signatures["domain"])
        self.assertEqual(
            event.signatures[HOSTNAME][KEY_NAME],
-           "2Wptgo4CwmLo/Y8B8qinxApKaCkBG2fjTWB7AbP5Uy+"
-           "aIbygsSdLOFzvdDjww8zUVKCmI02eP9xtyJxc/cLiBA",
+           "PBc48yDVszWB9TRaB/+CZC1B+pDAC10F8zll006j+NN"
+           "fe4PEMWcVuLaG63LFTK9e4rwJE8iLZMPtCKhDTXhpAQ",
        )

    def test_sign_message(self) -> None:
        event_dict = {
            "content": {"body": "Here is the message content"},
            "event_id": "$0:domain",
-           "origin": "domain",
            "origin_server_ts": 1000000,
            "type": "m.room.message",
            "room_id": "!r:domain",
@@ -98,7 +96,7 @@ class EventSigningTestCase(unittest.TestCase):
        self.assertTrue(hasattr(event, "hashes"))
        self.assertIn("sha256", event.hashes)
        self.assertEqual(
-           event.hashes["sha256"], "onLKD1bGljeBWQhWZ1kaP9SorVmRQNdN5aM2JYU2n/g"
+           event.hashes["sha256"], "rDCeYBepPlI891h/RkI2/Lkf9bt7u0TxFku4tMs7WKk"
        )

        self.assertTrue(hasattr(event, "signatures"))
@@ -106,6 +104,6 @@ class EventSigningTestCase(unittest.TestCase):
        self.assertIn(KEY_NAME, event.signatures["domain"])
        self.assertEqual(
            event.signatures[HOSTNAME][KEY_NAME],
-           "Wm+VzmOUOz08Ds+0NTWb1d4CZrVsJSikkeRxh6aCcUw"
-           "u6pNC78FunoD7KNWzqFn241eYHYMGCA5McEiVPdhzBA",
+           "Ay4aj2b5oJ1k8INYZ9n3KnszCflM0emwcmQQ7vxpbdc"
+           "Sv9bkJxIZdWX1IJllcZLq89+D3sSabE+vqPtZs9akDw",
        )


@@ -34,11 +34,13 @@ from synapse.events.utils import (
    _split_field,
    clone_event,
    copy_and_fixup_power_levels_contents,
+   format_event_raw,
+   make_config_for_admin,
    maybe_upsert_event_field,
    prune_event,
    serialize_event,
)
-from synapse.types import JsonDict
+from synapse.types import JsonDict, create_requester
from synapse.util.frozenutils import freeze
@@ -49,7 +51,13 @@ def MockEvent(**kwargs: Any) -> EventBase:
        kwargs["type"] = "fake_type"
    if "content" not in kwargs:
        kwargs["content"] = {}
-   return make_event_from_dict(kwargs)
+
+   # Move internal metadata out so we can call make_event properly
+   internal_metadata = kwargs.get("internal_metadata")
+   if internal_metadata is not None:
+       kwargs.pop("internal_metadata")
+
+   return make_event_from_dict(kwargs, internal_metadata_dict=internal_metadata)

class TestMaybeUpsertEventField(stdlib_unittest.TestCase):
@@ -122,7 +130,7 @@ class PruneEventTestCase(stdlib_unittest.TestCase):
            "prev_events": "prev_events",
            "prev_state": "prev_state",
            "auth_events": "auth_events",
-           "origin": "domain",
+           "origin": "domain",  # historical top-level field that still exists on old events
            "origin_server_ts": 1234,
            "membership": "join",
            # Also include a key that should be removed.
@@ -139,7 +147,7 @@ class PruneEventTestCase(stdlib_unittest.TestCase):
            "prev_events": "prev_events",
            "prev_state": "prev_state",
            "auth_events": "auth_events",
-           "origin": "domain",
+           "origin": "domain",  # historical top-level field that still exists on old events
            "origin_server_ts": 1234,
            "membership": "join",
            "content": {},
@@ -148,13 +156,12 @@ class PruneEventTestCase(stdlib_unittest.TestCase):
            },
        )

-       # As of room versions we now redact the membership, prev_states, and origin keys.
+       # As of room versions we now redact the membership and prev_states keys.
        self.run_test(
            {
                "type": "A",
                "prev_state": "prev_state",
                "membership": "join",
-               "origin": "example.com",
            },
            {"type": "A", "content": {}, "signatures": {}, "unsigned": {}},
            room_version=RoomVersions.V11,
@@ -238,7 +245,6 @@ class PruneEventTestCase(stdlib_unittest.TestCase):
            {
                "type": "m.room.create",
                "content": {"not_a_real_key": True},
-               "origin": "some_homeserver",
                "nonsense_field": "some_random_garbage",
            },
            {
@@ -639,9 +645,18 @@ class CloneEventTestCase(stdlib_unittest.TestCase):

class SerializeEventTestCase(stdlib_unittest.TestCase):
-   def serialize(self, ev: EventBase, fields: Optional[List[str]]) -> JsonDict:
+   def serialize(
+       self,
+       ev: EventBase,
+       fields: Optional[List[str]],
+       include_admin_metadata: bool = False,
+   ) -> JsonDict:
        return serialize_event(
-           ev, 1479807801915, config=SerializeEventConfig(only_event_fields=fields)
+           ev,
+           1479807801915,
+           config=SerializeEventConfig(
+               only_event_fields=fields, include_admin_metadata=include_admin_metadata
+           ),
        )

    def test_event_fields_works_with_keys(self) -> None:
@@ -760,6 +775,78 @@ class SerializeEventTestCase(stdlib_unittest.TestCase):
            ["room_id", 4],  # type: ignore[list-item]
        )

+   def test_default_serialize_config_excludes_admin_metadata(self) -> None:
+       # We just really don't want this to be set to True accidentally
+       self.assertFalse(SerializeEventConfig().include_admin_metadata)
+
+   def test_event_flagged_for_admins(self) -> None:
+       # Default behaviour should be *not* to include it
+       self.assertEqual(
+           self.serialize(
+               MockEvent(
+                   type="foo",
+                   event_id="test",
+                   room_id="!foo:bar",
+                   content={"foo": "bar"},
+                   internal_metadata={"soft_failed": True},
+               ),
+               [],
+           ),
+           {
+               "type": "foo",
+               "event_id": "test",
+               "room_id": "!foo:bar",
+               "content": {"foo": "bar"},
+               "unsigned": {},
+           },
+       )
+
+       # When asked though, we should set it
+       self.assertEqual(
+           self.serialize(
+               MockEvent(
+                   type="foo",
+                   event_id="test",
+                   room_id="!foo:bar",
+                   content={"foo": "bar"},
+                   internal_metadata={"soft_failed": True},
+               ),
+               [],
+               True,
+           ),
+           {
+               "type": "foo",
+               "event_id": "test",
+               "room_id": "!foo:bar",
+               "content": {"foo": "bar"},
+               "unsigned": {"io.element.synapse.soft_failed": True},
+           },
+       )
+
+   def test_make_serialize_config_for_admin_retains_other_fields(self) -> None:
+       non_default_config = SerializeEventConfig(
+           include_admin_metadata=False,  # should be True in a moment
+           as_client_event=False,  # default True
+           event_format=format_event_raw,  # default format_event_for_client_v1
+           requester=create_requester("@example:example.org"),  # default None
+           only_event_fields=["foo"],  # default None
+           include_stripped_room_state=True,  # default False
+       )
+       admin_config = make_config_for_admin(non_default_config)
+
+       self.assertEqual(
+           admin_config.as_client_event, non_default_config.as_client_event
+       )
+       self.assertEqual(admin_config.event_format, non_default_config.event_format)
+       self.assertEqual(admin_config.requester, non_default_config.requester)
+       self.assertEqual(
+           admin_config.only_event_fields, non_default_config.only_event_fields
+       )
+       self.assertEqual(
+           admin_config.include_stripped_room_state,
+           non_default_config.include_stripped_room_state,
+       )
+       self.assertTrue(admin_config.include_admin_metadata)

class CopyPowerLevelsContentTestCase(stdlib_unittest.TestCase):
    def setUp(self) -> None:


@@ -535,7 +535,6 @@ class StripUnsignedFromEventsTestCase(unittest.TestCase):
            "depth": 1000,
            "origin_server_ts": 1,
            "type": "m.room.member",
-           "origin": "test.servx",
            "content": {"membership": "join"},
            "auth_events": [],
            "unsigned": {"malicious garbage": "hackz", "more warez": "more hackz"},
@@ -552,7 +551,6 @@ class StripUnsignedFromEventsTestCase(unittest.TestCase):
            "depth": 1000,
            "origin_server_ts": 1,
            "type": "m.room.member",
-           "origin": "test.servx",
            "auth_events": [],
            "content": {"membership": "join"},
            "unsigned": {
@@ -579,7 +577,6 @@ class StripUnsignedFromEventsTestCase(unittest.TestCase):
            "depth": 1000,
            "origin_server_ts": 1,
            "type": "m.room.power_levels",
-           "origin": "test.servx",
            "content": {},
            "auth_events": [],
            "unsigned": {


@@ -1080,6 +1080,62 @@ class SpaceSummaryTestCase(unittest.HomeserverTestCase):
        self.assertEqual(federation_requests, 2)
        self._assert_hierarchy(result, expected)

+   def test_fed_remote_room_hosts(self) -> None:
+       """
+       Test if requested room is available over federation using via's.
+       """
+       fed_hostname = self.hs.hostname + "2"
+       fed_space = "#fed_space:" + fed_hostname
+       fed_subroom = "#fed_sub_room:" + fed_hostname
+       remote_room_hosts = (fed_hostname,)
+
+       requested_room_entry = _RoomEntry(
+           fed_space,
+           {
+               "room_id": fed_space,
+               "world_readable": True,
+               "join_rule": "public",
+               "room_type": RoomTypes.SPACE,
+           },
+           [
+               {
+                   "type": EventTypes.SpaceChild,
+                   "room_id": fed_space,
+                   "state_key": fed_subroom,
+                   "content": {"via": [fed_hostname]},
+               }
+           ],
+       )
+       child_room = {
+           "room_id": fed_subroom,
+           "world_readable": True,
+           "join_rule": "public",
+       }
+
+       async def summarize_remote_room_hierarchy(
+           _self: Any, room: Any, suggested_only: bool
+       ) -> Tuple[Optional[_RoomEntry], Dict[str, JsonDict], Set[str]]:
+           return requested_room_entry, {fed_subroom: child_room}, set()
+
+       expected = [
+           (fed_space, [fed_subroom]),
+           (fed_subroom, ()),
+       ]
+
+       with mock.patch(
+           "synapse.handlers.room_summary.RoomSummaryHandler._summarize_remote_room_hierarchy",
+           new=summarize_remote_room_hierarchy,
+       ):
+           result = self.get_success(
+               self.handler.get_room_hierarchy(
+                   create_requester(self.user),
+                   fed_space,
+                   remote_room_hosts=remote_room_hosts,
+               )
+           )
+       self._assert_hierarchy(result, expected)

class RoomSummaryTestCase(unittest.HomeserverTestCase):
    servlets = [


@@ -1181,7 +1181,7 @@ class BundledAggregationsTestCase(BaseRelationsTestCase):
            bundled_aggregations,
        )

-       self._test_bundled_aggregations(RelationTypes.REFERENCE, assert_annotations, 6)
+       self._test_bundled_aggregations(RelationTypes.REFERENCE, assert_annotations, 8)

    def test_thread(self) -> None:
        """
@@ -1226,21 +1226,21 @@ class BundledAggregationsTestCase(BaseRelationsTestCase):
        # The "user" sent the root event and is making queries for the bundled
        # aggregations: they have participated.
-       self._test_bundled_aggregations(RelationTypes.THREAD, _gen_assert(True), 6)
+       self._test_bundled_aggregations(RelationTypes.THREAD, _gen_assert(True), 9)

        # The "user2" sent replies in the thread and is making queries for the
        # bundled aggregations: they have participated.
        #
        # Note that this re-uses some cached values, so the total number of
        # queries is much smaller.
        self._test_bundled_aggregations(
-           RelationTypes.THREAD, _gen_assert(True), 3, access_token=self.user2_token
+           RelationTypes.THREAD, _gen_assert(True), 6, access_token=self.user2_token
        )

        # A user with no interactions with the thread: they have not participated.
        user3_id, user3_token = self._create_user("charlie")
        self.helper.join(self.room, user=user3_id, tok=user3_token)
        self._test_bundled_aggregations(
-           RelationTypes.THREAD, _gen_assert(False), 3, access_token=user3_token
+           RelationTypes.THREAD, _gen_assert(False), 6, access_token=user3_token
        )

    def test_thread_with_bundled_aggregations_for_latest(self) -> None:
@@ -1287,7 +1287,7 @@ class BundledAggregationsTestCase(BaseRelationsTestCase):
            bundled_aggregations["latest_event"].get("unsigned"),
        )

-       self._test_bundled_aggregations(RelationTypes.THREAD, assert_thread, 6)
+       self._test_bundled_aggregations(RelationTypes.THREAD, assert_thread, 9)

    def test_nested_thread(self) -> None:
        """